Dataset record fields: halid, lang, domain, timestamp, year, url, text, size, authorids, affiliations.
halid: 01753415
lang: en
domain: [ "info" ]
timestamp: 2024/03/05 22:32:10
year: 2018
url: https://hal.science/hal-01753415/file/ACC2017_last-minute.pdf
Nohemi Alvarez-Jarquin, Antonio Loría (email: antonio.loria@lss.supelec.fr), José Luis Avila (email: javila@centrogeo.edu.mx)

Consensus under switching spanning-tree topologies and persistently exciting interconnections

We study the consensus problem for networks with changing communication topology and time-dependent communication links. That is, the network changes in two dimensions: "geographical" and "temporal". We establish that consensus is reached provided that there always exists a spanning tree for a minimal dwell-time and the interconnection gains are persistently exciting. Our main result covers the particular case, studied in the literature, of one fixed topology with time-varying interconnections, but also that of changing topologies with reliable interconnections during a dwell-time. Another originality of our work lies in the method of proof, which is based on stability theory for time-varying and switched systems. Simulations on an academic example are provided to illustrate our theoretical results.

I. INTRODUCTION

In spite of the considerable bulk of literature on consensus analysis, the problem for systems with changing topologies and time-varying interconnections has been little studied; some recent works include [START_REF] Chowdhury | Persistence based convergence rate analysis of consensus protocols for dynamic graph networks[END_REF], [START_REF] Kumar | Consensus analysis of systems with time-varying interactions : An event-triggered approach[END_REF], [START_REF] Chowdhury | On the estimation of algebraic connectivity in graphs with persistently exciting interconnections[END_REF], [START_REF] Maghenem | Lyapunov functions for persistently-excited cascaded time-varying systems: application in consensus analysis[END_REF]. This problem, however, is of great interest to researchers from several disciplines due to the multiple applications related to networked multiagent systems: satellite formation flying [START_REF] Carpenter | Decentralized control of satellite formations[END_REF], [START_REF] Sarlette | Cooperative attitude synchronization in satellite swarms: a consensus approach[END_REF], coupled oscillators, formation-tracking control for mobile robots [START_REF] Dasdemir | A simple formation-tracking controller of mobile robots based on a "spanning-tree" communication[END_REF], and air traffic control [START_REF] Tomlin | Conflict resolution for air traffic management: A study in multiagent hybrid systems communication[END_REF], just to mention a few. These applications justify the design of appropriate consensus protocols to drive all dynamic agents to a common value. The consensus problem consists in establishing conditions under which the differences between any two motions among a group of dynamic systems converge to zero asymptotically. In [START_REF] Ren | Consensus seeking in multi-agent systems under dynamically changing interaction topologies[END_REF] the authors consider multiple agents in the presence of limited and unreliable information exchange with changing communication topologies; the analysis relies on graph theory, and consensus may be established if the union of the directed interaction graphs has a spanning tree. In [START_REF] Olfati-Saber | Consensus problems in networks of agents with switching topology and time-delays[END_REF] directed networks with switching topology are treated as a hybrid system. A common Lyapunov function allows one to establish convergence of an agreement protocol for this system.
The authors of [START_REF] Gong | Average consensus in networks of dynamic agents with switching topologies and multiple time-varying delays[END_REF] study the consensus problem in undirected networks of dynamic agents with fixed and switching topologies. Using Lyapunov theory, it is shown that all the nodes in the network achieve consensus asymptotically for appropriate communication delays, if the network topology graph is connected. In [START_REF] Xiao | State consensus for multi-agent systems with switching topologies and time-variant delays[END_REF] the authors address the consensus problem for discrete-time multiagent systems with changing communication topologies and bounded time-varying communication delays. In this paper, we consider the consensus problem for networks of dynamic systems interconnected in a directed graph through time-varying links. In contrast to the related literature, see for instance [START_REF] Kim | Consensus of output-coupled linear multi-agent systems under fast switching network: Averaging approach[END_REF], [START_REF] You | Consensus condition for linear multi-agent systems over randomly switching topologies[END_REF], we assume that the network's graph is time-varying with respect to two time-scales. Firstly, we assume that the interconnection topology changes; that is, agent A may communicate with B over one period of time and with C over another (dwell-)time interval. Secondly, in clear contrast with the literature, we assume that during a dwell-time in which the topology is fixed, the communication links are not reliable; that is, they vary with time and, in particular, they may fail at random. The necessary and sufficient condition is that each interconnection gain is, separately, persistently exciting. Persistency of excitation covers, in particular, random signals with positive mean (offset). This is also in contrast to conditions based on excitation of an "averaged" graph Laplacian -- cf. [START_REF] Kim | Consensus of output-coupled linear multi-agent systems under fast switching network: Averaging approach[END_REF]. Thus, the problem we analyze covers both the case of a fixed topology with time-varying interconnections and that of switching topologies with reliable interconnections. In the following section we present our main results. For clarity of exposition, we first present an auxiliary result on consensus under a fixed spanning-tree topology with time-varying, reliable interconnections. Then, we show that the switched-topology problem may be approached using stability theory for switched linear time-varying systems. In Section III we present an example of three agents whose interconnection topology changes among six possible configurations. Concluding remarks are provided in Section IV.

II. MAIN RESULTS

A. Problem statement

Consider N dynamic agents

\Psi_\lambda : \ \dot{x}_\lambda = u_\lambda, \qquad \lambda = 1, 2, \dots, N, \qquad (1)

where u_\lambda represents a protocol of interconnection. The most common continuous consensus protocol, under an all-to-all communication assumption, is given by -- see [START_REF] Olfati-Saber | Consensus problems in networks of agents with switching topology and time-delays[END_REF], [START_REF] Xiao | State consensus for multi-agent systems with switching topologies and time-variant delays[END_REF] --

u_\lambda(t, x) = -\sum_{\kappa=1}^{N} a_{\lambda\kappa}(t)\,\bigl(x_\lambda(t) - x_\kappa(t)\bigr), \qquad (2)

where a_{\lambda\kappa} is the (\lambda, \kappa) entry of the adjacency matrix and x_\lambda is the information state of the \lambda-th agent.
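To make protocol (2) concrete, the following minimal Python sketch (not part of the original paper; the square-wave gain shapes, the periods and the Euler step are illustrative assumptions) integrates three single-integrator agents of the form (1)-(2) with non-negative, persistently exciting gains and prints the final disagreement, which should be close to zero.

```python
# Minimal sketch (not from the paper): consensus protocol (2) for N single integrators
# dx_l/dt = -sum_k a_{lk}(t) (x_l - x_k), with time-varying, persistently exciting gains.
import numpy as np

N, dt, T_end = 3, 1e-3, 30.0
steps = int(T_end / dt)

def gain(t, period, offset):
    # Placeholder persistently exciting gain: a non-negative square wave with positive mean.
    return offset * (1.0 + np.sign(np.sin(2 * np.pi * t / period)))

x = np.array([-2.0, 1.5, -0.5])           # initial states (borrowed from the example of Section III)
periods = np.array([[0.0, 0.25, 0.8],      # placeholder periods for each gain a_{lk}
                    [0.25, 0.0, 0.2],
                    [0.8, 0.2, 0.0]])

for s in range(steps):
    t = s * dt
    u = np.zeros(N)
    for l in range(N):
        for k in range(N):
            if l != k and periods[l, k] > 0:
                u[l] += -gain(t, periods[l, k], 0.5) * (x[l] - x[k])
    x = x + dt * u                          # forward-Euler integration of (1)-(2)

print("final spread:", x.max() - x.min())   # should be close to zero (consensus)
```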
The system (1), (2) reaches consensus if, for every initial condition, all the states reach a common value as t tends to infinity. The consensus problem has been thoroughly studied both for constant and for time-varying interconnections, mostly under the assumption of an all-to-all communication topology. It is also well known, from graph theory, that the existence of a spanning tree is necessary and sufficient to reach consensus. In the case that the interconnections are time-varying, a similar result was established in [START_REF] Moreau | Stability of continuous-time distributed consensus algorithms[END_REF], based on the assumption that, roughly speaking, there exists an average spanning tree. In this paper we analyze consensus under time-varying topologies; as opposed to the more traditional graph-theory-based analysis [START_REF] Ren | Distributed consensus in multivehicle cooperative control[END_REF], we adopt a stability-theory approach.

B. The network model

With little loss of generality, let us consider the following consensus protocol

u_\lambda = \begin{cases} -a_{\lambda\,\lambda+1}(t)\,\bigl(x_\lambda(t) - x_{\lambda+1}(t)\bigr), & \lambda \in \{1, \dots, N-1\},\\ 0, & \lambda = N, \end{cases} \qquad (3)

where a_{\lambda\,\lambda+1} \ge 0 and is strictly positive whenever information flows from the (\lambda+1)th node to the \lambdath node. This protocol leads to a spanning-tree configuration topology; the closed-loop equations are

\dot{x}_1 = -a_{12}(t)\,(x_1 - x_2), \quad \dots, \quad \dot{x}_\lambda = -a_{\lambda\,\lambda+1}(t)\,(x_\lambda - x_{\lambda+1}), \quad \dots, \quad \dot{x}_{N-1} = -a_{N-1\,N}(t)\,(x_{N-1} - x_N), \quad \dot{x}_N = 0. \qquad (4)

In a leader-follower configuration, the Nth node may be considered as a "swarm master" with its own dynamics. For simplicity, here we consider it to be static. It is clear that there are many other possible spanning-tree configurations; the one shown above is the conventional one. Actually, there exists a total of N! spanning-tree configurations; for instance, for a group of three agents there exist six possible spanning-tree configuration topologies, which determine six different sequences {Ψ_3, Ψ_2, Ψ_1}, {Ψ_2, Ψ_1, Ψ_3}, etc. -- see Figure 1.

[Figure 1: the six three-agent spanning-tree topologies: i = 1: Ψ_1 ←a_12 Ψ_2 ←a_23 Ψ_3; i = 2: Ψ_2 ←a_21 Ψ_1 ←a_13 Ψ_3; i = 3: Ψ_3 ←a_31 Ψ_1 ←a_12 Ψ_2; i = 4: Ψ_1 ←a_13 Ψ_3 ←a_32 Ψ_2; i = 5: Ψ_2 ←a_23 Ψ_3 ←a_31 Ψ_1; i = 6: Ψ_3 ←a_32 Ψ_2 ←a_21 Ψ_1.]

Thus, to determine the N! possible spanning-tree communication topologies among N agents, we introduce the following notation. For each k ≤ N we define a function π_k which takes integer values in {1, ..., N}. We also introduce the sequence of agents {Ψ_{π_k}}_{k=1}^N with the following properties: 1) every agent Ψ_λ is in the sequence; 2) no repetitions of agents in the sequence are allowed; 3) the root agent is labeled Ψ_{π_N} and it communicates with the agent Ψ_{π_{N-1}}; the latter is the parent of Ψ_{π_{N-2}}, and so on down to the leaf agent Ψ_{π_1}. That is, the information flows, with interconnection gain a_{π_k π_{k+1}}(t) ≥ 0, from the agent Ψ_{π_{k+1}} to the agent Ψ_{π_k}. The subindex k represents the position of the agent Ψ_{π_k} in the sequence. Note that any sequence {Ψ_{π_1}, Ψ_{π_2}, ..., Ψ_{π_{N-1}}, Ψ_{π_N}} of the agents may be represented as a spanning-tree topology, as depicted in Figure 2.

[Figure 2: a generic spanning-tree topology Ψ_{π_1} ←a_{π_1π_2}(t) Ψ_{π_2} ← ⋯ ← Ψ_{π_{N-1}} ←a_{π_{N-1}π_N}(t) Ψ_{π_N}.]

Thus, in general, each possible fixed topology labeled i ∈ {1, ..., N!} is generated by a protocol of the form (3), which we write as

u^i_{\pi_k} = \begin{cases} -a^i_{\pi_k \pi_{k+1}}(t)\,\bigl(x_{\pi_k} - x_{\pi_{k+1}}\bigr), & k \in \{1, \dots, N-1\},\\ 0, & k = N, \end{cases} \qquad (5)
where k denotes the position of the agent Ψ_λ in the sequence {Ψ_{π_k}}_{k=1}^N and π_k represents which agent Ψ_λ is in position k, that is, π_k = λ. Under (5), the system (1) takes the form

\dot{x}_{di} = -L_i(t)\,x_{di}, \qquad i \in \{1, \dots, N!\}, \qquad (6)

where to each topology i ≤ N! corresponds a state vector x_{di} = [x_{π_1}, x_{π_2}, \dots, x_{π_N}]^\top, which contains the states of all interconnected agents in a distinct order, depending on the topology. For instance, referring to Figure 1, for i = 1 we have x_{d1} = [x_1, x_2, x_3]^\top, while for i = 4, x_{d4} = [x_1, x_3, x_2]^\top. Accordingly, to each topology we associate a distinct Laplacian matrix L_i(t), which is given by

L_i(t) := \begin{bmatrix} a^i_{\pi_1\pi_2}(t) & -a^i_{\pi_1\pi_2}(t) & 0 & \cdots & 0\\ 0 & a^i_{\pi_2\pi_3}(t) & -a^i_{\pi_2\pi_3}(t) & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & a^i_{\pi_{N-1}\pi_N}(t) & -a^i_{\pi_{N-1}\pi_N}(t)\\ 0 & \cdots & \cdots & 0 & 0 \end{bmatrix}. \qquad (7)

Since any of the N! configurations is a spanning tree, which is a necessary and sufficient condition for consensus, all configurations may be considered equivalent, in some sense, to the first topology, i.e., that with i = 1. As a convention, for the purpose of analysis we denote the state of the latter by x = [x_1, x_2, \dots, x_N]^\top and refer to it as the ordered topology; see Figure 3. It is clear (at least intuitively) that consensus of all systems (6) is equivalent to that of \dot{x} = -L_1(t)\,x, where

L_1(t) := \begin{bmatrix} a_{12}(t) & -a_{12}(t) & 0 & \cdots & 0\\ 0 & a_{23}(t) & -a_{23}(t) & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & a_{N-1\,N}(t) & -a_{N-1\,N}(t)\\ 0 & \cdots & \cdots & 0 & 0 \end{bmatrix}. \qquad (8)

More precisely, the linear transformation from a "disordered" vector x_{di} to the ordered vector x is defined via a permutation matrix P_i, that is,

x_{di} = P_i\, x \qquad (9)

with P_i ∈ R^{N×N} defined as

P_i = \begin{bmatrix} E_{\pi_1}\\ E_{\pi_2}\\ \vdots\\ E_{\pi_N} \end{bmatrix}, \qquad i \in \{1, \dots, N!\}, \qquad (10)

where the row E_{\pi_k} = [0, 0, \dots, 1, \dots, 0] has its only nonzero entry, equal to 1, in the π_k-th position. The permutation matrix P_i is nonsingular, with P_i^{-1} = P_i^\top [START_REF] Horn | Matrix Analysis[END_REF]. For instance, relative to Figure 1 we have x_{d2} = [x_2, x_1, x_3]^\top and

P_2 = \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}.

In order to study the consensus problem for (6), for any i, it is necessary and sufficient to study that of any one configuration topology. Moreover, we may do so by studying the error dynamics corresponding to the differences between any pair of states.
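As a quick numerical check of the reordering defined by (9)-(10), the short sketch below (illustrative only; the helper name permutation_matrix is an assumption, not notation from the paper) builds P_2 for the i = 2 topology of Figure 1, applies it to an ordered state vector, and verifies that the inverse of a permutation matrix is its transpose.

```python
# Sketch mirroring (9)-(10): build the permutation matrix P_i from the sequence
# (pi_1, ..., pi_N) and apply it to the ordered state vector x.
import numpy as np

def permutation_matrix(pi):
    # pi[k] = index of the agent occupying position k (1-based, as in the paper)
    N = len(pi)
    P = np.zeros((N, N))
    for k, agent in enumerate(pi):
        P[k, agent - 1] = 1.0              # row E_{pi_k}: single 1 in the pi_k-th column
    return P

x = np.array([1.0, 2.0, 3.0])              # ordered state x = [x_1, x_2, x_3]
P2 = permutation_matrix([2, 1, 3])         # topology i = 2 of Figure 1: sequence (2, 1, 3)

print(P2 @ x)                              # x_{d2} = [x_2, x_1, x_3]
print(np.allclose(np.linalg.inv(P2), P2.T))  # permutation matrices satisfy P^{-1} = P^T
```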
C. Fixed topology

For clarity of exposition we start with a preliminary result which applies to the case of a fixed but arbitrary topology -- cf. [START_REF] Ren | Distributed consensus in multivehicle cooperative control[END_REF], Theorem 2.33. In view of the previous discussion, without loss of generality, we focus on the study of the ordered topology depicted in Figure 3. Consensus may be established using an argument on the stability of cascaded systems. To see this, let z_1 denote the vector of ordered errors corresponding to this first topology, that is, z_{1\lambda} := x_\lambda - x_{\lambda+1} for all λ ∈ {1, ..., N-1}. Then, the systems in (6) with i = 1 reach consensus if and only if the origin of

\dot{z}_{11} = -a_{12}(t)\,z_{11} + a_{23}(t)\,z_{12}, \quad \dot{z}_{12} = -a_{23}(t)\,z_{12} + a_{34}(t)\,z_{13}, \quad \dots, \quad \dot{z}_{1\,N-1} = -a_{N-1\,N}(t)\,z_{1\,N-1} \qquad (11)

is (globally) uniformly asymptotically stable. In a fixed topology we have a_{\lambda,\lambda+1}(t) > 0 for all t ≥ 0; that is, the λth node in the sequence always receives information from its parent, labeled λ + 1, albeit with varying intensity. The origin of the decoupled bottom equation, which corresponds to the dynamics of the root node, is uniformly exponentially stable if a_{N-1\,N}(t) > 0 for all t. Each of the subsystems in (11), from the bottom to the top, is input-to-state stable. Uniform exponential stability of the origin {z = 0} follows provided that a_{\lambda\,\lambda+1} is bounded. In compact form, the consensus dynamics becomes

\dot{z}_1 = A_1(t)\,z_1, \qquad z_1 = [z_{11} \;\cdots\; z_{1\,N-1}]^\top, \qquad (12)

where the matrix A_1(t) ∈ R^{(N-1)×(N-1)} is defined as

A_1(t) = \begin{bmatrix} -a_{12}(t) & a_{23}(t) & 0 & \cdots & 0\\ 0 & -a_{23}(t) & a_{34}(t) & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & -a_{N-2\,N-1}(t) & a_{N-1\,N}(t)\\ 0 & \cdots & \cdots & 0 & -a_{N-1\,N}(t) \end{bmatrix}. \qquad (13)

Lemma 1: Let

\dot{\Phi}(t; t_\circ) = A_1(t)\,\Phi(t; t_\circ), \qquad \Phi(t_\circ; t_\circ) = I_{N-1}, \qquad \forall\, t \ge t_\circ > 0. \qquad (14)

Assume that, for every i ∈ {1, ..., N-1}, a_{i\,i+1} is a bounded persistently exciting signal; that is, there exist T_i and µ_i > 0 such that

\int_t^{t+T_i} a_{i\,i+1}(s)\,ds \ge \mu_i \qquad \forall\, t \ge 0. \qquad (15)

Then, there exist \bar\alpha > 0 and \alpha > 0 such that

\|\Phi(t; t_\circ)\| \le \bar\alpha\, e^{-\alpha (t - t_\circ)}, \qquad \forall\, t \ge t_\circ \ge 0. \qquad (16)

Proof: Note that the solution of the differential equation (14) is given by \Phi(t; t_\circ) = [\phi_{ij}(t; t_\circ)], where the entries \phi_{ij} satisfy (17). For each j = i + 1 such that i < N-1, the third integral in (17) depends on \phi_{ii} and \phi_{i+1\,i+1}, which are bounded by \bar{k}_i e^{-k_i(t-s)} and \bar{k}_{i+1} e^{-k_{i+1}(t-s)}, respectively. Consequently,

\phi_{ij}(t; t_\circ) = 0 \ \text{for } i > j, \qquad |\phi_{ij}(t; t_\circ)| \le \frac{\bar{k}_i \bar{k}_j\, |a_{i+1,i+2}|_\infty}{|k_i - k_j|}\, e^{-\min\{k_j, k_i\}(t - t_\circ)}, \qquad (18)

where, by assumption, |a_{i+1,i+2}|_\infty is bounded. Thus, all the entries of \Phi(t; t_\circ) are bounded in norm by a decaying exponential.

D. Time-varying topology

In this section we study the more general case in which not only the interconnection gains are time-varying, as in the previous section and in [START_REF] Ren | Distributed consensus in multivehicle cooperative control[END_REF], but the topology may also be chosen at random, as long as there always exists a spanning tree which lasts for at least a dwell-time. For the purpose of analysis we aim at identifying, with each possible topology, a linear time-varying system of the form (12) with a stable origin, and at establishing stability of the resulting switched system. To that end, let i determine one among the N! topologies schematically represented by a graph as shown in Figure 2. Let x_λ denote the state of system Ψ_λ; then, for the ith topology, we define the error

z_i = [z_{i1} \;\cdots\; z_{i\,N-1}]^\top, \qquad (19)

z_{ik} = x_{\pi_k} - x_{\pi_{k+1}}, \qquad k \in \{1, \dots, N-1\}, \qquad (20)

where k denotes the graphical position of the agent Ψ_λ in the sequence {Ψ_{π_k}}_{k=1}^N and π_k represents which agent Ψ_λ is in position k, that is, π_k = λ.

Example 1: Consider two of the topologies shown in Figure 1, represented in more detail in Figure 4 (for i = 1) and Figure 5 (for i = 4). In the first case we have

z_{11} = x_{\pi_1} - x_{\pi_2} = x_1 - x_2, \qquad z_{12} = x_{\pi_2} - x_{\pi_3} = x_2 - x_3,

whereas in the second case, when i = 4,

z_{21} = x_{\pi_1} - x_{\pi_2} = x_1 - x_3, \qquad z_{22} = x_{\pi_2} - x_{\pi_3} = x_3 - x_2.

That is, for each topology i the dynamics of the interconnected agents is governed by the equation

\dot{z}_i = A_i(t)\,z_i, \qquad (21)

where
A_i(t) := \begin{bmatrix} -a^i_{\pi_1\pi_2}(t) & a^i_{\pi_2\pi_3}(t) & 0 & \cdots & 0\\ \vdots & \ddots & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & -a^i_{\pi_{N-2}\pi_{N-1}}(t) & a^i_{\pi_{N-1}\pi_N}(t)\\ 0 & \cdots & \cdots & 0 & -a^i_{\pi_{N-1}\pi_N}(t) \end{bmatrix}. \qquad (22)

According to Lemma 1, the origin {z_i = 0} is uniformly globally exponentially stable provided that a_{\pi_k\pi_{k+1}}(t) is strictly positive for all t. It is clear that consensus follows if the origin {z_i = 0} of any of the systems (21) (with i fixed for all t) is uniformly exponentially stable. Actually, there exist \alpha_i and \bar\alpha_i such that

|z_i(t)| \le \bar\alpha_i\, e^{-\alpha_i t}, \qquad \forall\, t \ge 0. \qquad (23)

Observing that all the systems (21) are equivalent up to a linear transformation, our main result establishes consensus under the assumption that the topology changes, provided that there exists a minimal dwell-time. Indeed, the coordinates z_i are related to z_1 by the transformation

z_i = W_i\, z_1, \qquad (24)

where W_i := T P_i T^{-1}, P_i is defined in (10), T ∈ R^{(N-1)×N} is given by

T = \begin{bmatrix} 1 & -1 & 0 & \cdots & 0 & 0\\ 0 & 1 & -1 & \cdots & 0 & 0\\ \vdots & & \ddots & \ddots & & \vdots\\ 0 & \cdots & 0 & 1 & -1 & 0\\ 0 & \cdots & 0 & 0 & 1 & -1 \end{bmatrix} \qquad (25)

and T^{-1} ∈ R^{N×(N-1)} denotes a right inverse of T. Note that the matrix W_i ∈ R^{(N-1)×(N-1)} is invertible for each i ≤ N!, since each of its rows consists of a linear combination of two different rows of T^{-1}, which contains N-1 linearly independent rows. Actually, using (24) in (21) we obtain

\dot{z}_1 = \bar{A}_i(t)\,z_1, \qquad (26)

where

\bar{A}_i(t) := W_i^{-1} A_i(t) W_i. \qquad (27)

We conclude that

|z_1(t)| \le \hat\alpha_i\, e^{-\alpha_i t}, \qquad \hat\alpha_i := \|W_i^{-1}\|\,\bar\alpha_i, \qquad \forall\, t \ge 0. \qquad (28)

Based on this fact we may now state the following result for the switched error systems, which model the network of systems with switching topology.

Lemma 2: Consider the switched system

\dot{z}_1 = \bar{A}_{\sigma(t)}(t)\,z_1 \qquad (29)

with σ : R_{\ge 0} → {1, ..., N!} and, for each i ∈ {1, ..., N!}, \bar{A}_i defined in (27). Let the dwell time satisfy

\tau_d > \frac{\ln\bigl(\prod_{i=1}^{N!} \hat\alpha_i\bigr)}{\sum_{i=1}^{N!} \alpha_i}. \qquad (30)

Then, the equilibrium {z_1 = 0} of (26) is uniformly globally exponentially stable for any switching sequence {t_p} such that t_{p+1} - t_p > \tau_d for every switching time t_p.

Proof: Let t_p be an arbitrary switching instant. For all t ≥ t_p such that σ(t) = i we have

\|z_1(t)\| \le \hat\alpha_i\, e^{-\alpha_i (t - t_p)}\, \|z_1(t_p)\|, \qquad \forall\, t_p \le t < t_{p+1}. \qquad (31)

Since by hypothesis t_p + \tau_d ∈ [t_p, t_{p+1}), from (31) we have

\|z_1(t_p + \tau_d)\| \le \hat\alpha_i\, e^{-\alpha_i \tau_d}\, \|z_1(t_p)\|. \qquad (32)

Using the continuity of both the norm function and the state z(t), we have

\|z_1(t_{p+1})\| \le \|z_1(t_p + \tau_d)\| \qquad (33)

and therefore

\|z_1(t_{p+1})\| \le \hat\alpha_i\, e^{-\alpha_i \tau_d}\, \|z_1(t_p)\|. \qquad (34)

Note that, to guarantee asymptotic stability of (29), it is sufficient that for every pair of switching times t_p and t_q

\|z_1(t_q)\| - \|z_1(t_p)\| < 0 \qquad (35)

whenever p < q and σ(t_p) = σ(t_q). Now consider the sequence of switching times t_p, t_{p+1}, ..., t_{p+N!-1}, t_{p+N!} satisfying σ(t_p) ≠ σ(t_{p+1}) ≠ ... ≠ σ(t_{p+N!-1}) and σ(t_p) = σ(t_{p+N!}), which corresponds to a switching signal under which all the N! topologies are visited. From (34) it follows that

\|z_1(t_{p+N!})\| \le \Bigl(\prod_{i=1}^{N!} \hat\alpha_i\Bigr)\, e^{-\bigl(\sum_{i=1}^{N!} \alpha_i\bigr)\tau_d}\, \|z_1(t_p)\|. \qquad (36)

To ensure that

\|z_1(t_{p+N!})\| - \|z_1(t_p)\| < 0 \qquad (37)

it is sufficient that

\Bigl(\prod_{i=1}^{N!} \hat\alpha_i\Bigr)\, e^{-\bigl(\sum_{i=1}^{N!} \alpha_i\bigr)\tau_d} - 1 < 0. \qquad (38)

Therefore, since the norm is a non-negative function, we obtain

\Bigl(\prod_{i=1}^{N!} \hat\alpha_i\Bigr)\, e^{-\bigl(\sum_{i=1}^{N!} \alpha_i\bigr)\tau_d} < 1 \qquad (39)

and the proof follows.

Finally, in view of Lemma 2 we can make the following statement.

Theorem 1: Let {t_p} denote a sequence of switching instants, p ∈ Z_{\ge 0}, and let σ : R_{\ge 0} → {1, ..., N!} be a piecewise-constant function satisfying σ(t) ≡ i for all t ∈ [t_p, t_{p+1}), with t_{p+1} - t_p ≥ \tau_d and \tau_d satisfying (30). Consider the system (1) in closed loop with

u^{\sigma(t)}_{\pi_k} = \begin{cases} -a^{\sigma(t)}_{\pi_k \pi_{k+1}}(t)\,\bigl(x_{\pi_k} - x_{\pi_{k+1}}\bigr), & k \in \{1, \dots, N-1\},\\ 0, & k = N. \end{cases} \qquad (40)

Let the interconnection gains a^i_{\lambda\kappa}, for all i ∈ {1, ..., N!} and all λ, κ ∈ {1, ..., N-1}, be persistently exciting. Then, the system reaches consensus with uniform exponential convergence.
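Condition (30) is easy to evaluate numerically once per-topology bounds (\hat\alpha_i, \alpha_i) are available. The sketch below (illustrative; it simply plugs in the values reported later in Table II) computes the right-hand side of (30) and recovers a threshold close to the \tau_d > 7.92 used in Section III.

```python
# Sketch: evaluate the dwell-time bound (30) from per-topology exponential bounds.
import numpy as np

alpha_hat = np.array([6.51, 10.26, 6.11, 12.85, 4.71, 3.98])  # overshoot constants (Table II)
alpha = np.array([0.2, 0.3, 0.4, 0.1, 0.35, 0.1])              # convergence rates (Table II)

tau_d_min = np.log(np.prod(alpha_hat)) / np.sum(alpha)
print(f"dwell time must satisfy tau_d > {tau_d_min:.2f}")       # ~7.9, consistent with Section III
```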
III. EXAMPLE

For illustration, we consider a network of three agents, hence with six possible topologies, as shown in Figure 1. The information exchange among agents in each topology is ensured via channels with persistently exciting communication intensity; the corresponding parameters are shown in Table I. The graphs of the interconnection gains are shown in Figures 6 and 7. By applying Lemma 1, we can compute \hat\alpha_i and \alpha_i for each topology i; see Table II. Substituting the values of \hat\alpha_i and \alpha_i into (30), we find that the dwell time must satisfy \tau_d > 7.92. We performed some numerical simulations using Simulink (Matlab). In a first test, the initial conditions are set to x_1(0) = -2, x_2(0) = 1.5 and x_3(0) = -0.5; the switching signal σ(t) is illustrated in Figure 8. The systems' trajectories, converging to a consensus equilibrium, are shown in Figure 9. It is worth mentioning, however, that the dwell-time condition (30) is only a sufficient condition for stability. In Figure 10 we show the graph of a switching signal which does not respect the dwell-time condition and yet all the states converge to a common value -- see Figure 11.

IV. CONCLUSIONS

We provided the convergence analysis of a consensus problem for a network of integrators with directed information flow under time-varying topology. Our analysis relies on stability theory for time-varying and switched systems. We established a minimal dwell-time condition on the switching signal.

Fig. 1. Example of 3 agents where, by changing their positions, we obtained six possible topologies.
Fig. 2. A spanning-tree topology with time-dependent communication links between Ψ_{π_k} and Ψ_{π_{k+1}}.
Fig. 3. A spanning-tree topology with time-dependent communication links between Ψ_{π_k} and Ψ_{π_{k+1}}.
Fig. 4. A topology with 3 agents where π_1 = 1, π_2 = 2 and π_3 = 3.
Fig. 5. The second topology with 3 agents where π_1 = 1, π_2 = 3 and π_3 = 2.
Fig. 6. Persistently exciting interconnection gains for the topologies {Ψ_1, Ψ_2, Ψ_3}, {Ψ_2, Ψ_1, Ψ_3} and {Ψ_3, Ψ_1, Ψ_2}.
Fig. 7. Persistently exciting interconnection gains for the topologies {Ψ_1, Ψ_3, Ψ_2}, {Ψ_2, Ψ_3, Ψ_1} and {Ψ_3, Ψ_2, Ψ_1}.
Fig. 9. Trajectories of x_1, x_2 and x_3.
Fig. 10. The switching signal σ(t), which does not satisfy the dwell-time condition.
Fig. 11. Trajectories of x_1, x_2 and x_3.

Table I. Parameters of the interconnection gains.
i = 1: a_12(t): T = 0.25, µ = 0.5;  a_23(t): T = 0.2, µ = 1
i = 2: a_21(t): T = 2.0, µ = 1.6;  a_13(t): T = 0.8, µ = 0.2
i = 3: a_31(t): T = 0.3, µ = 0.1;  a_12(t): T = 0.7, µ = 0.6
i = 4: a_13(t): T = 2, µ = 1;  a_32(t): T = 4, µ = 0.4
i = 5: a_23(t): T = 0.4, µ = 0.1;  a_31(t): T = 0.5, µ = 0.4
i = 6: a_32(t): T = 0.5, µ = 0.3;  a_21(t): T = 4.2, µ = 1.8
Table II. Parameters corresponding to the exponential bounds.
i:              1      2      3      4      5      6
\hat\alpha_i:   6.51   10.26  6.11   12.85  4.71   3.98
\alpha_i:       0.2    0.3    0.4    0.1    0.35   0.1
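For completeness, the following sketch is an illustrative reconstruction of the three-agent experiment of Section III (it is not the authors' Simulink model; the square-wave gain shapes are an assumption that merely respects the (T, µ) pairs of Table I). It simulates the switched network under protocol (40) with a dwell time of 8 s and prints the final disagreement, which should be close to zero.

```python
# Sketch: three agents under switching spanning-tree topologies with PE gains.
# Topology i is the chain x_{pi_1} <- x_{pi_2} <- x_{pi_3}; gains are placeholder
# square waves whose (T, mu) pairs are taken from Table I.
import numpy as np

# (pi_1, pi_2, pi_3) and the (T, mu) pairs of the two links, for i = 1..6 (Figure 1, Table I)
topologies = [((1, 2, 3), (0.25, 0.5), (0.2, 1.0)),
              ((2, 1, 3), (2.0, 1.6), (0.8, 0.2)),
              ((3, 1, 2), (0.3, 0.1), (0.7, 0.6)),
              ((1, 3, 2), (2.0, 1.0), (4.0, 0.4)),
              ((2, 3, 1), (0.4, 0.1), (0.5, 0.4)),
              ((3, 2, 1), (0.5, 0.3), (4.2, 1.8))]

def pe_gain(t, T, mu):
    # Square wave of period T whose integral over any window of length T is at least mu.
    return (2.0 * mu / T) if np.sin(2.0 * np.pi * t / T) > 0.0 else 0.0

x = np.array([-2.0, 1.5, -0.5])      # initial conditions of the first test in Section III
dt, tau_d = 1e-3, 8.0                # dwell time above the 7.92 threshold
t_final = 6 * 4 * tau_d              # visit each topology four times

t = 0.0
while t < t_final:
    i = int(t // tau_d) % 6          # periodic switching signal sigma(t) with dwell tau_d
    (p1, p2, p3), (T1, mu1), (T2, mu2) = topologies[i]
    dx = np.zeros(3)
    dx[p1 - 1] = -pe_gain(t, T1, mu1) * (x[p1 - 1] - x[p2 - 1])
    dx[p2 - 1] = -pe_gain(t, T2, mu2) * (x[p2 - 1] - x[p3 - 1])
    x = x + dt * dx                   # the root agent x_{pi_3} keeps dx = 0
    t += dt

print("final disagreement:", x.max() - x.min())
```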
size: 24,808
authorids: [ "9516" ]
affiliations: [ "531331", "1289" ]
halid: 01753785
lang: en
domain: [ "sdv" ]
timestamp: 2024/03/05 22:32:10
year: 2014
url: https://hal.science/hal-01753785/file/article.pdf
E. Potier, N. C. Rivron, C. A. van Blitterswijk, K. Ito (email: k.ito@tue.nl)

Micro-aggregates do not influence bone marrow stromal cell chondrogenesis

Keywords: bone marrow stromal cells, mesenchymal stem cells, chondrogenesis, cell-cell interactions, micro-aggregates, hydrogel

Although bone marrow stromal cells (BMSCs) appear promising for cartilage repair, current clinical results are suboptimal and the success of BMSC-based therapies relies on a number of methodological improvements, among which is a better understanding and control of their differentiation pathways. We investigated here the role of the cellular environment (paracrine vs juxtacrine signalling) in the chondrogenic differentiation of BMSCs. Bovine BMSCs were encapsulated in alginate beads, as dispersed cells or as small micro-aggregates, to create different paracrine and juxtacrine signalling conditions. BMSCs were then cultured for 21 days with TGFβ3 added for 0, 7 or 21 days. Chondrogenic differentiation was assessed at the gene (type II and X collagens, aggrecan, TGFβ, sp7) and matrix (biochemical assays and histology) levels. The results showed that micro-aggregates had no beneficial effects over dispersed cells: matrix production was similar, whereas chondrogenic marker gene expression was lower for the micro-aggregates, under all TGFβ conditions tested. This weakened chondrogenic differentiation might be explained by a different cytoskeleton organization at day 0 in the micro-aggregates.

Introduction

Articular hyaline cartilage only possesses a limited self-repair capacity. Most tissue damage, caused either by wear and tear or by trauma, will not heal but will be replaced by fibrocartilage. This tissue has inferior biochemical and biomechanical properties compared with native hyaline cartilage, altering the function of the joint and ultimately leading to severe pain [START_REF] Ahmed | Strategies for articular cartilage lesion repair and functional restoration[END_REF][START_REF] Nesic | Cartilage tissue engineering for degenerative joint disease[END_REF]. Surgical approaches are commonly proposed to promote the healing of cartilage damage. They present, however, several limitations linked to the cell/tissue source and may lead to the formation of fibrocartilage rather than hyaline cartilage [START_REF] Ahmed | Strategies for articular cartilage lesion repair and functional restoration[END_REF]Khan et al., 2010). Alternative sources of cells/tissues are therefore needed to regenerate cartilage damage. One of the most promising sources is bone marrow stromal cells (BMSCs) [START_REF] Gregory | Non-hematopoietic bone marrow stem cells: molecular control of expansion and differentiation[END_REF]Khan et al., 2010;[START_REF] Krampera | Mesenchymal stem cells for bone, cartilage, tendon and skeletal muscle repair[END_REF][START_REF] Prockop | Marrow stromal cells as stem cells for nonhematopoietic tissues[END_REF]. As these cells are isolated from the bone marrow, no cartilage tissue harvesting is required and the tissue source is exempt from degenerative cartilage disease. BMSCs also possess a high proliferative rate, allowing the regeneration of large defects. Many studies have established that BMSCs can differentiate in vitro into chondrocytes (Muraglia et al., 2000;[START_REF] Halleux | Multi-lineage potential of human mesenchymal stem cells following clonal expansion[END_REF][START_REF] Pittenger | Multilineage potential of adult human mesenchymal stem cells[END_REF].
The patient's condition can affect BMSC proliferation and differentiation: age and osteoarthritis have been reported to reduce the chondrogenic potential of BMSCs [START_REF] Murphy | Reduced chondrogenic and adipogenic activity of mesenchymal stem cells from patients with advanced osteoarthritis[END_REF]; although other studies report that these factors do not influence BMSC chondrogenesis [START_REF] Dudics | Chondrogenic potential of mesenchymal stem cells from patients with rheumatoid arthritis and osteoarthritis: measurements in a microculture system[END_REF][START_REF] Scharstuhl | Chondrogenic potential of human adult mesenchymal stem cells is independent of age or osteoarthritis etiology[END_REF]. BMSCs have been used to repair cartilage lesions in numerous animal models [START_REF] Guo | Repair of large articular cartilage defects with implants of autologous mesenchymal stem cells seeded into β-tricalcium phosphate in a sheep model[END_REF][START_REF] Uematsu | Cartilage regeneration using mesenchymal stem cells and a three-dimensional polylactic-glycolic acid (PLGA) scaffold[END_REF] but also in humans [START_REF] Wakitani | Human autologous culture expanded bone marrow mesenchymal cell transplantation for repair of cartilage defects in osteoarthritic knees[END_REF][START_REF] Nejadnik | Autologous bone marrow-derived mesenchymal stem cells versus autologous chondrocyte implantation: an observational cohort study[END_REF]. Although the results are promising, the repair tissues were not completely composed of hyaline cartilage [START_REF] Matsumoto | Articular cartilage repair with autologous bone marrow mesenchymal cells[END_REF]. However, it is the general belief that, with further advancements, BMSC-based therapies will eventually be helpful in clinics. One important direction for improvement is to better understand and control the differentiation pathway leading BMSCs to fully differentiated and functional chondrocytes. For many years, BMSC differentiation has been induced by various cocktails of biochemical factors [START_REF] Augello | The regulation of differentiation in mesenchymal stem cells[END_REF] and more and more evidence indicates that their biomechanical environment can control their differentiation [START_REF] Potier | Directing bone marrow-derived stromal cell function with mechanics[END_REF]. Other than applied exogenous stimulation, direct communication of cells with their environment can also affect their behaviour. For example, cells can respond to different substrate stiffness, for BMSCs, by adapting their differentiation pathways [START_REF] Engler | Matrix elasticity directs stem cell lineage specification[END_REF][START_REF] Pek | The effect of matrix stiffness on mesenchymal stem cell differentiation in a 3D thixotropic gel[END_REF] or, for chondrocytes, their chondrogenic phenotype [START_REF] Sanz-Ramos | Response of sheep chondrocytes to changes in substrate stiffness from2 to 20 Pa: effect of cell passaging[END_REF][START_REF] Schuh | Effect of matrix elasticity on the maintenance of the chondrogenic phenotype[END_REF]. However, so far, few studies have focused on the relationship between cell-cell communication and BMSC differentiation, when adhesion of cells to each other may also provide important clues to control BMSCs, as shown with the osteogenic differentiation pathway [START_REF] Tang | The regulation of stem cell differentiation by cell-cell contact on micropatterned material surfaces[END_REF]. 
The aim of this study was, therefore, to modulate the cell-cell interactions between BMSCs and evaluate the impact on BMSC in vitro chondrogenesis. In order to create different cell-cell interactions, BMSCs were seeded into hydrogel either as dispersed cells, where interactions rely on paracrine signalling, or as micro-aggregates, where interactions rely on paracrine and juxtacrine signalling. Micro-aggregates, rather than micromass, were used to promote cell-cell contact locally. Indeed, it has been shown that micromass culture, used to mimic the condensation phenomenon of mesenchymal cells during development [START_REF] Bobick | Regulation of the chondrogenic phenotype in culture[END_REF], leads to heterogeneous distribution of the cartilaginous matrix [START_REF] Barry | Chondrogenic differentiation of mesenchymal stem cells from bone marrow: differentiation-dependent gene expression of matrix components[END_REF][START_REF] Mackay | Chondrogenic differentiation of cultured human mesenchymal stem cells from marrow[END_REF][START_REF] Schmitt | BMP2 initiates chondrogenic lineage development of adult human mesenchymal stem cells in highdensity culture[END_REF][START_REF] Murdoch | Chondrogenic differentiation of human bone marrow stem cells in Transwell cultures: generation of scaffold-free cartilage[END_REF], most likely due to mass-transport limitations within the micromass. Downscaling from micromass (200 000-250 000 cells) to micro-aggregates (50-300 cells) should overcome these mass-transport issues. In fact, micro-aggregate culture has already been shown to be superior to micromass for BMSC chondrogenesis, with a more homogeneous differentiation and matrix deposition observed [START_REF] Markway | Enhanced chondrogenic differentiation of human bone marrow-derived mesenchymal stem cells in low oxygen environment micropellet cultures[END_REF]. Finally, the effects of cell-cell interactions on BMSC chondrogenesis could be attenuated by the presence of exogenous growth factors (e.g. TGFβ3) in the culture medium. We therefore used different patterns of TGFβ stimulation (0, 7 or 21 days) to assess the influence of different cellular environments [dispersed cells (DC) vs micro-aggregates (MA)] on BMSC chondrogenesis.

Materials and Methods

Bovine BMSC isolation and expansion

Bovine BMSCs were isolated from three cows (8-12 months old, all skeletally immature), in accordance with local regulations. Bone marrow was aspirated from the pelvis and immediately mixed 1:1 with high-glucose (4.5 g/l) Dulbecco's modified Eagle's medium (hgDMEM; Gibco Invitrogen, Carlsbad, CA, USA) supplemented with 100 U/ml heparin (Sigma, Zwijndrecht, The Netherlands) and 3% penicillin-streptomycin (Lonza, Basel, Switzerland). Bone marrow samples were then centrifuged (300 × g, 5 min) and resuspended in growth medium: hgDMEM + 10% fetal bovine serum (FBS; Gibco Invitrogen; batch selected for BMSC growth and differentiation) + 1% penicillin-streptomycin. BMSCs were isolated by adhesion [START_REF] Friedenstein | The development of fibroblast colonies in monolayer cultures of guineapig bone marrow and spleen cells[END_REF][START_REF] Kon | Autologous bone marrow stromal cells loaded onto porous hydroxyapatite ceramic accelerate bone repair in critical size defects of sheep long bones[END_REF][START_REF] Potier | Hypoxia affects mesenchymal stromal cell osteogenic differentiation and angiogenic factor expression[END_REF].
Cells were seeded in flasks (using 7-10 ml medium:bone marrow mix per 75 cm²) and, after 4 days, the medium was changed. BMSCs were then expanded up to P1 (passaged at 5,000 cells/cm²) before freezing [70-80% confluence; in 90% FBS/10% dimethylsulphoxide (Sigma)]. A fresh batch of BMSCs was thawed and cultured up to P4 for each experiment (each passage at 5,000 cells/cm²). Cells from each donor were cultured separately. Bovine BMSCs (n = 4) isolated and expanded following these protocols showed successful chondrogenesis using the micromass approach [START_REF] Johnstone | In vitro chondrogenesis of bone marrow-derived mesenchymal progenitor cells[END_REF] (as shown with safranin O staining).

Production of agarose chips

Custom-made PDMS stamps, with a microstructured surface consisting of 2865 rounded pins with a diameter of 200 μm and a spacing of 100 μm, were produced. The stamps were sterilized with alcohol and placed in a six-well plate, microstructured surface up. Warm ultra-pure agarose solution [Gibco Invitrogen; 3% in phosphate-buffered saline (PBS)] was poured on the stamps, centrifuged for 1 min at 2,500 rpm and incubated for 30 min at 4°C. The agarose chips were then separated from the stamps, cut to size to fit in a well of a 12-well plate, covered with PBS and kept at 4°C until use [START_REF] Rivron | Tissue deformation spatially modulates VEGF signaling and angiogenesis[END_REF].

Formation of micro-aggregates and alginate seeding

At passage 5, BMSCs were used to seed: (a) alginate beads (dispersed cells; DC); or (b) agarose chips (micro-aggregates; MA) (Figure 1). For the DC condition, BMSCs were resuspended in 1.2% sodium alginate (Sigma) solution (in 0.9% NaCl; Merck, Darmstadt, Germany) at a concentration of 7 × 10⁶ cells/ml. The cell + alginate suspension was slowly forced through a 22G needle and added dropwise to a 102 mM CaCl2 (Merck) solution [START_REF] Guo | Culture and growth characteristics of chondrocytes encapsulated in alginate beads[END_REF][START_REF] Jonitz | Differentiation capacity of human chondrocytes embedded in alginate matrix[END_REF]. Beads were incubated for 10 min at 37°C to polymerize and were then rinsed three times in 0.9% NaCl and twice in hgDMEM + 1% penicillin-streptomycin. For the MA condition, BMSCs were resuspended in growth medium at 2 × 10⁵ cells/ml and 750 μl cell suspension was used per agarose chip (with PBS previously removed) to produce the micro-aggregates. Seeded chips were centrifuged for 1 min at 200 × g to force the cells to the bottom of the microwells; 3 ml growth medium was then slowly added and the cells were cultured for an additional 3 days in growth medium to allow cell aggregation. Micro-aggregates were then collected by flushing the agarose chips with growth medium, and used to seed alginate beads at a final concentration of 7 × 10⁶ cells/ml, as described for the DC condition.

Culture

Seeded beads (with either DC or MA) were cultured for 3 weeks in Ch-medium [hgDMEM + 1% penicillin-streptomycin + 0.1 μM dexamethasone (Sigma) + 1% ITS-1+ (Sigma) + 1.25 mg/ml bovine serum albumin (BSA; Sigma) + 50 μg/ml ascorbic acid 2-phosphate (Sigma) + 40 μg/ml L-proline (Sigma) + 100 μg/ml sodium pyruvate (Gibco Invitrogen)] [START_REF] Mackay | Chondrogenic differentiation of cultured human mesenchymal stem cells from marrow[END_REF].
This medium was supplemented with 10 ng/ml TGFβ3 (Peprotech, Rocky Hill, NJ, USA) [START_REF] Barry | Chondrogenic differentiation of mesenchymal stem cells from bone marrow: differentiation-dependent gene expression of matrix components[END_REF] (Ch(+) medium) for 0, 7 or 21 days (Figure 1). BMSCs were cultured under 5% CO2 and 2% O2 [START_REF] Markway | Enhanced chondrogenic differentiation of human bone marrow-derived mesenchymal stem cells in low oxygen environment micropellet cultures[END_REF]; six beads were cultured per well of a six-well plate containing 3 ml medium.

Cell viability

At days 0 and 21, beads (n = 3 beads/donor/group) were washed in PBS and incubated in 10 μM calcein AM (Sigma)/10 μM propidium iodide (Gibco Invitrogen) solution (in PBS) for 1 h at 37°C. Cells were then imaged in the centres of the beads at a depth of 200 μm, using a confocal microscope (CLSM 510 Meta, Zeiss, Sliedrecht, The Netherlands).

Cell morphology and adhesion

At days 0 and 21, beads (n = 3 beads/donor/group) were embedded in cryo-compound (Tissue-Tek® OCT™; Sakura, Alphen aan den Rijn, The Netherlands) and snap-frozen in liquid nitrogen; 50 μm-thick cryosections were cut in the middle of the beads. The sections were then thawed, fixed for 30 min at room temperature (RT) in 3.7% buffered formalin (Merck), rinsed in PBS and incubated for 5 min at RT in 1.5% Triton in PBS. The sections were rinsed in PBS and stained with TRITC-phalloidin (Sigma; 1 μM in PBS + 1% BSA) for 2 h at RT. The sections were then rinsed in PBS, counterstained with DAPI for 15 min at RT (Sigma; 100 ng/ml in PBS), rinsed in PBS and MilliQ water, air-dried and mounted in Entellan (Merck). The stained sections were observed using a confocal microscope. Morphometric analyses to determine cluster areas and numbers of cells/cluster were conducted on these images, using Zen 2012 software (Zeiss). For each group, 25 clusters or cells were analysed. Stained clusters or cells were manually outlined and the corresponding area determined. Cell numbers/cluster were also counted manually. Immunostaining for vinculin and pan-cadherin was conducted on 10 μm-thick cryosections. The sections were thawed, fixed for 10 min at RT in 3.7% buffered formalin, rinsed in PBS and incubated for 10 min at RT in 0.5% Triton in PBS. After blocking in 3% BSA for 1 h, the sections were incubated for 1 h at RT with monoclonal mouse anti-vinculin antibodies (Sigma), diluted 1:400, or with monoclonal mouse anti-cadherin antibodies (Abcam, Cambridge, UK), diluted 1:100, in 3% BSA. The sections were then washed three times in PBS and incubated for 1 h at 38°C with Alexa 488-conjugated goat anti-mouse antibodies (Molecular Probes, Bleiswijk, The Netherlands), diluted 1:300 in PBS. The stained sections were then rinsed three times and mounted with Mowiol. For both stainings, human cardiomyocyte progenitor cells grown on coverslips were used as a positive control. Both antibodies are known to work with bovine material.

Cartilaginous matrix formation and cell proliferation

At days 0 and 21, five beads/donor and group were pooled and digested in papain solution [150 mM NaCl (Merck), 789 μg/ml L-cysteine (Sigma), 5 mM Na2EDTA·2H2O (Sigma), 55 mM Na3citrate·2H2O (Sigma) and 125 μg/ml papain (Sigma)] at 60°C for 16 h. Digested samples were then used to determine their content of sulphated glycosaminoglycans (sGAG), as a measure of proteoglycans, and DNA.
sGAG content was determined using the dimethyl methylene blue (DMMB) assay, adapted for the presence of alginate [START_REF] Enobakhare | Quantification of sulfated glycosaminoglycans in chondrocyte/alginate cultures, by use of 1,9dimethylmethylene blue[END_REF]. Shark cartilage chondroitin sulphate (Sigma) was used as a reference and digested with empty alginate beads (i.e. alginate concentration identical for references and experimental samples). DNA content was measured using the Hoechst dye method [START_REF] Cesarone | Improved microfluorometric DNA determination in biological material using 33258 Hoechst[END_REF], with a calf thymus DNA reference (Sigma). For the 7 days of TGFβ3-treatment group, the beads were also analysed at day 7. At days 0 and 21, beads (n = 3 beads/donor/group) were also embedded in cryo-compound and snap-frozen in liquid nitrogen; 10 μm-thick cryosections were cut in the middle of the beads. The sections were then thawed, incubated for 5 min in 0.1 M CaCl2 at RT and fixed in 3.7% buffered formalin for 3 min at RT. The sections were then rinsed in 3% glacial acetic acid (Merck) and stained in Alcian blue solution (Sigma; 1%, pH 1.0, adapted for the presence of alginate) for 30 min at 37°C. The sections were then rinsed in 0.05 M CaCl2 and counterstained with nuclear fast red solution (Sigma) for 7 min at RT. The stained sections were rinsed in 0.05 M CaCl2 before mounting in Mowiol (Merck) and were observed using a brightfield microscope (Observer Z1, Zeiss).

Gene expression

At days 0 and 21, nine beads/donor and group were pooled, snap-frozen in liquid nitrogen and stored at -80°C until RNA isolation. Frozen beads were placed between a 316 stainless-steel 8 mm bead and a custom-made lid in a 2 ml Eppendorf tube, and were disrupted for 30 s at 1,500 rpm (Micro-dismembrator; Sartorius, Göttingen, Germany). RNA was then extracted using TRIzol® (Gibco Invitrogen) and purified using an RNeasy mini-kit (Qiagen, Venlo, The Netherlands). The quantity and purity of the isolated RNA were measured by spectrophotometry (ND-1000, Isogen, De Meern, The Netherlands) and its integrity by gel electrophoresis. Absence of genomic DNA was validated by endpoint PCR and gel electrophoresis using primers for glyceraldehyde 3-phosphate dehydrogenase (GAPDH). Total RNA (300 ng) was then reverse-transcribed (M-MLV; Gibco Invitrogen) and the gene expression levels of sox9, aggrecan, type II collagen, TGFβ, type X collagen and sp7 (also known as Osterix) were assessed with SYBR green qPCR (iCycler; Biorad, Hercules, CA, USA) (see Table 1 for the primer list). 18S (PrimerDesign Ltd, Southampton, UK) was selected as the reference gene from three candidates (RPL13A, GAPDH and 18S) as the most stable gene throughout our experimental conditions. Expression of each gene of interest is reported relative to 18S expression (2^(-ΔCT) method). When gene expression was not detected, the 2^(-ΔCT) value was set to 0 to conduct the statistical analysis. For the 7 days of TGFβ3 treatment group, beads were also analysed at day 7.

Statistical analysis

General linear regression models based on ANOVAs were used to examine the effects of seeding (DC and MA), TGFβ treatment (0, 7 and 21 days) and days of culture (days 0, 7 and 21), and their interactions, on the variables DNA and GAG/DNA contents and sox9, type II collagen, aggrecan, TGFβ, type X collagen and sp7 gene expression. In all analyses, full factorial models were fitted to the data and then a backwards stepwise procedure was used to remove the non-significant effects. For each significant effect, a Tukey-HSD post hoc test was conducted; p < 0.05 was considered significant. All data analyses were performed in R v. 2.9.0 (R Development Core Team, 2009).
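The 2^(-ΔCT) normalisation and the factorial ANOVA with Tukey-HSD post hoc tests described above can be sketched in a few lines. The example below is an illustrative Python analogue (the study itself used R v. 2.9.0); the column names and the randomly generated values are assumptions, not data from the experiments.

```python
# Illustrative sketch of the analysis pipeline (the study itself used R):
# relative expression by the 2^(-dCT) method, then a factorial ANOVA with Tukey-HSD.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# --- 2^(-dCT): gene of interest normalised to the 18S reference gene ----------------
def relative_expression(ct_gene, ct_18s):
    d_ct = np.asarray(ct_gene) - np.asarray(ct_18s)
    return 2.0 ** (-d_ct)

# --- factorial ANOVA (seeding x TGFb x day) on a hypothetical tidy table ------------
# Assumed columns: 'value' (e.g. GAG/DNA or 2^-dCT), 'seeding' (DC/MA),
# 'tgfb' (0/7/21 days of TGFb3) and 'day' (day of culture).
df = pd.DataFrame({
    "value":   np.random.lognormal(size=36),
    "seeding": np.tile(["DC", "MA"], 18),
    "tgfb":    np.repeat([0, 7, 21], 12),
    "day":     np.tile(np.repeat([0, 21], 6), 3),
})

model = smf.ols("value ~ C(seeding) * C(tgfb) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                    # full factorial model
print(pairwise_tukeyhsd(df["value"], df["seeding"]))      # post hoc test, p < 0.05
```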
Results

Cell viability and proliferation

BMSCs showed high cell viability after seeding for all conditions (Figure 2A,E). At day 0, cells appeared as a well-dispersed cell population for DC conditions (Figure 2A) or as dense micro-aggregates for MA conditions (Figure 2E). However, some dead cells could be observed around the micro-aggregates (Figure 2E), most likely due to a higher shear stress exerted on micro-aggregates than on dispersed cells when producing the alginate beads. At day 21, cell viability remained high for all conditions (Figure 2B-D, F-H). DNA content confirmed that cells proliferated (Figure 2I). DC conditions, however, led to higher proliferation than MA conditions.

Cell morphology and adhesion

At day 0, the DC condition led to single dispersed cells (Figure 3A,I), while MA resulted in large clusters (Figure 3E,I) containing, on average, 14 cells (Figure 3J). In DC conditions, after 21 days of culture BMSCs proliferated (Figure 3J) and formed clusters (Figure 3B-D) whose size increased when TGFβ was added, although not significantly (Figure 3I). In MA conditions (Figure 3F-H), micro-aggregates grew during culture, with the largest clusters observed for 7 days of TGFβ (Figure 3I), although cell proliferation was limited (Figure 3J). At day 0, cell-cell interactions were more developed in MA than in DC, as shown by the immunostaining of pan-cadherins (Figure 4D and A, respectively), which are glycoproteins involved in cell-cell adhesion. These improved cell-cell interactions, however, disappeared after 3 weeks of culture (Figure 4E). TGFβ treatment had no effect on cadherin expression for either DC or MA (data not shown). Regarding vinculin, a membrane-cytoskeletal protein involved in cell-matrix adhesion, its expression was similar for DC and MA (Figure 4G and J, respectively), with a dispersed localization of vinculin over the cell surface. At day 21, vinculin condensed into focal adhesions. Its distribution seemed similar for DC and MA (Figure 4H and K, respectively) and was not influenced by the different TGFβ stimulation patterns (data not shown).

Matrix production

In all conditions, proteoglycans (PGs) were deposited (Figure 5A-H), demonstrating successful chondrogenic differentiation of the BMSCs. For both DC and MA conditions, prolonging exposure to TGFβ increased the PG production, as confirmed by a quantitative assay (Figure 5I). However, no differences between DC and MA could be detected. In both conditions, PGs appeared to be concentrated within the clusters (for DC) or the micro-aggregates (for MA), filling the void spaces previously observed.

Gene expression

Levels of gene expression of chondrogenic markers (sox9, a transcription factor involved in early chondrogenesis; type II collagen and aggrecan, both main components of cartilage matrix) increased at day 21 for all conditions (Figure 6A-C). In MA conditions, however, type II collagen and aggrecan mRNA expression were inhibited compared with DC conditions at day 21 (Figure 6B,C). Seven days of TGFβ treatment led to the highest levels of expression of all chondrogenic markers for both MA and DC conditions (Figure 6A-C). At day 0, MA upregulated TGFβ gene expression compared with DC, but this higher level of expression disappeared at day 21 under all TGFβ stimulation conditions (Figure 6D).
Transient TGFβ stimulation

For the 7 days of TGFβ treatment, chondrogenic marker gene expression and matrix production were also evaluated at day 7. The results showed that BMSCs had already started to differentiate at that point. All chondrogenic markers (sox9, aggrecan, type II collagen) were already highly upregulated at the gene level (Figure 7B-D). PGs were also produced at day 7 (Figure 7A), but in a limited amount. PG content and type II collagen expression were significantly higher at day 21 than at day 7, indicating that the cells were still progressing along the chondrogenic pathway although TGFβ had been withdrawn.

Hypertrophy and osteogenic differentiation

mRNA expression of both type X collagen (Figure 8A), an indicator of chondrocyte hypertrophy, and sp7 (Figure 8B), a transcription factor involved in early osteogenesis, increased at day 21 for the DC condition, while only type X collagen expression increased in the MA condition. For both conditions and genes, the highest level of expression at day 21 was for the 7 days of TGFβ treatment. Whereas type X collagen levels of expression were similar for the MA and DC conditions, sp7 mRNA expression was upregulated at day 0 for the MA condition (Figure 8B).

Discussion

In summary, these results show that BMSCs underwent (partial) chondrogenic differentiation for all conditions tested (PG production and increased chondrogenic marker expression). MA, however, did not perform better (matrix production) or even as well (gene expression) as DC under all the TGFβ stimulation patterns tested. Nonetheless, these data show that small micro-aggregates can be successfully integrated into a hydrogel (alginate). Although cell death was slightly higher at day 0 (Figure 2E), BMSCs in MA survived up to 21 days (Figure 2) and differentiated into chondrocytes, with the deposition of a PG-rich matrix within the MA (Figure 5H) and substantial upregulation of sox9, type II collagen and aggrecan gene expression (Figure 6A-C). The increase of TGFβ gene expression at day 0 in MA compared with DC (Figure 6D) suggests an early stimulant effect of MA, perhaps due to improved juxtacrine signalling (cell-cell contact) rather than paracrine signalling (limited distance between neighbouring cells), as cell-cell contact was improved at day 0 in the MA condition, as shown by pan-cadherin staining. This upregulation, however, was lost at day 21 and, more importantly, was not translated into enhanced chondrogenic matrix production (Figure 5) or gene expression (Figure 6), even if no exogenous TGFβ was added. One explanation for this absence of effects may be the disappearance, during the 3 weeks of culture, of the cell-cell interactions observed at day 0 (Figure 4). BMSCs in MA most likely lost contact with each other (as shown by pan-cadherin staining) due to extracellular matrix production between the cells, but formed new junctions (as shown by vinculin staining) with this matrix (Figure 5). The lack of effects of TGFβ gene expression upregulation may also be explained by a low translation efficiency or by post-transcriptional regulatory mechanisms.
Several studies comparing genomic and proteomic analyses report, indeed, moderate correlation between mRNA and protein expression [START_REF] Chen | Discordant protein and mRNA expression in lung adenocarcinomas[END_REF][START_REF] Huber | Comparison of proteomic and genomic analyses of the human breast cancer cell line T47D and the anti-estrogen-resistant derivative T47D-r[END_REF][START_REF] Oberemm | Toxicogenomic analysis of N-nitrosomorpholine induced changes in rat liver: comparison of genomic and proteomic responses and anchoring to histopathological parameters[END_REF][START_REF] Tian | Integrated genomic and proteomic analyses of gene expression in mammalian cells[END_REF]. Another explanation for the absence of effects of upregulated TGFβ expression might be that BMSCs are not sensitive to the levels of TGFβ they are producing, either because these levels are too low or because BMSCs are less sensitive in MA. Cytoskeleton organization, indeed, has been shown to modulate cell sensitivity to TGFβ. Disorganization of the microfilaments in rabbit articular chondrocytes after treatment with dihydrocytochalasin B enhanced the sensitivity of the cells to TGFβ (increased PG and collagen synthesis) [START_REF] Benya | Dihydrocytochalasin B enhances transforming growth factor-β-induced re-expression of the differentiated chondrocyte phenotype without stimulation of collagen synthesis[END_REF]. In the present study, BMSCs at day 0 displayed more organized microfilaments in MA cells than in the round cells of the DC condition (Figure 3A/E). This difference in cytoskeleton organization may also explain why MA are not upregulating chondrogenic gene expression as well as DC under transient and continuous TGFβ treatment (Figure 6). Although no significant differences were observed at the matrix level (Figure 5), our data support results observed with bovine articular chondrocytes in hydrogel, where small micro-aggregates (5-18 cells) inhibit chondrocyte biosynthesis compared to dispersed cells [START_REF] Albrecht | Probing the role of multicellular organization in threedimensional microenvironments[END_REF]. Distribution of PGs, however, was quite distinct between the two conditions, with a more evenly distributed matrix for DC (Figure 5). As the cell concentration was similar at day 0 for both conditions, DC resulted in a more dispersed and homogeneous distribution of cells (Figure 2), which could account for a more even distribution of the matrix produced by the BMSCs. Still, MA might be more potent for the osteogenic differentiation of BMSCs. In fact, MA (not embedded into a hydrogel) have already been shown to promote osteogenic differentiation of human BMSCs compared to 2D culture (increased calcium deposition and osteogenic gene expression) [START_REF] Kabiri | 3D mesenchymal stem/stromal cell osteogenesis and autocrine signalling[END_REF]. In the present study, we observed an upregulation of sp7, a transcription factor involved in early osteogenic differentiation, in MA at day 0 compared to DC (Figure 8). This suggests a positive influence of the MA on the osteogenic differentiation pathway. The absence of factors required for osteogenic differentiation, such as FBS or β-glycerophosphate, during culture, however, probably nullifies this influence, and additional experiments need to be conducted to assess the potential of MA to promote BMSC osteogenesis. Contrary to MA, BMSCs in the DC condition proliferated during the 3 weeks of culture. 
This absence of significant proliferation in MA already containing several cells (Figure 3) may be explained by contact inhibition, present in the MA but not in the DC. At day 21, cloned DC spontaneously formed clusters. Although these structures appeared similar to the MA, they were smaller and contained fewer cells (Figure 3). Recreating and amplifying this natural process of cloning and clustering in the MA, however, did not exert any substantial effect on BMSC differentiation, suggesting that cell-cell interactions are not required for initiating chondrogenesis. These results also confirm that bovine BMSCs can spontaneously differentiate toward the chondrogenic lineage without the presence of TGFβ (PG production and increased sox9, type II collagen and aggrecan gene expression; Figure 5, Figure 6, Figure 7) when cultured in hydrogel and serum-free conditions, as previously reported for bovine BMSCs in micromass culture [START_REF] Bosnakovski | Gene expression profile of bovine bone marrow mesenchymal stem cell during spontaneous chondrogenic differentiation in pellet culture system[END_REF]. Seven days of TGFβ treatment were enough to enhance the production of cartilaginous matrix, as shown previously with human BMSCs [START_REF] Buxton | Temporal exposure to chondrogenic factors modulates human mesenchymal stem cell chondrogenesis in hydrogels[END_REF], but, surprisingly, gave the highest upregulation of chondrogenic marker expression (Figure 6) for both MA and DC. However, the transient TGFβ stimulation also led to higher expression of type X collagen (a marker of chondrocyte hypertrophy). Hence, continuous stimulation with TGFβ resulted in a more stable chondrogenic phenotype; it also led to the highest matrix production (Figure 5). Conclusions on the (absence of) effects of MA on BMSC chondrogenesis, however, are only valid for the cell concentration and hydrogel tested here. Using a lower concentration may dilute paracrine signalling in the DC condition and, therefore, diminish the chondrogenic differentiation of BMSCs. [START_REF] Buxton | Temporal exposure to chondrogenic factors modulates human mesenchymal stem cell chondrogenesis in hydrogels[END_REF] have already evaluated the influence of cell concentration on the chondrogenesis of BMSCs seeded into a hydrogel. They reported a maximal PG/collagen synthesis per cell for concentrations in the range 12.5-25 million cells/ml. Lower concentrations led to lower matrix production, indicating the involvement of paracrine signalling in BMSC chondrogenesis. With the concentration used here (7 million cells/ml), paracrine signalling should be diluted in the DC condition, and so MA could have had a beneficial effect by locally increasing this paracrine signalling. As no positive effect was found for the MA, it seems that cell-cell contact or cytoskeleton organization exerts a stronger negative effect on BMSC chondrogenesis than the positive effect of paracrine signalling. Moreover, the effect of MA on BMSC chondrogenesis has only been tested here in alginate and could, therefore, be an artifact of that system. The previous observation that micro-aggregates inhibit chondrogenesis of bovine chondrocytes seeded in a photo-polymerizable hydrogel [START_REF] Albrecht | Probing the role of multicellular organization in threedimensional microenvironments[END_REF], when compared with dispersed cells, tends to indicate, however, that the negative effects observed here were not an artifact of alginate.
Another limitation of the study may be the use of exogenous TGFβ if it is the endogenous molecular agent involved in juxtacrine signaling. In this case, adding TGFβ to the culture medium may have overpowered any increase of TGFβ expression present in MA, but not in DC, conditions. Such a beneficial effect, however, should have been observed when the BMSCs were cultured without exogenous TGFβ, when no differences between MA and DC were observed (Figure 5, Figure 6, 0 days TGFβ group). Nonetheless, if TGFβ had been involved in cellular signaling after BMSC differentiation, bone morphogenic protein 2 (BMP2) could have been used to induce BMSC chondrogenic differentiation instead [START_REF] Schmitt | BMP2 initiates chondrogenic lineage development of adult human mesenchymal stem cells in highdensity culture[END_REF]. This study provides important clues about the communication of BMSCs with their environment, where cell-cell interaction seems to have a limited involvement in their (chondrogenic) differentiation. Although DC cloned and spontaneously formed clusters, accelerating and amplifying this process with the MA did not provide beneficial effects. This suggests that influencing cellmatrix, rather than cell-cell, interactions may be a more potent tool to control BMSC differentiation, at least for the chondrogenic pathway. To conclude, this study shows that micro-aggregates, although potentially promoting cell-cell contacts and improving paracrine signaling, have no beneficial effects on bovine BMSC chondrogenesis in alginate. Bovine BMSCs (n = 3) were expanded up to P4 in hgDMEM + 10% FBS + 1% P/S (growth medium). Cells were then used to seed either alginate beads at 7 million cells/ml (dispersed cells; DC) or agarose chips cast on PDMS stamps (micro-aggregates; MA). BMSCs on agarose chips were cultured for 3 additional days in growth medium to allow the cells to form micro-aggregates. Those were then used to seed alginate beads at 7 million cells/ml. After seeding in alginate, BMSCs (DC or MA) were cultured for 3 weeks in hgDMEM + 1%P/S + 0.1 μM dexamethasone + 1% ITS-1+ + 1.25 mg/ml BSA + 50 μg/ml ascorbic acid 2-phosphate + 40 μg/ml L-proline + 100 μg/ml sodium pyruvate (Ch-medium). This medium was supplemented with 10 ng/ml TGFβ3 (Ch+ medium) for 0, 7 or 21 days. At D0 and D21, cell viability was characterized by live/dead staining; cell morphology by histology (phalloidin, antivinculin and anti-pan-cadherin staining); produced matrix by histology (Alcian blue staining); biochemical assays [glycosaminoglycan (GAG) and DNA content]; and cell phenotype was characterized by qRT-PCR (types II and X collagens, sox9, aggrecan, TGFβ and sp7). Gene expression of type X collagen (A) and sp7 (B), as determined by qRT-PCR; expression is relative to 18S reference gene (2 -ΔCT method). Values are mean+ SD (NB: logarithmic y axis, and error bars are also logarithmic); * p < 0.05 vs D0; # p < 0.05 vs 7 days of TGFβ3 treatment; @ p < 0.05 vs dispersed cells. Table 1. Primer sequences for target and reference genes used in RT-qPCR assays RPL13a, ribosomal protein L13a; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; SOX9, SRY (sex determining region Y)-box 9; COL2A1, collagen type II α1; ACAN, aggrecan; TGFB1, transforming growth factor-β1; COL10A1, collagen type X α1; SP7, Sp7 transcription factor. *GenBank™ accession number. **BD, primers designed with Beacon designer software (Premier Biosoft, Palo Alto, CA, USA) and ordered from Sigma. Figure 1 . 1 Figure 1. Experimental design. Figure 2 . 
2 Figure 2. Cell viability and proliferation.(A-H) Bovine BMSCs seeded in alginate beads as dispersed cells (A-D) or as micro-aggregates (E-H) at days 0 (A, E) and 21 after exposure to TGFβ3 for 0 (B, F), 7 (C, G), and 21 (D, H) days. Cells were stained with calcein AM (green fluorescence) for living cells and propidium iodide (red fluorescence) for dead cells. White frames are ×2.5 digital magnification; representative of three donors/group; scale bar = 100 μm; colour images are available online. (I) DNA content/bead, as determined with Hoechst dye assay; values are mean ± SD; n = 3/group. * p < 0.05 vs D0; @ p < 0.05 vs dispersed cells. Figure 3 . 3 Figure 3. Cell morphology. (A-H) Bovine BMSCs seeded in alginate beads as dispersed cells (A-D) or micro-aggregates (E-H) at day 0 (A, E) and day 21 after exposure to TGFβ3 for 0 (B, F), 7 (C, G) and 21 (D, H) days. The beads were cryosectioned, fixed and stained with phalloidin (red fluorescence) for F-actin filaments and counterstained with DAPI (blue fluorescence) for cell nuclei; representative of three donors/group; scale bar = 20 μm; colour images available online. (I, J) Morphometric analysis: area covered by cells or clusters (I) and cell number/cluster (J) were determined by image analysis; values are mean ± SD; n = 25 clusters/cells analysed/ group. * p < 0.05 vs D0; # p < 0.05 vs 7 days of TGFβ3 treatment; $ p < 0.05 vs 21 days of TGFβ3 treatment; @ p < 0.05 vs dispersed cells. Figure 4 . 4 Figure 4. Cell adhesion.(A-F) Bovine BMSCs seeded in alginate beads as dispersed cells (A, B) or as micro-aggregates (D, E) at day 0 (A, D) and after 21 days of exposure to TGFβ3 (B, E). The beads were cryosectioned, fixed and stained with anti-pan-cadherin (green and counterstained with DAPI (blue fluorescence) for cell nuclei. Human cardiomyocyte progenitor cells were used as positive controls (C) and experimental samples for secondary antibody negative control (F). (G-L) Bovine BMSCs seeded in alginate beads as dispersed cells (G, H) or as micro-aggregates (J, K) at day 0 (G, I) and after 21 days of exposure to TGFβ3 (H, K). Beads were cryosectioned, fixed and stained with anti-vinculin (green fluorescence) and counterstained with DAPI (blue fluorescence) for cell nuclei. Human cardiomyocyte progenitor cells were used as positive controls (I) and experimental samples for secondary antibody negative control (L); representative of three donors/group; scale bar = 10 μm. Figure 5 . 5 Figure 5. Matrix production. (A-F) Bovine BMSCs seeded in alginate beads as dispersed cells (A-C) or as micro-aggregates (D-F) at day 21 after exposure to TGFβ3 for 0 (A, D), 7 (B, E) and 21 (C, F) days. Beads were cryosectioned, fixed and stained with Alcian blue for proteoglycans (note that light blue is alginate); representative of three donors/group; scale bar = 200 μm. (G, H) Higher magnifications of (B, E), respectively; scale bar = 50 μm; colour images available online. (I) GAG/DNA content after 21 days of culture, as determined with DMMB and Hoechst dye assays, respectively. Values are mean ± SD (NB: logarithmic y axis, and error bars are also logarithmic); n = 3/group; * p < 0.05 vs D0; $ p < 0.05 vs 21 days of TGFβ3 treatment; ND, not detected. Figure 6 . 6 Figure 6. Gene expression -chondrogenesis markers.Gene expression of sox9 (A), type II collagen (B), aggrecan (C) and TGFβ (D), as determined by qRT-PCR. Expression is relative to 18S reference gene (2 -ΔCT method). 
Values are mean + SD (NB: logarithmic y axis, and error bars are also logarithmic); n = 3/group; * p < 0.05 vs D0; # p < 0.05 vs 7 days of TGFβ3 treatment; $ p < 0.05 vs 21 days of TGFβ3 treatment; @ p < 0.05 vs dispersed cells. Figure 7. Transient TGFβ stimulation. (A) GAG/DNA content at days 0, 7 and 21, as determined with DMMB and Hoechst dye assays, respectively; values are mean ± SD (NB: logarithmic y axis, and error bars are also logarithmic). (B-D) Gene expression of sox9 (B), type II collagen (C) and aggrecan (D), as determined by qRT-PCR; expression is relative to 18S reference gene (2^-ΔCT method). Values are mean + SD (NB: logarithmic y axis, and error bars are also logarithmic); n = 3/group; * p < 0.05 vs D0; # p < 0.05 vs D21; @ p < 0.05 vs dispersed cells; ND, not detected. Figure 8. Gene expression - Hypertrophy and osteogenesis markers. Acknowledgments The authors would like to thank R. R. Delcher for his help with statistical analysis and Marina van Doeselaar for immunostainings. The authors did not receive financial support or benefits from any commercial source related to the scientific work reported in this manuscript. Disclosure Statement The authors declare no competing financial interests.
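The captions above report gene expression relative to the 18S reference gene using the 2^-ΔCT method. As a minimal sketch of that calculation (the Ct values and gene labels below are hypothetical, for illustration only, and are not data from this study):

```python
import numpy as np

def relative_expression(ct_target, ct_reference):
    """2^-dCT method: relative expression = 2 ** -(Ct_target - Ct_reference)."""
    delta_ct = np.asarray(ct_target, dtype=float) - np.asarray(ct_reference, dtype=float)
    return 2.0 ** (-delta_ct)

# Hypothetical Ct values for three donors -- illustration only, not study data
ct_sox9 = [28.1, 27.6, 28.4]   # target gene
ct_18s = [11.2, 11.0, 11.5]    # 18S reference gene
expr = relative_expression(ct_sox9, ct_18s)
print(expr, expr.mean(), expr.std(ddof=1))   # per-donor values, mean, SD
```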
42,762
[ "740372" ]
[ "169365" ]
01709246
en
[ "info" ]
2024/03/05 22:32:10
2016
https://hal.science/hal-01709246/file/ObserverDesignSwitchedSystems.pdf
Diego Mincarelli email: diego.mincarelli@inria.fr Alessandro Pisano email: pisano@diee.unica.it Thierry Floquet email: thierry.floquet@ec-lille.fr Elio Usai email: eusai@diee.unica.it Uniformly convergent sliding mode-based Keywords: Switched systems, observer design, second order sliding modes come INTRODUCTION In the last decade, the control community has devoted a great deal of attention to the study of hybrid/switched systems [START_REF] Engell | Modelling and analysis of hybrid systems[END_REF][START_REF] Goebel | Hybrid dynamical systems[END_REF][START_REF] Liberzon | Basic problems in stability and design of switched systems[END_REF]. They represent a powerful tool to describe systems that exhibit switchings between several subsystems, inherently by nature or as a result of external control actions such as in switching supervisory control [START_REF] Morse | Supervisory control of families of linear set-point controllers -part 1: Exact matching[END_REF]. Switched systems and switched multi-controller synthesis have numerous applications in the control of mechanical systems [START_REF] Narendra | Adaptation and learning using multiple models, switching, and tuning[END_REF][START_REF] Chevallereau | Asymptotically stable running for a five-link, four-actuator, planar bipedal robot[END_REF], automotive industry [START_REF] Balluchi | Hybrid control in automotive applications: the cut-off control[END_REF], switching power converters [START_REF] Koning | Brief digital optimal reduced-order control of pulsewidth-modulated switched linear systems[END_REF], aircraft and traffic control [START_REF] Kamgarpour | Hybrid Optimal Control for Aircraft Trajectory Design with a Variable Sequence of Modes[END_REF][START_REF] Glover | A stochastic hybrid model for air traffic control simulation[END_REF], biological systems [START_REF] Vries | Hybrid system modeling and identification of cell biology systems: perspectives and challenges[END_REF][START_REF] Cinquemani | Identification of genetic regulatory networks: A stochastic hybrid approach[END_REF], and many other fields [START_REF] Branicky | Studies in hybrid systems: Modeling, analysis, and control[END_REF]. Remarkable theoretical results involving switched systems have been achieved concerning their stability and stabilizability [START_REF] Lin | Stability and stabilizability of switched linear systems: A survey of recent results[END_REF][START_REF] Liberzon | Switching in Systems and Control. Systems and Control: Foundations and Applications[END_REF], controllability and reachability [START_REF] Sun | Switched Linear Systems: Control and Design[END_REF][START_REF] Sun | Controllability and reachability criteria for switched linear systems[END_REF] and observability [START_REF] Vidal | Observability of linear hybrid systems[END_REF][START_REF] Xie | Necessary and sufficient conditions for controllability and observability of switched impulsive control systems[END_REF][START_REF] Babaali | Observability of switched linear systems in continuous time[END_REF][START_REF] Sun | Switched Linear Systems: Control and Design[END_REF]. The problem of observer design for linear switched systems has been thoroughly investigated by the control community and different approaches have been proposed. The assumptions about the knowledge of the discrete state play a crucial role. 
In the case of complete knowledge of the discrete state a Luenberger-like switched observer matching the currently active dynamics can be used and the problem is that of guaranteeing the stability of the switched error dynamics. In [START_REF] Alessandri | Switching observers fo continuous-time and discrete-time linear systems[END_REF] it is shown that the observer gain matrices can be selected by solving a set of linear matrix inequalities. In [START_REF] Bejarano | Switched observers for switched linear systems with unknown inputs[END_REF], the approach is generalized to cover linear switched systems with unknown exogenous inputs. In [START_REF] Tanwani | Observability implies observer design for switched linear systems[END_REF], by adopting the notion of observability over an interval, borrowed from [START_REF] Xie | Necessary and sufficient conditions for controllability and observability of switched impulsive control systems[END_REF], an observer is designed for switched systems whose subsystems are not even required to be separately observable. However, in some situations the active mode is unknown and needs to be estimated, along with the continuous state, by relying only on the continuous output measurements. Usually, in such case, the observer consists of two parts: a discrete state (or location) observer, estimating the active mode of operation, and a continuous observer that estimates the continuous state of the switched system. In [START_REF] Balluchi | Design of observers for hybrid systems[END_REF], the architecture of a hybrid observer consisting of both a discrete and continuous state identification part is presented, assuming partial knowledge of the discrete state, i.e. dealing with the case in which some discrete events causing the switchings are supposed to be measurable. When such a discrete output is not sufficient to identify the mode location, the information available from the continuous evolution of the plant is used to estimate the current mode. However, the " distinguishability" of the different modes, i.e. the property concerning the possibility to reconstruct univocally the discrete state, was not analysed. The present work intrinsically differs from [START_REF] Balluchi | Design of observers for hybrid systems[END_REF] in that we consider the case of completely unknown discrete state. In such a case the possibility to obtain an estimate of the current mode in a finite time is clearly important, not to say crucial. This is clear for instance from [START_REF] Pettersson | Designing switched observers for switched systems using multiple lyapunov functions and dwell-time switching[END_REF], where the authors focus on the continuous-time estimation problem and show that a bound to the estimation error can be given if the discrete mode is estimated correctly within a certain time. Additionally, for those switched systems admitting a dwell time, a guaranteed convergence of the discrete mode estimation taking place "sufficiently faster" that the dwell time is needed. 
In view of these considerations, sliding mode-based observers seem to be a suitable tool due to the attractive feature of finite-time convergence characterizing the sliding motions [START_REF] Davila | Second-order sliding-mode observer for mechanical systems[END_REF][START_REF] Floquet | On sliding mode observers for systems with unknown inputs[END_REF][START_REF] Orlov | Finite time stabilization of a perturbed double integrator -part i: Continuous sliding modebased output feedback synthesis[END_REF][START_REF] Pisano | Sliding mode control: a survey with applications[END_REF]. As a matter of fact, sliding mode observers have been successfully implemented to deal with the problem of state reconstruction for switched systems. In [START_REF] Bejarano | State exact reconstruction for switched linear systems via a super-twisting algorithm[END_REF], an observer is proposed ensuring the reconstruction of the continuous and discrete state in finite time. In [START_REF] Davila | Continuous and discrete state reconstruction for nonlinear switched systems via high-order sliding-mode observers[END_REF], the authors present an observer, based on the high-order sliding mode approach, for nonlinear autonomous switched systems. However, although the above works guarantee finite-time convergence, the convergence time depends on the initial condition mismatch; as a consequence, convergence of the estimates within a pre-specified time can be guaranteed only if an admissible domain for the system initial conditions is known a priori. Main contribution and structure of the paper In the present paper we propose a stack of observers whose output injection is computed by relying on the modified Super-Twisting Algorithm, introduced in [START_REF] Cruz-Zabala | Uniform robust exact differentiator[END_REF], which guarantees the so-called "uniform convergence" property, i.e. convergence is attained in finite time and an upper bound on the transient time can be computed that does not depend on the initial conditions. We also show that, under some conditions, the discrete mode can be correctly reconstructed in finite time after any switch, independently of the observation error at the switching times. Using the continuous output of the switched system, the observer estimates the continuous state and, at the same time, produces suitable residual signals allowing the estimation of the current mode. We propose a residual "projection" methodology by means of which the discrete state can be reconstructed after a switching instant with a finite and pre-specified estimation transient time, allowing a quicker and more reliable reconstruction of the discrete state. Additionally, we give structural "distinguishability" conditions guaranteeing that the discrete state can be reconstructed univocally by processing the above-mentioned residuals. The paper structure is as follows. Section 2 formulates the problems under analysis and outlines the structure of the proposed scheme. Section 3 illustrates the design of the continuous state observers' stack and provides the underlying Lyapunov-based convergence analysis. Section 4 deals with the discrete state estimation problem. Two approaches are proposed, one using the "asymptotically vanishing residuals" (Subsection 4.1) and another one taking advantage of the above-mentioned residuals' "projection" methodology ("uniform-time zeroed residuals", Subsection 4.2).
Section 4.3 outlines the structural conditions addressing the "distinguishability" issues. Section 5 summarizes the proposed scheme and the main results of this paper. Section 6 illustrates some simulation results and Section 7 gives some concluding remarks. Notation For a vector v = [v 1 , v 2 , . . . , v n ] T ∈ R n denote sign(v) = [sign(v 1 ), sign(v 2 ), . . . , sign(v n )] T . (1) Given a set D ⊆ R n , let vj (D) denote the maximum value that the j-th element of v can assume on D. Denote as ||M|| the 2-norm of a matrix M. For a square matrix M, denote as σ(M) the spectrum of M, i.e. the set of all eigenvalues of M. Finally, denote as N ← (M) the left null space of a matrix M. PROBLEM STATEMENT Consider the linear autonomous switched dynamics ẋ(t) = A σ x(t), y(t) = C σ x(t) (2) where x(t) ∈ R n represents the state vector and y(t) ∈ R p represents the output vector. The so-called switching law or discrete state σ(t) : [0, ∞) → {1, 2, ..., q} determines the actual system dynamics among the possible q "operating modes", which are represented, for system (2), by the set of matrix pairs {A 1 , C 1 }, {A 2 , C 2 }, ..., {A q , C q }. Without loss of generality, it is assumed that the output matrices C i , ∀i = 1, 2, ..., q, are full row rank matrices. The switching law is a piecewise constant function with discontinuities at the switching time instants: σ(t) = σ k , t k ≤ t < t k+1 , k = 0, 1, ..., ∞ (3) where t 0 = 0 and t k (k = 1, 2, . . .) are the time instants at which the switches take place. Definition 1 The dwell time is a constant ∆ > 0 such that the switching times fulfill the inequality t k+1 - t k ≥ ∆ for all k = 0, 1, .... The objective is to design an observer able to simultaneously estimate the discrete state σ(t) and the continuous state x(t) of system (2), by relying on the measured output vector y(t). We propose a design methodology based on a stack of sliding mode observers producing estimates of the evolution of the continuous state of the switched system. At the same time, suitable residuals are provided for identifying the actual value of the discrete state and, consequently, the specific observer of the stack that is producing an asymptotically converging estimate of the continuous state. The observer structure, depicted in Fig. 1, mainly consists of two parts: a location observer and a continuous state observer. The location observer is devoted to the identification of the discrete state, i.e. the active mode of operation of the switched system, on the basis of residual signals produced by the continuous observer. The continuous state observer receives as input the output vector y(t) of the switched system and, using the location information provided by the location observer, produces an estimate of the continuous state of the system. Finally, the proposed approach requires each subsystem to be observable: Assumption 3 The pairs (A i , C i ) are observable ∀i = 1, 2, ..., q. 3 Continuous state observer design Let us preliminarily consider, as suggested in [START_REF] Edwards | Sliding Mode Control: Theory and applications[END_REF], a family of nonsingular coordinate transformations such that the output vector y(t) is a part of the transformed state z(t), i.e. z(t) = [ξ(t) T , y(t) T ] T = T σ x(t) (4) where ξ(t) ∈ R (n-p) and the transformation matrix is given by T σ = [(N σ ) T ; C σ ], i.e. the rows of (N σ ) T stacked above C σ , (5) where the columns of N i ∈ R n×(n-p) span the null space of C i , i = 1, 2, ..., q.
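A minimal numerical sketch of how the transformation matrix in (5) can be built for each mode is given below. It assumes only that C i is full row rank, as stated above; the output matrix used in the example is illustrative and is not one of the paper's modes.

```python
import numpy as np

def transformation_matrix(C):
    """Build T = [N^T; C] as in (5), where the columns of N span the null space of C."""
    C = np.atleast_2d(np.asarray(C, dtype=float))
    p, n = C.shape
    # Right singular vectors associated with zero singular values span ker(C)
    _, _, Vt = np.linalg.svd(C)
    N = Vt[p:].T                      # n x (n - p); valid since C is full row rank
    return np.vstack((N.T, C))        # n x n, nonsingular by construction

# Illustrative 2x3 output matrix (p = 2 measured outputs, n = 3 states)
C_example = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
T_example = transformation_matrix(C_example)
assert abs(np.linalg.det(T_example)) > 1e-9   # the coordinate change is invertible
```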
By Assumption 2, the trajectories z(t) will evolve into some known compact domain D z . The transformation ( 4) is always nonsingular, and the switched system dynamics (2) in the transformed coordinates are: ż (t) = Āσ z (t) (6) where Āσ = T σ A σ (T σ ) -1 =    Āσ11 Āσ12 Āσ21 Āσ22    (7) A stack of q dynamical observers, each one associated to a different mode of the switched system, is suggested as follows: żi (t) = Āi ẑi (t) + Li ν i (t), if ẑi ∈ D z i = 1, 2, . . . , q, ẑij (t) = zj (D z ), if ẑij ≥ zj (D z ) j = 1, 2, . . . , n (8) where ẑi = [ ξi , ŷi ] T is the state estimate provided by the i-th observer, ν i ∈ R p represents the i-th observer injection input, yet to be designed, and Li ∈ R n×p takes the form Li =    L i -I    (9) where L i ∈ R (n-p)×p are observer gain matrices to be designed and I is the identity matrix of dimension p. In the second of [START_REF] Chevallereau | Asymptotically stable running for a five-link, four-actuator, planar bipedal robot[END_REF], which corresponds to a saturating projection of the estimated state, according to the a-priori known domain of evolution D z of vector z(t), the notation ẑij denotes the j-th element of the vector ẑi . By introducing the error variables e i ξ (t) = ξi (t) -ξ(t), e i y (t) = ŷi (t) -y(t), i = 1, 2, ..., q, (10) in view of ( 6) and of the first equation of ( 8) the next error dynamics can be easily obtained: ėi ξ (t) = Āi11 e i ξ (t) + Āi12 e i y (t) + ( Āi11 -Āσ11 )ξ(t) + ( Āi12 -Āσ12 )y(t) + L i ν i (t) (11) ėi y (t) = Āi21 e i ξ (t) + Āi22 e i y (t) + ( Āi21 -Āσ21 )ξ(t) + ( Āi22 -Āσ22 )y(t) -ν i (t) (12) Then, by defining ϕ iσ (e i ξ , e i y , ξ, y) = Āi21 e i ξ (t) + Āi22 e i y (t) + ( Āi21 -Āσ21 )ξ(t) + ( Āi22 -Āσ22 )y(t) (13) one can rewrite (12) as ėi y (t) = ϕ iσ (e i ξ , e i y , ξ, y) -ν i (t) (14) Let us consider now the second equation of [START_REF] Chevallereau | Asymptotically stable running for a five-link, four-actuator, planar bipedal robot[END_REF]. We know that the trajectories z(t) evolve inside a compact domain D z and the "reset" relation ẑij (t) = zj (D z ) , applied when the corresponding estimate leaves the domain D z , forces them to remain in the set D z . Such a "saturation" mechanism in the observer guarantees that the estimation errors e i ξ and e i y are always bounded. Consequently the functions ϕ ij are smooth enough, that is there exist a constant Φ such that the following inequality is satisfied d dt ϕ ij (e i ξ , e i y , ξ, y) ≤ Φ, ∀i, j = 1, 2, ..., q (15) Following [START_REF] Cruz-Zabala | Uniform robust exact differentiator[END_REF], the observer injection term ν i is going to be specified within the next Theorem which establishes some properties of the proposed observer stack that will be instrumental in our next developments. 
Theorem 1 Consider the linear switched system (2), satisfying the Assumptions 1-3 along with the stack of observers [START_REF] Chevallereau | Asymptotically stable running for a five-link, four-actuator, planar bipedal robot[END_REF], and the observer injection terms set according to ν i = k 1 φ 1 (e i y ) -ν 2i (16) ν2i = -k 2 φ 2 (e i y ) (17) where φ 1 (e i y ) = |e i y | 1/2 sign(e i y ) + µ|e i y | 3/2 sign(e i y ) (18) φ 2 (e i y ) = 1 2 sign(e i y ) + 2µe i y + 3 2 µ 2 |e i y | 2 sign(e i y ) (19) with µ > 0 and the tuning coefficients k 1 and k 2 selected in the set: K = (k 1 , k 2 ) ∈ R 2 |0 < k 1 < 2 √ Φ, k 2 > k 2 1 4 + 4Φ 2 k 2 1 ∪ (k 1 , k 2 ) ∈ R 2 |k 1 < 2 √ Φ, k 2 > 2Φ (20) Let L i be chosen such that Re σ Āi11 + L i Āi21 ≤ -γ, γ > 0 (21) Then, for sufficiently large Φ and γ, there exists an arbitrarily small time T * << ∆ independent of e i y (t k ) such that, for all k = 0, 1, ..., ∞ and for some α > 0, the next properties hold along the time intervals t ∈ [t k + T * , t k+1 ): e i y (t) = 0 ∀i (22) ν i (t) = ϕ iσ (e i ξ , 0, ξ, y) ∀i (23) e σ k ξ (t) ≤ αe -γ(t-t k -T * ) (24) Proof. The theorem can be proven by showing the uniform (i.e. independent of the initial condition) time convergence of e i y to zero for all the q observers, after each switching, and analyzing the dynamics of the error variables e i ξ once the trajectories are restricted on the surfaces S i o = { e i ξ , e i y : e i y = 0}. ( 25 ) Considering the input injection term ( 16) into ( 14) the output error dynamics are given by ėi y = ϕ iσ -k 1 φ 1 (e i y ) + ν 2i (26) By introducing the new coordinates ψ i = ν 2i + ϕ iσ (27) and considering (17) one obtains the system ėi y = -k 1 φ 1 (e i y ) + ψ i (28) ψi = -k 2 φ 2 (e i y ) + d dt ϕ iσ (29) In light of [START_REF] Engell | Modelling and analysis of hybrid systems[END_REF], system (28)-( 29) is formally equivalent to that dealt with in [START_REF] Cruz-Zabala | Uniform robust exact differentiator[END_REF], where suitable Lyapunov analysis was used to prove the uniform-time convergence to zero of e i y and ψ i , i.e. e i y = 0 and ψ i = 0 on the interval t ∈ [t k + T * , t k+1 ), where T * is an arbitrarily small transient time independent of e i y (t k ). Consequently ėi y = 0 and, from equation ( 14), condition ( 23) is satisfied too. By substituting ( 23) into ( 11) with e i y = 0, it yields the next equivalent dynamics of the error variables: ėi ξ (t) = Āi11 e i ξ (t)+( Āi11 -Āσ11 )ξ(t)+( Āi12 -Āσ12 )y(t)+L i ϕ iσ (e i ξ , 0, ξ, y) (30) where ϕ iσ (e i ξ , 0, ξ, y) = Āi21 e i ξ (t) + ( Āi21 -Āσ21 )ξ(t) + ( Āi22 -Āσ22 )y(t) (31) Finally, by defining the following matrices Ãi = ( Āi11 + L i Āi21 ) ∆A ξ iσ = ( Āi11 -Āσ11 ) + L i ( Āi21 -Āσ21 ) ∆A y iσ = ( Āi12 -Āσ12 ) + L i ( Āi22 -Āσ22 ) (32) equation ( 30) can be rewritten as ėi ξ (t) = Ãi e i ξ (t) + ∆A ξ iσ ξ(t) + ∆A y iσ y(t) It is worth noting that for the correct observer (i.e., that having the index i = σ k which matches the current mode of operation σ(t)) one has ∆A ξ iσ = ∆A y iσ = 0. Hence, along every time intervals t ∈ [t k + T * , t k+1 ), with k = 0, 1, ..., ∞, the error dynamics of the correct observer are given by ėσ k ξ (t) = Ãσ k e σ k ξ (t) (34) which is asymptotically stable by [START_REF] Liberzon | Switching in Systems and Control. Systems and Control: Foundations and Applications[END_REF]. 
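As a rough discrete-time sketch of the injection terms (16)-(19), the functions φ1, φ2 and one explicit Euler update of (16)-(17) can be coded as follows. The gains below are placeholders for illustration; in practice k1 and k2 must be chosen in the set K of (20) for the relevant bound Φ.

```python
import numpy as np

def phi1(e, mu):
    """phi_1 in (18): |e|^(1/2) sign(e) + mu |e|^(3/2) sign(e), element-wise."""
    return np.sign(e) * (np.abs(e) ** 0.5 + mu * np.abs(e) ** 1.5)

def phi2(e, mu):
    """phi_2 in (19): 0.5 sign(e) + 2 mu e + 1.5 mu^2 |e|^2 sign(e), element-wise."""
    return 0.5 * np.sign(e) + 2.0 * mu * e + 1.5 * (mu ** 2) * np.abs(e) ** 2 * np.sign(e)

def injection_step(e_y, nu2, k1, k2, mu, dt):
    """One Euler step of (16)-(17): nu = k1*phi1(e_y) - nu2, with d(nu2)/dt = -k2*phi2(e_y)."""
    nu = k1 * phi1(e_y, mu) - nu2
    nu2_next = nu2 - dt * k2 * phi2(e_y, mu)
    return nu, nu2_next

# Placeholder tuning for illustration; gains must satisfy (20) for the actual bound Phi
k1, k2, mu, dt = 2.0, 4.0, 1.0, 1e-4
e_y, nu2 = np.array([0.3, -0.1]), np.zeros(2)
nu, nu2 = injection_step(e_y, nu2, k1, k2, mu, dt)
```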
Since by Assumption 3 the pairs (A i , C i ) are all observable, the pairs ( Āi11 , Āi21 ) are also observable, which motivates the tuning condition [START_REF] Liberzon | Switching in Systems and Control. Systems and Control: Foundations and Applications[END_REF]. The solution of (34) fulfills the relation [START_REF] Narendra | Adaptation and learning using multiple models, switching, and tuning[END_REF]. Remark 1 Equation (34) means that the estimation ξi provided by the "correct" observer, i.e. the observer associated to the mode σ k activated during the interval t ∈ [t k , t k+1 ), at time t k + T * starts to converge exponentially to the real continuous state ξ. By appropriate choice of L i the desired rate of convergence can be obtained. Remark 2 It is worth stressing that the time T * is independent of the initial error at each time t k and can be made as small as desired (and in particular such that T * << ∆) by tuning the parameters of the observers properly. Discrete state estimation In the previous section it was shown that there is one observer in the stack that provides the asymptotic reconstruction of the continuous state of the switched system (2). However, the index of such "correct" observer is the currently active mode, which is still unknown to the designer, hence the scheme needs to be complemented by a discrete mode observer. In the next subsections we present two methods for reconstructing the discrete state of the system by suitable processing of the observers' output injections. Asymptotically vanishing residuals Along the time intervals [t k + T * , t k+1 ) the observers' output injection vectors [START_REF] Morse | Supervisory control of families of linear set-point controllers -part 1: Exact matching[END_REF] satisfy the following relationship: ν i (t) =      Āσ k 21 σ k ξ (t) for i = σ k Āi21 e i ξ (t) + ( Āi i21 -Āσ k 21 )ξ(t) + ( Āi22 -Āσ k 22 )y(t) for i = σ k (35) It turns out that along the time intervals [t k +T * , t k+1 ) the norm of the injection term of the correct observer will be asymptotically vanishing in accordance with ν σ k (t) ≤ A M 21 αe -γ(t-t k -T * ) → 0, (36) where A M 21 = sup i∈{1,2,...,q} Āi21 . The asymptotic nature of the convergence to zero of the residual vector corresponding to the correct observer is due to the dynamics (34) of the error variable e σ k ξ (t), which in fact tends asymptotically to zero. Uniform-time zeroed residuals By making the injection signals insensitive to the dynamics of e i ξ (t) it is possible to obtain a uniform-time zeroed residual for the correct observer, i.e. a residual which is exactly zeroed after a finite time T * following any switch, independently of the error at the switching time. Let us make the next assumption. Assumption 4 For all i = 1, 2, ..., q, the submatrices Āi21 are not full row rank. The major consequence of Assumption 4 is that N ← Āi21 is not trivial. Let U i be a basis for N ← Āi21 (i.e. U i Āi21 = 0) and denote νi (t) = U i ν i (t) (38) Clearly, by (35), on the interval [t k + T * , t k+1 ) one has that νi (t) =      0 for i = σ k -U i A σ k 21 ξ(t) + U i ( Āi22 -Āσ k 22 )y(t) for i = σ k (39) It turns out that starting from the time t k + T * the norm of the injection term of the correct observer will be exactly zero, i.e. νσ k (t) = 0 ∀t ∈ [t k +T * , t k+1 ). In order to reconstruct univocally the discrete state, it must be guaranteed that the "wrong" residuals cannot stay identically at zero. 
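A small sketch of the residual projection (38): the basis U i of the left null space of the block Āi21 (which exists under Assumption 4) is computed numerically and applied to the injection vector. The matrices below are illustrative and are not taken from the paper.

```python
import numpy as np

def left_null_basis(M, tol=1e-10):
    """Rows of the returned U span the left null space of M, i.e. U @ M = 0."""
    U_svd, s, _ = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return U_svd[:, rank:].T

def projected_residual(A21_i, nu_i):
    """Uniform-time zeroed residual (38): project the injection onto the left null space of Abar_i21."""
    return left_null_basis(A21_i) @ nu_i

# Illustrative blocks only: A21 has rank 1 < 2 rows, so Assumption 4 holds
A21 = np.array([[1.0, 2.0],
                [2.0, 4.0]])
nu = np.array([0.5, -0.2])
print(projected_residual(A21, nu))   # residual component in the 1-dimensional left null space
```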
In the following section we shall derive a structural condition on the system matrices guaranteeing that the uniform-time zeroed residuals associated to the "wrong" observers stay at the origin. Uniqueness of the reconstructed switching law The main property allowing the discrete mode reconstruction is that, ter a finite time T * following any switch, the residual corresponding to the "correct" observer converges to zero, according to (35), or it is exactly zero if the uniform-time zeroed residuals (38) are used. In order to reconstruct the discrete state univocally, all the other residuals (i.e. those having indexes corresponding to the non activated modes) must be separated from zero. In what follows the uniform-time zeroed residuals (38) are considered as they provide better performance and faster reconstruction capabilities. Nevertheless, analogue considerations can be made in the case of the asymptotically vanishing residuals (35). Definition 2 Given the switched system (2), a residual νi (t) is said to be non vanishing if x(t) = 0 almost everywhere implies νi (t) = 0 almost everywhere ∀i = σ k , that is the residuals corresponding to the "wrong" observers cannot be identically zero on a time interval unless the state of the system is identically zero in the considered interval. Next Lemma establishes an "observability-like" requirement guaranteeing that that the uniform-time zeroed residuals (39) are non vanishing. Lemma 1 The uniform-time zeroed residuals νi (t) in (39) are non vanishing if and only if the pairs ( Āj , Cij ) are observable ∀i = j, where Āj (j = 1, 2, ..., q) are the state matrices of system ( 6) and Cij = -U T i Āj21 U T i ( Āi22 -Āj22 ) . Proof. Along the time interval [t k + T * , t k+1 ), the "wrong" residuals νi (t) in (39), i.e. those with index i = σ k , are related to the state z(t) of ( 6) as νi (t) = Ciσ k z(t) (40) where z(t) is the solution of ż (t) = Āσ k z (t) (41) It is well known that if the pair ( Āσ k , Ciσ k ) of system (40)-( 41) is observable, then νi (t) is identically zero if and only if z(t) (and, thus, x(t)) is null. Therefore, to extend this property over all the intervals t k + T * ≤ t < t k+1 with k = 0, 1, ..., ∞, all the pairs ( Āj , Cij ) ∀i = j have to be observable. Assumption 5 The state trajectories x(t) in (2) are not zero almost everywhere and all the residuals νi (t) in (38) are non vanishing according to Definition (2). Thus if Assumption 5 is fulfilled, only the residual associated to the correct observer will be zero, while all the others will have a norm separated from zero. It is clear that if such a situation can be guaranteed, then the discrete mode can be univocally reconstructed by means of simple decision logic, as discussed in Theorem 2 of the next Section. Remark 3 If Assumption 4 is not satisfied the residual (38) cannot be considered. Nevertheless, by considering the asymptotically vanishing residuals (35), similar structural conditions guaranteeing that the "wrong" residuals cannot be identically zero can be given. 
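The condition of Lemma 1 can be checked numerically with a rank test on the observability matrix. A sketch is given below, assuming the transformed matrices Āj and the bases U i are already available (for instance from the constructions sketched above), with r = n - p denoting the dimension of the ξ-part of the partition (7).

```python
import numpy as np

def obsv_rank(A, C):
    """Rank of the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.linalg.matrix_rank(
        np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)]))

def residuals_non_vanishing(Abar, U, r):
    """Lemma 1 test: every pair (Abar_j, Ctilde_ij), i != j, must be observable.
    Abar: list of transformed state matrices as in (7); U: list of left-null bases of Abar_i21."""
    q, n = len(Abar), Abar[0].shape[0]
    for i in range(q):
        for j in range(q):
            if i == j:
                continue
            A21_j, A22_j = Abar[j][r:, :r], Abar[j][r:, r:]
            A22_i = Abar[i][r:, r:]
            C_ij = np.hstack([-U[i] @ A21_j, U[i] @ (A22_i - A22_j)])
            if obsv_rank(Abar[j], C_ij) < n:
                return False   # some "wrong" residual could vanish on a nonzero trajectory
    return True
```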
Consider the following extended vector: z i e (t) =        e i ξ (t) ξ(t) y(t)        (42) From ( 6), ( 33) and (35) the following system can be considered on the interval [t k + T * , t k+1 ): żi e (t) = A iσ k z i e (t) ν i (t) = C iσ k z i e (t) (43) where A iσ k =    Ãi ∆A ξ iσ k ∆A y iσ k 0 Āσ k    C iσ k =        Āi21 Āi21 -Āσ k 21 Āi22 -Āσ k 22        T (44) The asymptotically vanishing residuals (35) are non vanishing if the pairs (A ij , C ij ) are observable ∀i = j. Continuous and discrete state observer The proposed methodology of continuous and discrete state estimation is summarized in the next Theorem 2 Consider the linear switched system (2), fulfilling the Assumption 1-5, and the observer stack [START_REF] Chevallereau | Asymptotically stable running for a five-link, four-actuator, planar bipedal robot[END_REF] described in the previous Theorem 1. Consider the next evaluation signal ρi (t) = t t-ǫ νi (τ ) dτ . ( 45 ) where ǫ, is a small time delay, and the next active mode identification logic: σ(t) = argmin i∈{1,2,...,q} ρi (t) (46) Then, the discrete state estimation will be such that σ(t) = σ(t), t k + T * + ǫ ≤ t ≤ t k+1 , k = 0, 1, ..., ∞ (47) and the continuous state estimation given by x(t) = (T σ ) -1    ξσ (t) ŷσ (t)    (48) will be such that x(t) -x(t) ≤ αe -γ(t-t k -T * ) ∀t ∈ [t k + T * , t k+1 ) (49) Proof. By considering (39) which, specified for the correct observer (i = σ k ), guarantees that νσ k (t) = 0, t k + T * ≤ t ≤ t k+1 (50) along with the Assumption 5 , whose main consequence is that νi (t) cannot be identically zero over an interval when i = σ(t), it follows that it is always possible to find a threshold η such that for the evaluation signals ρi (t) in (45) one has ρi (t) > η, t k + T * + ǫ ≤ t < t k+1 , i = σ k (51) ρσ k (t) ≤ η, t k + T * + ǫ ≤ t < t k+1 . (52) Thus the mode decision logic (46) provides the reconstruction of the discrete state after the finite time T * + ǫ to any switching time instant, i.e. σ(t) = σ k , t k + T * + ǫ ≤ t < t k+1 (53) The second part of the Theorem 2 concerning the continuous state estimation can be easily proven by considering the coordinate transformation (4) and Theorem 1, which imply (49). Remark 4 We assumed that the state trajectories are not zero almost everywhere (Assumption 5). As a result the wrong residuals can occasionally cross the zero value. This fact motivates the evaluation signal introduced in (45): considering a window of observation for the residuals, all the wrong residuals will be separated from zero, while only the correct one can stay close to zero during a time interval of nonzero length. The architecture of the observer is shown in Fig. 2. Remark 5 It is possible to develop the same methodology if the residual (35) is considered instead of (38). The evaluation signal can be used to identify the discrete state. However, the time to identify the discrete mode will be bigger than the one of Theorem 2 since the vanishing transient of the error variable e i ξ is needed to last for a while, starting from the finite time t k + T * . ρ i (t) = t t-ǫ ν i (τ ) dτ . (54) SIMULATION RESULTS In this section, we discuss a simulation example to show the effectiveness of our method. 
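A minimal discrete-time sketch of the evaluation signal (45) and the identification logic (46) of Theorem 2 is given below: the integral over the window ǫ is approximated by a sliding sum of sampled residual norms, and the mode estimate is the argmin over the q observers. The residual samples used here are synthetic, for illustration only.

```python
import numpy as np

def decide_mode(residual_norms, dt, eps):
    """Discrete-time analogue of (45)-(46).
    residual_norms: array of shape (T, q) with samples of ||nu_tilde_i(t)|| for each observer."""
    w = max(1, int(round(eps / dt)))          # window length in samples
    T, q = residual_norms.shape
    sigma_hat = np.zeros(T, dtype=int)
    for k in range(T):
        rho = residual_norms[max(0, k - w + 1):k + 1].sum(axis=0) * dt   # approximates (45)
        sigma_hat[k] = int(np.argmin(rho))                               # decision logic (46)
    return sigma_hat

# Synthetic residual norms: observer with index 1 plays the "correct" one (zero residual)
rng = np.random.default_rng(0)
norms = np.abs(rng.normal(0.5, 0.1, size=(200, 3)))
norms[:, 1] = 0.0
print(decide_mode(norms, dt=1e-3, eps=0.05)[-1])   # -> 1
```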
Consider a switched linear system as in (2) with q = 3 modes defined by the matrices A 1 = [0.1, 0.6, -0.4; -0.5, -0.8, 1; 0.1, 0.4, -0.7], A 2 = [-0.2, 0.3, -0.8; -0.2, -0.4, 0.8; 1, 0.6, -0.3], A 3 = [-0.8, -0.5, 0.2; -0.5, -0.1, -0.5; -0.3, -0.2, 0.3] (55) and C 1 = [1, 0, 0; 0, 1, 0], C 2 = [1, 0, 0; 0, 0, 1], C 3 = [0, 0, 1; 0, 1, 0] (56) (rows separated by semicolons). The system starts from mode 1 with the initial conditions x(0) = [-3, -1, 6] T and evolves switching between the three modes according to the switching law shown in Fig. 3. After the coordinate transformation (4), obtained by the transformation matrices T 1 = [0, 0, 1; 1, 0, 0; 0, 1, 0], T 2 = [0, -1, 0; 1, 0, 0; 0, 0, 1], T 3 = [-1, 0, 0; 0, 0, 1; 0, 1, 0] (57) built as in (5), the system is in the proper form to apply our estimation procedure. Since the pairs (A i , C i ) are all observable for i = 1, 2, 3, the stack of observers (8) can be implemented. By properly tuning the parameters of the STA-based observers according to [START_REF] Liberzon | Basic problems in stability and design of switched systems[END_REF], the components of the vector error e y are exactly zero for each observer after a time T * subsequent to any switch (Fig. 4), as proven in Theorem 1. On the contrary, the error e ξ at time t k + T * starts to converge exponentially to zero only for the correct observer, as shown in Fig. 5. Notice that the three signals occasionally cross zero, but only the one corresponding to the correct observer remains zero on a time interval. The gains of the observers L i are chosen such that the eigenvalues of the matrices Ãi in [START_REF] Vries | Hybrid system modeling and identification of cell biology systems: perspectives and challenges[END_REF] governing the error dynamics (33) are located at -5. Since Assumption 5 is satisfied, the discrete mode can be univocally identified. To this end let us consider the asymptotically vanishing residuals (35) and the uniform-time zeroed residuals (39). The simulations confirm that both signals stay at zero only for the correct observer. Moreover, the signal corresponding to the asymptotically vanishing residual is in general "slower" than the signal corresponding to the uniform-time zeroed residual. In order to highlight the different behaviours of the two signals, we report in Fig. 6 the dynamics of the two residuals provided by the third observer when mode 3 becomes active at t = 20. The evaluation signal obtained with the uniform-time zeroed residuals allows a faster estimation of the switching law, as compared to its asymptotic counterpart. In Fig. 7 the actual and the reconstructed switching laws are depicted using the two different evaluation signals. CONCLUSIONS The problem of simultaneous continuous and discrete state reconstruction has been tackled for linear autonomous switched systems. The main ingredient of the proposed approach is an appropriate stack of high-order sliding mode observers used both as continuous state observers and as residual generators for discrete mode identification. As a novelty, a procedure has been devised to algebraically process the residuals in order to reconstruct the discrete state after a finite time that can be arbitrarily small, and, additionally, conditions ensuring the identifiability of the system modes are derived in terms of the original system matrices.
An original "projection" procedure has been proposed, leading to the so-called uniform-time zeroed residuals, which allows a faster reconstruction of the active mode as compared to the (however feasible) case when such projection is no adopted. The same observer can be designed in the case of forced switched systems, too. However, further investigation is needed concerning the underlying conditions to univocally reconstruct the discrete state, which would be affected by the chosen input too. ACKOWLEDGMENTS A. Pisano gratefully acknowledges the financial support from the Ecole Centrale de Lille under the 2013 ECL visiting professor program. Fig. 1 .Assumption 2 12 Fig. 1. Observer structure Fig. 2 . 2 Fig. 2. Continuous and discrete state observer Fig. 3 . 3 Fig. 3. Actual switching signal. 3 y2Fig. 4 . 1 ξFig. 5 . 3415 Fig. 4. Estimation vector error e y corresponding to the three observers. 17 17 3 Fig. 6 . 36 Fig. 6. Evaluation signal used to estimate the discrete state. Fig. 7 . 7 Fig. 7. Actual and reconstructed switching signals.
34,063
[ "914410", "170108" ]
[ "432039", "128842", "432039", "128842" ]
01753807
en
[ "sdv" ]
2024/03/05 22:32:10
2018
https://amu.hal.science/hal-01753807/file/Version%20Finale%20favier.pdf
Favier M Pd Bordet Jc Phd MD Alessi Mc Nurden P Md PhD Nurden At Remi Favier email: remi.favier@aphp.fr Heterozygous mutations of the integrin αIIbR995/β3D723 intracytoplasmic salt bridge cause macrothrombocytopenia, platelet functional defects and enlarged α-granules Keywords: inherited macrothrombocytopenia, integrin αIIbβ3, platelet function defects, enlarged α-granules Rare gain-of-function mutations within the ITGA2B or ITGB3 genes have been recognized to cause macrothrombocytopenia (MTP). Here we report three new families with autosomal dominant (AD) MTP, two harboring the same mutation of ITGA2B, αIIb R995W, and a third family with an ITGB3 mutation, β3D723H. The two mutated amino acids are directly involved in the salt bridge linking the intracytoplasmic part of αIIb to that of β3 in the integrin αIIbβ3. For all affected patients, the bleeding syndrome and MTP were mild to moderate. Platelet aggregation tended to be reduced but not absent. Electron microscopy associated with a morphometric analysis revealed large round platelets, a notable feature being the presence of abnormally large α-granules with some giant forms showing signs of fusion. Analysis of the maturation and development of megakaryocytes revealed no defect in their early maturation, but abnormal proplatelet formation was observed with increased size of the tips. Interestingly, this study revealed, in addition to the classical phenotype of patients with αIIbβ3 intracytoplasmic mutations, an abnormal maturation of α-granules. It will be interesting to determine whether this feature is a characteristic of mutations disturbing the αIIbR995/β3D723 salt bridge. INTRODUCTION Integrin αIIbβ3 is the platelet receptor for fibrinogen (Fg) and other adhesive proteins and mediates platelet aggregation, playing a key role in hemostasis and thrombosis. It circulates on platelets in a low-affinity state, becoming ligand-competent as a result of conformational changes induced by "inside-out" signaling following platelet activation [START_REF] Coller | The GPIIb/IIIa (integrin αIIbβ3) odyssey: a technology-driven saga of a receptor with twists, turns, and even a bend[END_REF]. Inherited defects of αIIbβ3 with loss of expression and/or function are causal of Glanzmann thrombasthenia (GT), an autosomal recessive bleeding disorder [START_REF] George | Glanzmann's thrombasthenia: The spectrum of clinical disease[END_REF][START_REF] Nurden | Glanzmann thrombasthenia: a review of ITGA2B and ITGB3 defects with emphasis on variants, phenotypic variability, and mouse models[END_REF]. Rare gain-of-function mutations of the ITGA2B or ITGB3 genes encoding αIIbβ3 also cause macrothrombocytopenia (MTP) with a low platelet count and platelets of increased size [START_REF] Nurden | Glanzmann thrombasthenia: a review of ITGA2B and ITGB3 defects with emphasis on variants, phenotypic variability, and mouse models[END_REF][START_REF] Nurden | Glanzmann thrombasthenia-like syndromes associated with macrothrombocytopenias and mutations in the genes encoding the αIIbβ3 integrin[END_REF].
Mostly heterozygous with autosomal dominant (AD) expression these include D621_E660del*, L718P, L718del and D723H mutations in β3, and G991C, G993del, R995Q or W in αIIb (Table I) [START_REF] Hardisty | A defect of platelet aggregation associated with an abnormal distribution of glycoprotein IIb-IIIa complexes within the platelet: The cause of a lifelong bleeding disorder[END_REF][START_REF] Peyruchaud | R to Q amino acid substitution in the GFFKR sequence of the cytoplasmic domain of the integrin αIIb subunit in a patient with a Glanzmann's thrombasthenia-like syndrome[END_REF][START_REF] Ghevaert | A nonsynonymous SNP in the ITGB3 gene disrupts the conserved membrane-proximal cytoplasmic salt bridge in the αIIbβ3 integrin and cosegregates dominantly with abnormal proplatelet formation and macrothrombocytopenia[END_REF][START_REF] Jayo | L718P mutation in the membrane-proximal cytoplasmic tail of β3 promotes abnormal αIIbβ3 clustering and lipid microdomain coalesce, and associates with a thrombasthenia-like phenotype[END_REF][START_REF] Kunishima | Heterozygous ITGA2B R995W mutation inducing constitutive activation of the αIIbβ3 receptor affects proplatelet formation and causes congenital macrothrombocytopenia[END_REF][START_REF] Kashiwagi | Demonstration of novel gain-of-function mutations of αIIbβ3: association with macrothrombocytopenia and Glanzmann thrombasthenia-like phenotype[END_REF][START_REF] Kobayashi | Identification of the integrin β3 L718P mutation in a pedigree with autosomal dominant thrombocytopenia with anisocytosis[END_REF][START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. While Asp621_Glu660del affects the extracellular cysteine-rich βA domain of β3, the others affect transmembrane or intracellular cytoplasmic domains and in particular the salt bridge linking the negatively charged D723 of β3 with the positively charged R995 of the much studied GFFKR sequence of αIIb [START_REF] Hughes | The conserved membraneproximal region of an integrin cytoplasmic domain specifies ligand-binding affinity[END_REF][START_REF] Hughes | Breaking the integrin hinge. A defined structural constraint regulates integrin signaling[END_REF][START_REF] Kim | Interactions of platelet integrin αIIb and β3 transmembrane domains in mammalian cell membranes and their role in integrin activation[END_REF]. These mutations permit residual or even total αIIbβ3 expression but give rise to conformation changes that propagate through the integrin and which are recognized by binding of the monoclonal antibody, PAC-1 [START_REF] Shattil | Changes in the platelet membrane glycoprotein IIb.IIIa complex during platelet activation[END_REF]. The MTP appears related to cytoskeletal changes during the late stages of megakaryocyte (MK) development and altered proplatelet formation [START_REF] Bury | Outside-in signaling generated by a constitutively activated integrin αIIbβ3 impairs proplatelet formation in human megakaryocytes[END_REF][START_REF] Bury | Cytoskeletal perturbation leads to platelet dysfunction and thrombocytopenia in variant forms of Glanzmann thrombasthenia[END_REF][START_REF] Hauschner | Abnormal cytoplasmic extensions associated with active αIIbβ3 are probably the cause for macrothrombocytopenia in Glanzmann thrombasthenia-like syndrome[END_REF]). 
Yet, while most of the above variants combine MTP with a substantial loss of platelet aggregation and a GT-like phenotype, the D723H β3 substitution had no effect on platelet aggregation and was called a non-synonymous single nucleotide polymorphism (SNP) by the authors [START_REF] Peyruchaud | R to Q amino acid substitution in the GFFKR sequence of the cytoplasmic domain of the integrin αIIb subunit in a patient with a Glanzmann's thrombasthenia-like syndrome[END_REF]. This was surprising as another cytoplasmic domain mutation involving a near-neighbour Arg724Ter truncating mutation in β3, while not preventing αIIbβ3 expression gave a full GT phenotype [START_REF] Wang | Truncation of the cytoplasmic domain of beta3 in a variant form of Glanzmann thrombasthenia abrogates signaling through the integrin alphaIIbbeta3 complex[END_REF]. We recently reported a heterozygous intracytoplasmic β3 Leu718del that resulted in loss of synchronization between the cytoplasmic tails of β3 and αIIb; changes that gave moderate MTP, a reduced platelet aggregation response and, unexpectedly, enlarged α-granules [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. It is in this context that we now report our studies on a second European family with a heterozygous β3 D723H variant as well as the first two families to be described outside of Japan with a heterozygous αIIb R995W substitution. Significantly, not only both of these variants of the αIIbR995/β3D723 salt bridge give rise to moderate MTP and platelet function defects; their platelets also contained enlarged α-granules. CASE HISTORIES We now report three families (A from Reunion island; B and C from France) with inherited MTP transmitted across 2 or 3 generations suggestive of autosomal dominant (AD) inheritance. The family pedigrees are shown in Fig. 1 and the three index cases (AII.1 an adult female, BI.1 and CI.1 adult males) identified. Other family members known to have MTP and significant subpopulations of enlarged platelets are also highlighted. All showed moderate to mild thrombocytopenia and often a higher proportion of immature platelets when analyzed with the Sysmex XE-5000 automat (Sysmex,Villepinte, France) (Table I). Increased mean platelet volumes were observed for BI.1, CI.1, and C1.2 (Table I) but values are not given when the large diameter of many of the platelets meant that they were not taken into account by the machine (particularly so for members of family A with MTP). Other blood cell lineages were usually present for affected family members and all routinely tested coagulation parameters were normal. As quantitated by the ISTH-BAT bleeding score, members of family A with MTP were the most affected (Table I). For example, AII.1 suffered from severe menorrhagia and severe post-partum bleeding requiring platelet and red blood cell transfusions after her second childbirth although two other children were born without problems (including a cesarean section). AII.1 also experienced occasional spontaneous bruising and episodes of iron-deficient anemia of unknown cause. An affected sister also has easy bruising and childbirth was under the cover of platelet transfusion. In family B the index case BI.1 suffered epistaxis but no bleeding has been reported for other family members. No bleeding was seen for the index case (CI.1) in family C despite major surgery following a bomb explosion while working as a war photographer. 
His daughter (CII.1) however experiences mild bleeding with frequent hematomas. Our study was performed in accordance with the declaration of Helsinki after written informed consent, and met with the approved protocol from INSERM (RBM-04-14). METHODS Platelet aggregation Platelet aggregation was tested in citrated platelet-rich plasma (PRP) according to our standard protocols [START_REF] Bluteau | Thrombocytopenia-associated mutations in the ANKRD26 gene regulatory region induce MAPK hyperactivation[END_REF] and compared to PRP from healthy control donors without adjustment of the platelet count. The following agonists were used: 10µM adenosine diphosphate (ADP); 1mM arachidonic acid (AA); 1M U46619, (all from Sigma Aldrich, L'isle d'Abeau, Chesnes, France); 20 µM thrombin receptor activating peptide (TRAP) (Polypeptide Group, Strasbourg, France); 1µg/mL collagen (COL) (Chronolog Corporation, Havertown, USA); 5 M, epinephrine (Sigma), 1.5 mg/mL, and 0.6 mg/mL ristocetin (Helena Biosciences Europe, Elitech, Salon-en-Provence, France). Results were expressed as percentage maximal intensity. Flow cytometric analysis (FCM) Glycoprotein expression on unstimulated platelets was assessed using citrated PRP according to our standard protocols [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF][START_REF] Bluteau | Thrombocytopenia-associated mutations in the ANKRD26 gene regulatory region induce MAPK hyperactivation[END_REF]. On occasion, platelet surface labelling for αIIb, β3, GPIbα and P-selectin was quantified using the PLT Gp/Receptors kit (Biocytex, Marseille, France) at room temperature before and after stimulation with 10µM ADP and 50µM TRAP using the Beckman Coulter Navios flow cytometer (Beckman Coulter, Villepinte, France). Platelets were identified by their light scatter characteristics and their positivity for a PC5 conjugated plateletspecific monoclonal antibody (MoAb) (CD41). An isotype antibody was used as negative control. To study platelet αIIbβ3 activation by flow cytometry, platelets were activated with either 10 μΜ ADP or 20 μΜ TRAP in the presence of FITC-conjugated PAC-1. A fluorescence threshold was set to analyze only those platelets that had bound FITC-PAC1. In brief, an antibody mixture consisting of 40 μl of each MoAb (PAC-1 and CD41) was diluted with 280 μl of PBS. Subsequently 5μl of PRP were mixed with 40μl of the antibody mixture and with 5 μl of either saline or platelet activator. After incubating for 15 min at room temperature in the dark, 1ml of isotonic PBS buffer was added and samples were analyzed. Antibody binding was expressed either as the mean fluorescence intensity or as the percentage of platelets positive for antibody. Transmission electron microscopy (EM) PRP from blood taken into citrate or ACDA anticoagulant was diluted and fixed in PBS, pH 7.2, containing 1.25 %(v/v) glutaraldehyde for 1h as described [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. After centrifugation and two PBS washings, they were post-fixed in 150 mM cacodylate-HCl buffer, pH 7.4, containing 1% osmium tetroxide for 30 min at 4°C. After dehydration in graded alcohol, embedding in EPON was performed by polymerization at 60°C for 72 h. 
Ultrathin sections 70-80 nm thick were mounted on 200-mesh copper grids, contrasted with uranyl acetate and lead citrate and examined using a JEOL JEM1400 transmission electron microscope equipped with a Gatan Orius 600 camera and Digital Micrograph software (Lyon Bio Image, Centre d'Imagerie Quantitative de Lyon Est, France). Morphometric measurements were made using Image J software (National Institutes of Health, USA). Genetic analysis and mutation screening. DNA from AII.1, BI.1 and CI.1 was subjected to targeted exome sequencing (v5-70 Mb) as part of a study of a series of families with MTP due to unknown causes organized within the Paris Trousseau Children's Hospital (Paris, France). Single missense variants known to be pathological for MTP in the ITGA2B and ITGB3 cytoplasmic tails were highlighted and their presence in other family members with MTP was confirmed by Sanger sequencing (primers are available on request). The absence of other potentially pathological variants in genes known to be causal of MTP in the targeted exome sequencing analysis was confirmed. In silico models to investigate αIIbβ3 structural changes induced by the mutations were obtained using the PyMOL Molecular Graphics System, version 1.3, Schrödinger, LLC (www.pymol.org) and 2k9j pdb files for transmembrane and cytosolic domains as described in our previous publications [START_REF] Nurden | Glanzmann thrombasthenia: a review of ITGA2B and ITGB3 defects with emphasis on variants, phenotypic variability, and mouse models[END_REF][START_REF] Nurden | Glanzmann thrombasthenia-like syndromes associated with macrothrombocytopenias and mutations in the genes encoding the αIIbβ3 integrin[END_REF][START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. Amino acid changes are visualized in the rotamer form showing side change orientations incorporated from the Dunbrack Backbone library with maximum probability. In vitro MK differentiation, ploidy analyses, quantification of proplatelets and immunofluorescence analysis Plasma thrombopoietin (TPO) levels were measured as previously described [START_REF] Bluteau | Thrombocytopenia-associated mutations in the ANKRD26 gene regulatory region induce MAPK hyperactivation[END_REF]. Patient or control CD34 + cells were isolated using an immunomagnetic beads technique (Miltenyi, Biotec, France) and grown supplemented with 10 ng/mL TPO (Kirin Brewery, Tokyo, Japan) and 25 ng/mL Stem Cell Factor (SCF; Biovitrum AB, Stockholm, Sweden). (i) Ploidy analyses. At day 10, Hoechst 33342 dye (10 μg/mL; Sigma-Aldrich, Saint Quentin Fallavier, France) was added to the medium of cultured MKs for 2 h at 37°C. Cells were then stained with directly coupled MoAbs: anti-CD41-phycoerythrin and anti-CD42aallophycocyanin (BD Biosciences, Le Pont de Claix, France) for 30 min at 4°C. Ploidy was measured in the CD41 + CD42 + cell population by means of an Influx flow cytometer (BD; Mountain View,USA) and calculated as previously described [START_REF] Bluteau | Thrombocytopenia-associated mutations in the ANKRD26 gene regulatory region induce MAPK hyperactivation[END_REF]. (ii) Quantification of proplatelet-bearing MKs. To evaluate the percentage of MKs forming proplatelets (PPTs) in liquid medium, CD41 + cells were sorted at day 6 of culture and plated in 96-well plates at a concentration of 2000 cells per well in serum-free medium in the presence of TPO (10 ng/mL). 
MKs displaying PPTs were quantified between day 11 and 13 of culture by enumerating 200 cells per well using an inverted microscope (Carl Zeiss, Göttingen, Germany) at a magnification of ×200. MKs displaying PPTs were defined as cells exhibiting ≥1 cytoplasmic process with constriction areas and were analyzed in triplicate in two independent experiments for each individual. (iii) Fluorescence microscopy. Primary MKs grown in serum-free medium were allowed to spread for 1 h at 37 °C on 100 µg/mL fibrinogen (Sigma Aldrich, Saint Quentin Fallavier, France) coated slides, then fixed in 4% paraformaldehyde (PFA), washed and permeabilized for 5 min with 0.2% Triton-X100 and washed with PBS prior to being incubated with rabbit anti-VWF antibody (Dako, Les Ulis, France) for 1 h, followed by incubation with Alexa 546-conjugated goat anti-rabbit immunoglobulin G (IgG) for 30 min and Phalloidin-FITC (Molecular Probes, Saint Aubin, France). Finally, slides were mounted using Vectashield with 4′,6-diamidino-2-phenylindole (Molecular Probes, Saint Aubin, France). The PPT-forming MKs (cells expressing VWF) were examined under a Leica DMI 4000, SPE laser scanning microscope (Leica Microsystems, France) with a 63×/1.4 numeric aperture oil objective. RESULTS Molecular genetic analysis: We describe 3 previously unreported families, one based in Reunion Island and the others in France, with inherited MTP and mild to moderate bleeding. Targeted exome sequencing revealed heterozygous missense mutations of residues that compose the platelet αIIbR995/β3D723 intracytoplasmic salt bridge whose loss is integral to integrin signaling. Probands AII.1 and BI.1 have the αIIbR995W variant previously identified in Japanese families with MTP [START_REF] Kunishima | Heterozygous ITGA2B R995W mutation inducing constitutive activation of the αIIbβ3 receptor affects proplatelet formation and causes congenital macrothrombocytopenia[END_REF]. In contrast, CI.1 possesses β3D723H, originally described as a nonsynonymous SNP and associated with MTP in a UK family [START_REF] Ghevaert | A nonsynonymous SNP in the ITGB3 gene disrupts the conserved membrane-proximal cytoplasmic salt bridge in the αIIbβ3 integrin and cosegregates dominantly with abnormal proplatelet formation and macrothrombocytopenia[END_REF]. Sanger sequencing confirmed the presence of both variants and showed that their expression segregated with MTP in the family members available for genetic analysis (see Fig. 1 for families A, B and C) and was absent from the subjects AIII.1, BI.2 and CII.2A, who have a normal platelet count. The structural effect of the mutations was studied using the sculpting function incorporated in the PyMol in silico modeling program (see Methods); the images in Fig. 1 show the transmembrane and cytoplasmic domain segments of αIIb (blue) and β3 (green). The interactions creating the inner membrane association clasp are highlighted for wild type αIIbβ3 in dashed circles, with the positive αIIbR995 and negative β3D723 represented as sticks. Both substitutions result in steric interference, especially when β3D723 is replaced by the larger H. The substitutions of αIIbR995 with the neutral W or β3D723 with the positive H necessarily weaken or abrogate the salt bridge, potentially leading to a separation of the subunit tails. Secondary influences also extend to other membrane proximal amino acids in π interactions shown as sticks and transparent spheres (see Discussion). Platelet aggregation and flow cytometry analysis: Citrated PRP from each index case was stimulated with ADP, TRAP, AA and collagen and platelet aggregation measured using standard procedures (Fig. 2A). Results were variable; taking the curves obtained with ristocetin as a control for the low platelet count of each index case, platelets from each family with the αIIbR995 and β3D723 variants retained at least a partial aggregation response, with family AII.1 showing the largest loss, particularly for TRAP. The response to epinephrine was also much reduced or absent for all samples (data not illustrated). Striking was the low response to AA for patients CI.1 and CII.2, a finding reversed on addition of the thromboxane A2 analog U46619 (data not shown). Otherwise, the platelets retained a rapid response to ADP and collagen.
Flow cytometry with MoAbs recognizing determinants specific for αIIb, β3 or the αIIbβ3 complex (data not shown) gave comparable results for each index case, with surface levels for the 3 index cases ranging from 48 to 75% of those on normal platelets (Fig. 2B). Taking into account the increased platelet size, such intermediate levels would suggest that both mutations have a direct influence on αIIbβ3 expression. Enigmatically, the platelet expression of GPIb was particularly increased for the 4 tested family members (AII.1, AII.2, BI.1, BII.2) with the αIIbR995W mutation, a finding only partially explained by the increased platelet volume of these patients with MTP. Binding of PAC-1, recognizing an activation-dependent epitope on αIIbβ3, was analyzed as a probe of the activation state of the integrin. Spontaneous binding of PAC-1 was seen for the platelets of index case AII.1 with the αIIbR995W mutation, suggesting signs of activation, but was not seen for the index case of the second family with this mutation or for the index case of the family with β3D723H (Fig. 2C). Studies were extended to platelets stimulated with high doses of ADP and TRAP; increased binding was seen for AII.1, consistent with further activation of the residual surface αIIbβ3 of the platelets. However, no binding was seen for BI.1 or CI.1, suggesting that for these patients the residual αIIbβ3 was refractory to stimulation under the non-stirred conditions of this set of experiments (Fig. 2B). Fig. 2. Selected biological platelet findings for the three index cases. In A) Light transmission aggregometry performed in citrated platelet-rich plasma (PRP) compares typical responses of platelets from the index cases (AII.1, BI.1, CI.1) to that of a typical control donor. For AII.1, aggregation with high doses of ADP, Col, TRAP and AA was reduced compared to ristocetin-induced platelet agglutination, whose intensity reflected the low platelet count of the patient. For BI.1 platelet aggregation was moderately reduced with Col and TRAP, while for CI.1 platelet aggregation was reduced essentially for TRAP and AA (it should be noted that it was restored with the thromboxane receptor agonist U46619, not shown). In B) Spontaneous PAC1 binding evaluated by flow cytometry on resting platelets was marginally increased for AII.1 but not for the other index cases. Binding increased for AII.1 after platelet activation with ADP and TRAP but remained low compared to the control. In contrast, PAC-1 binding was basal for BI.1 and CI.1 even after addition of ADP or TRAP. In C) we illustrate the levels of GPIb and αIIbβ3 receptors evaluated by flow cytometry not only for the probands but also for other selected family members. A decreased surface expression of αIIb and β3 was found for all affected patients, values ranging between 43% (AII.2) and 70% (BII.2) of the control mean. Interestingly, levels of GPIb were increased for the patients, and particularly so for families A and B, with values sometimes beyond 150% of normal. Electron microscopy: Platelets from the index cases of all 3 families were examined by transmission EM and for each subject a significant subpopulation of the platelets was larger than normal (Fig 3). Overall, the platelets showed wide size variations; many tended to be round in contrast to the discoid shape of controls (control platelets are illustrated by us in ref [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]).
Patients from all index cases possessed platelets with large variations in the numbers of vacuoles. Striking was a heterogeneous distribution of α-granules with the presence of giant forms, particularly evident for patient AII.1, and what appears to represent granule fusion was seen (highlighted in panel 3b). All of the morphological changes were analyzed quantitatively; statistical significance was achieved for all measurements except for those concerning α-granule numbers of patient CI.1 with the β3D723H substitution (Fig. 3). Please note that the platelets of the patients with the αIIbR995W mutation tended to be larger and to show more ultrastructural changes. Interestingly, the greater the frequency of giant granules, the lower their concentration per µm². The presence of giant α-granules repeats what we have recently seen for a patient with a heterozygous β3 L718del and is reminiscent of the feature that we have much earlier described in Paris Trousseau syndrome (12, 22, and 23). Targeted exome sequencing failed to show mutations in the FLI1 gene of the three index cases. Megakaryopoiesis: Plasma TPO levels were within the normal range for each of the index cases (Table I). Analysis of MK maturation and development did not reveal any defect in early megakaryocyte maturation. Ploidy was measured in the CD41+CD42+ cell population at day 10 of culture by means of an Influx flow cytometer, with the proportion of 2N-32N MKs being within the normal range (Fig. 4). Proplatelet formation was examined on days 11 and 12 of culture using an inverted microscope and no difference in the percentage of proplatelet-bearing MKs was detected. Proplatelet morphology was analyzed at the same time using a SPE laser scanning microscope after dual fluorescent labeling of PFA-permeabilized cells with Phalloidin and antibody to VWF (green) (Fig. 4). While the mature MKs basically showed normal morphology, proplatelet numbers tended to be lower and some extensions appeared swollen and with decreased branching. Another finding was that the size of the tips and bulges occurring at intervals along the proplatelets tended to be larger than for control MKs, and especially so for the two index cases with αIIbR995W (AII.1 and BI.1); an image of a giant granule can be observed in an illustrated extension of AII.1 (Fig. 4, yellow arrow). DISCUSSION In the resting state, the trans-membrane and intra-cytoplasmic segments of the two subunits of αIIbβ3 interact, an interaction that is key to maintaining the extracellular domain of the integrin in its bent resting state [START_REF] Kim | Interactions of platelet integrin αIIb and β3 transmembrane domains in mammalian cell membranes and their role in integrin activation[END_REF][START_REF] Litinov | Activation of individual alphaIIbbeta3 integrin molecules by disruption of transmembrane domain interactions in the absence of clustering[END_REF]. One area of contact between the cytoplasmic tails involves π interactions and aromatic cycle stacking of consecutive F residues within the highly conserved αIIb GFFKR (aa991-995) sequence with W713 of β3 (shown in Fig. 1). A second interaction principally involves a salt bridge between the positively charged αIIbR995 and negatively charged β3D723 [START_REF] Hughes | The conserved membraneproximal region of an integrin cytoplasmic domain specifies ligand-binding affinity[END_REF][START_REF] Hughes | Breaking the integrin hinge.
A defined structural constraint regulates integrin signaling[END_REF][START_REF] Kim | Interactions of platelet integrin αIIb and β3 transmembrane domains in mammalian cell membranes and their role in integrin activation[END_REF]. Early studies including site-directed mutagenesis, truncation models and charge reversal mutations showed that loss of this intra-molecular clasp led to integrin activation and modified function [START_REF] Hughes | The conserved membraneproximal region of an integrin cytoplasmic domain specifies ligand-binding affinity[END_REF][START_REF] Hughes | Breaking the integrin hinge. A defined structural constraint regulates integrin signaling[END_REF]. Hence the mutations described in our patients are of high significance for integrin biology. Enigmatically, the β3D723H change has the more pronounced structural effect, resulting in (i) repulsive electrical charge forces, with the positively charged H now facing the positively charged R995, and (ii) steric encumbrance due to the larger H. The net result is a widening of the interval between R995 and H723 and a weakening of the salt bridge, changes that accompany the acquisition of a higher affinity state [START_REF] Adair | Three-dimensional model of the human platelet integrin alphaIIbbeta3 based on electron cryomicroscopy and x-ray crystallography[END_REF]. Of similar consequence but milder in nature is the replacement of αIIbR995 by the neutral W, while both mutations potentially also interfere with π interactions involving αIIbF992. A novel feature of our study is the presence of enlarged α-granules in the platelets of all 3 index cases. This is of interest for we have recently reported enlarged α-granules for a patient with MTP associated with a β3 L718del resulting in loss of synchronization between opposing amino acids of the αIIb and β3 cytoplasmic tails and a weakening of the αIIbR995/β3D723 salt bridge (Table S1) [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. The presence of enlarged α-granules in platelets of patients from 3 unrelated families with MTP linked to cytoplasmic tail mutations in ITGA2B or ITGB3 in our current study strongly suggests that these mutations have a role in the ultrastructural changes, the changes being most marked for the αIIbR995W mutation. But further studies will be required to define this role and rule out secondary genetic variants in linkage disequilibrium with the primary mutations. Classically, enlarged α-granules are a consistent feature of the Paris-Trousseau syndrome, first being seen on stained blood smears and then confirmed by EM [START_REF] Breton-Gorius | A new congenital dysmegakaryopoietic thrombocytopenia (Paris-Trousseau) associated with giant platelet α-granules and chromosome 11 deletion at 11q23[END_REF][START_REF] Favier | Paris-Trousseau syndrome: clinical, hematological, molecular data of ten cases[END_REF]. Paris-Trousseau syndrome results from genetic variants and haplodeficiency of the FLI1 transcription factor [START_REF] Favier | Progress in understanding the diagnosis and molecular genetics of macrothrombocytopenia[END_REF], variants that were absent from our families when studied by targeted exome sequencing.
For our previously studied patient with the L718del, immunogold labeling and EM clearly showed the association of P-selectin, αIIbβ3 and fibrinogen with the giant α-granules, suggesting a normal initial granule biosynthesis [START_REF] Nurden | An intracytoplasmic β3 Leu718 deletion in a patient with a novel platelet phenotype[END_REF]. Whether the giant granules are formed as part of the secretory pathway, as has been proposed for normal platelets [START_REF] Eckly | Respective contributions of single and compound granule fusion to secretion by activated platelets[END_REF], or are perhaps the consequence of premature apoptosis remains a subject for further study. In this context it would be interesting to know if they are more abundant in aging platelets and at what stage they appear in maturing MKs and/or during platelet biogenesis. Preliminary immunofluorescence studies showed what appeared to be giant granules in the proplatelets of cultured megakaryocytes from AII.1. It is also important to know if enlarged α-granules have been overlooked in previously published cases of cytoplasmic tail mutations affecting αIIbβ3 or are restricted to certain cases. In parallel, we also keep in mind that FLI-1, together with other transcription factors, directly coregulates ITGA2B and ITGB3 [START_REF] Tijssen | Genome-wide analysis of simultaneous GATA1/2, RUNX1, FLI1, and SCL binding in megakaryocytes identifies hematopoietic regulators[END_REF]. The genotypes and phenotypes of previously published cases associating cytoplasmic tail mutations of αIIb or β3 and MTP are compared in Table S1. There is much phenotypic variability but, as in our cases, all give rise to a mild to moderate thrombocytopenia and platelet size variations including giant forms. Inheritance is AD when family studies permit this conclusion, although two reports associate single allele missense mutations of the αIIb cytoplasmic tail with a second and different mutation causing loss of expression of the second allele [START_REF] Peyruchaud | R to Q amino acid substitution in the GFFKR sequence of the cytoplasmic domain of the integrin αIIb subunit in a patient with a Glanzmann's thrombasthenia-like syndrome[END_REF][START_REF] Kashiwagi | Demonstration of novel gain-of-function mutations of αIIbβ3: association with macrothrombocytopenia and Glanzmann thrombasthenia-like phenotype[END_REF]. Such a loss may exaggerate the effect of single allele missense mutations in these cases. In all but one of the published cases bleeding was mild to moderate or was absent, and our cases follow this pattern. Platelet aggregation was never totally abrogated but tended to occur more slowly with a reduced final intensity, as was largely the case for families A and B in our study. Platelets of family C (β3D723H) retained a good aggregation response, a finding in agreement with the report on the UK family with the same mutation [START_REF] Ghevaert | A nonsynonymous SNP in the ITGB3 gene disrupts the conserved membrane-proximal cytoplasmic salt bridge in the αIIbβ3 integrin and cosegregates dominantly with abnormal proplatelet formation and macrothrombocytopenia[END_REF]. For families A, B and C intermediate levels of αIIbβ3 were shown at the surface, results again consistent with many of the literature reports (Table S1).
It is noteworthy that the platelet aggregation response of obligate heterozygotes for classic type I GT is normal [START_REF] George | Glanzmann's thrombasthenia: The spectrum of clinical disease[END_REF], suggesting that, despite the influence of the low platelet count, the cytoplasmic domain mutations have a direct effect on the platelet aggregation response. Interestingly, a low platelet surface αIIbβ3 expression was shown to be associated with normal internal pools of αIIbβ3 in patients with αIIbR995Q and αIIbR995W substitutions, suggesting defects in integrin recycling [START_REF] Hardisty | A defect of platelet aggregation associated with an abnormal distribution of glycoprotein IIb-IIIa complexes within the platelet: The cause of a lifelong bleeding disorder[END_REF][START_REF] Kashiwagi | Demonstration of novel gain-of-function mutations of αIIbβ3: association with macrothrombocytopenia and Glanzmann thrombasthenia-like phenotype[END_REF]. Our results for family C with intermediate platelet surface levels of αIIbβ3 differed from the results for the UK family where αIIbβ3 expression was normal [START_REF] Ghevaert | A nonsynonymous SNP in the ITGB3 gene disrupts the conserved membrane-proximal cytoplasmic salt bridge in the αIIbβ3 integrin and cosegregates dominantly with abnormal proplatelet formation and macrothrombocytopenia[END_REF]. Expression of the adhesion receptor GPIb was increased on the platelets of our index cases, and especially so for the two with the αIIbR995W variant, a finding that was previously observed for Japanese cases with the same mutation [START_REF] Kunishima | Heterozygous ITGA2B R995W mutation inducing constitutive activation of the αIIbβ3 receptor affects proplatelet formation and causes congenital macrothrombocytopenia[END_REF]. The reason for this is not known but could reflect altered megakaryopoiesis. A feature of cytoplasmic domain mutations causal of MTP is that long-range conformational changes extend to the functional domains of the integrin and give what is often termed a partially activated state (6-12) (Table S1). This was indeed shown by Hughes et al [START_REF] Hughes | The conserved membraneproximal region of an integrin cytoplasmic domain specifies ligand-binding affinity[END_REF][START_REF] Hughes | Breaking the integrin hinge. A defined structural constraint regulates integrin signaling[END_REF] who expressed αIIbβ3 in CHO cells after modifying residues of the salt bridge through site-directed mutagenesis. While the changes permit binding of the activation-dependent IgM MoAb PAC-1, only for one report has spontaneous binding of Fg been observed for this class of mutation [START_REF] Kobayashi | Identification of the integrin β3 L718P mutation in a pedigree with autosomal dominant thrombocytopenia with anisocytosis[END_REF]. These results therefore differ from the C560R mutation in the β3 cysteine-rich β (A) extracellular domain as reported for a French patient whose platelets circulated with αIIbβ3-bound Fg [START_REF] Ruiz | A point mutation in the cysteine-rich domain of glycoprotein (GP) IIIa results in the expression of a GPIIb-IIIa (alphaIIbbeta3) integrin receptor locked in a high-affinity state and a Glanzmann thrombasthenia-like phenotype[END_REF].
The conformational changes permitting spontaneous PAC-1 binding but only rarely Fg binding remain to be defined, although αIIbβ3 clustering remains a potential explanation [START_REF] Jayo | L718P mutation in the membrane-proximal cytoplasmic tail of β3 promotes abnormal αIIbβ3 clustering and lipid microdomain coalesce, and associates with a thrombasthenia-like phenotype[END_REF]. The activation state of αIIbβ3 is also often greater in transfected heterologous cells than for platelets of the patients themselves, perhaps due to abnormal recycling and concentration of the mutated integrin in internal pools. Unexpectedly, variable or no PAC-1 binding in our patients was seen after stimulation with TRAP despite these patients showing a residual aggregation response in citrated PRP. This apparent contradiction is possibly related to the non-stirred conditions of the in vitro PAC-1 binding experiments. As patients from family C showed a markedly abnormal response to AA, a role for thromboxane A2 generation in αIIbβ3 activation merits investigation. Previous studies have examined MK maturation in culture but have largely been performed for patients with a 40 amino acid deletion (p.647-686) in the β3 extracellular β-tail domain causal of MTP [START_REF] Bury | Outside-in signaling generated by a constitutively activated integrin αIIbβ3 impairs proplatelet formation in human megakaryocytes[END_REF][START_REF] Bury | Cytoskeletal perturbation leads to platelet dysfunction and thrombocytopenia in variant forms of Glanzmann thrombasthenia[END_REF][START_REF] Hauschner | Abnormal cytoplasmic extensions associated with active αIIbβ3 are probably the cause for macrothrombocytopenia in Glanzmann thrombasthenia-like syndrome[END_REF]. Among the changes that were noted were (i) fewer proplatelets and (ii) tips of larger size; changes associated with abnormal MK spreading on Fg, a disordered actin distribution and cytoskeletal defects seemingly linked to a sustained "outside-in" signaling induced by the constitutively active αIIbβ3 [START_REF] Bury | Outside-in signaling generated by a constitutively activated integrin αIIbβ3 impairs proplatelet formation in human megakaryocytes[END_REF][START_REF] Bury | Cytoskeletal perturbation leads to platelet dysfunction and thrombocytopenia in variant forms of Glanzmann thrombasthenia[END_REF][START_REF] Hauschner | Abnormal cytoplasmic extensions associated with active αIIbβ3 are probably the cause for macrothrombocytopenia in Glanzmann thrombasthenia-like syndrome[END_REF]. Analysis of megakaryopoiesis for our patients did not reveal a defect in MK maturation or in ploidy but confirmed the above studies and previous studies on MKs from Japanese families with αIIbR995W (9) or the UK family with the β3D723H defect, in that abnormal proplatelet formation occurred, with decreased branching and with bulges of increased size at the tips. This defect was quite similar between our patients even if the size of the tips seemed larger for the patients with αIIbR995W. The relationship between defects in αIIbβ3 complexes and changes in α-granule size remains to be determined. VWF-labelled granules of increased size were already detected in proplatelets in AII.1 and, interestingly, for the three patients studied, the larger the platelet surface area, the larger the α-granule diameter, suggesting that the defect responsible for increased platelet size also contributes to the determination of α-granule size.
The cause of the apparent fusion or coalescing of granules as a mechanism of forming the giant granules, observed not only for this patient but also for the β3 Leu718 deletion (12), merits further study. The fact that the mutations modify the salt bridge between the positively charged αIIbR995 and the negatively charged β3D723 is particularly intriguing. Our results therefore support a generalized hypothesis where mutations within the αIIb or β3 cytoplasmic domains somehow lead to a facilitated MK surface interaction with stromal proteins in the marrow medullary compartment that in turn promotes cytoskeletal changes that not only lead to altered proplatelet formation and platelet biogenesis but also, at least on occasion, to an altered α-granule maturation. Footnote * Human Genome Variation Society nomenclature for the αIIb and β3 mature proteins is used in this study. Acknowledgment: the authors thank N. Saut for her technical expertise for sequencing. Fig. 1. Genetic analysis and structural in silico modeling of the αIIb R995W and the β3 D723H variants. Fig. 3. Transmission electron microscopy of platelets from the index cases of each family. Fig. 4. In vitro derived MK differentiation.
42,570
[ "12718", "776518" ]
[ "180118", "456973", "527021", "205264", "456973", "82929", "457279", "456973", "180118", "527021", "206663", "456973", "171298" ]
01577826
en
[ "phys" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01577826/file/main.pdf
Physics of muscle contraction In this paper we report, clarify and broaden various recent efforts to complement the chemistry-centered models of force generation in (skeletal) muscles by mechanics-centered models. The physical mechanisms of interest can be grouped into two classes: passive and active. The main passive effect is the fast force recovery which does not require the detachment of myosin cross-bridges from actin filaments and can operate without a specialized supply of metabolic fuel (ATP). In mechanical terms, it can be viewed as a collective folding-unfolding phenomenon in the system of interacting bi-stable units and modeled by near equilibrium Langevin dynamics. The parallel active force generation mechanism operates at slow time scales, requires detachment and is crucially dependent on ATP hydrolysis. The underlying mechanical processes take place far from equilibrium and are represented by stochastic models with broken time reversal symmetry implying non-potentiality, correlated noise or multiple reservoirs. The modeling approaches reviewed in this paper deal with both active and passive processes and support from the mechanical perspective the biological point of view that phenomena involved in slow (active) and fast (passive) force generation are tightly intertwined. They reveal, however, that biochemical studies in solution, macroscopic physiological measurements and structural analysis do not provide by themselves all the necessary insights into the functioning of the organized contractile system. In particular, the reviewed body of work emphasizes the important role of long-range interactions and criticality in securing the targeted mechanical response in the physiological regime of isometric contractions. The importance of the purely mechanical micro-scale modeling is accentuated at the end of the paper where we address the puzzling issue of the stability of muscle response on the so called " descending limb" of the isometric tetanus. Introduction In recent years considerable attention has been focused on the study of the physical behavior of cells and tissues. Outside their direct physiological functionality, these biological systems are viewed as prototypes of new artificially produced materials that can actively generate stresses, adjust their rheology and accommodate loading through remodeling and growth. The intriguing mechanical properties of these systems can be linked to hierarchical structures which bridge a broad range of scales, and to expressly nonlocal interactions which make these systems reminiscent more of structures and mechanisms than of a homogeneous matter. In contrast with traditional materials, where microscopic dynamics can be enslaved through homogenization and averaging, diverse scales in cells and tissues appear to be linked by complex energy cascades. To complicate matters further, in addition to external loading, cells and tissues are driven internally by endogenous mechanisms supplying energy and maintaining non-equilibrium. The multifaceted nature of the ensuing mechanical responses makes the task of constitutive modeling of such distributed systems rather challenging [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. While general principles of active bio-mechanical response of cells and tissues still remain to be found, physical understanding of some specific sub-systems and regimes has been considerably improved in recent years. 
An example of a class of distributed biological systems whose functioning has been rather thoroughly characterized on both physiological and bio-chemical levels is provided by skeletal (striated) muscles [16][17][18][START_REF] Mcmahon | Muscles, reflexes and locomotion[END_REF][START_REF] Nelson | Biological Physics Energy, Information, Life[END_REF][START_REF] Epstein | Theoretical models of skeletal muscle: biological and mathematical considerations[END_REF][START_REF] Rassier | Striated Muscles: From Molecules to Cells[END_REF][START_REF] Sugi | Muscle Contraction and Cell Motility Fundamentals and Developments[END_REF][START_REF] Morel | Molecular and Physiological Mechanisms of Muscle Contraction[END_REF]. The narrow functionality of skeletal muscles is behind their relatively simple, almost crystalline geometry which makes them a natural first choice for systematic physical modeling. The main challenge in the representation of the underlying microscopic machinery is to strike the right balance between chemistry and mechanics. In this review, we address only a very small portion of the huge literature on force generation in muscles and mostly focus on recent efforts to complement the chemistry-centered models by the mechanics-centered models. Other perspectives on muscle contraction can be found in a number of comprehensive reviews [START_REF] Close | [END_REF][26][START_REF] Burke | Motor Units: Anatomy, Physiology, and Functional Organization[END_REF][START_REF] Eisenberg | [END_REF][29][30][31][START_REF] Aidley | The physiology of excitable cells 4th ed[END_REF][START_REF] Geeves | [END_REF][34][35][36][37][38]. The physical mechanisms of interest for our study can be grouped into two classes: passive and active. The passive phenomenon is the fast force recovery which does not require the detachment of myosin cross-bridges from actin filaments and can operate without a specialized supply of ATP. It can be viewed as a collective folding-unfolding in the system of interacting bi-stable units and modeled by near equilibrium Langevin dynamics. The active force generation mechanism operates at much slower time scales, requires detachment from actin and is fueled by continuous ATP hydrolysis. The underlying processes take place far from equilibrium and are represented by stochastic models with broken time reversal symmetry implying non-potentiality, correlated noise, multiple reservoirs and other non-equilibrium mechanisms. The physical modeling approaches reviewed in this paper support the biochemical perspective that phenomena involved in slow (active) and fast (passive) force generation are tightly intertwined. They reveal, however, that biochemical studies of the isolated proteins in solution, macroscopic physiological measurements of muscle fiber energetics and structural studies using electron microscopy, X-ray diffraction and spectroscopic methods do not provide by themselves all the necessary insights into the functioning of the organized contractile system. The importance of the microscopic physical modeling that goes beyond chemical kinetics is accentuated by our discussion of the mechanical stability of muscle response on the descending limb of the isometric tetanus (segment of the tension-elongation curve with negative stiffness) [17-19; 39]. An important general theme of this review is the cooperative mechanical response of muscle machinery which defies thermal fluctuations. 
To generate substantial force, individual contractile elements must act collectively and the mechanism of synchronization has been actively debated in recent years. We show that the factor responsible for the cooperativity is the inherent non-locality of the system ensured by a network of cross-linked elastic backbones. The cooperation is amplified because of the possibility to actively tune the internal stiffness of the system towards a critical state where the correlation length diverges. The reviewed body of work clarifies the role of non-locality and criticality in securing the targeted mechanical response of muscle type systems in various physiological regimes. It also reveals that the "unusual" features of muscle mechanics, that one can associate with the idea of allosteric regulation, are generic in biological systems [40][41][42][43] and several non-muscle examples of such behavior are discussed in the concluding section of the paper. Background We start with recalling a few minimally necessary anatomical and biochemical facts about muscle contraction. Skeletal muscles are composed of bundles of non-ramified parallel fibers. Each fiber is a multi-nuclei cell, from 100 µm to 30 cm long and 10 µm to 100 µm wide. It spans the whole length of the tissue. The cytoplasm of each muscle cell contains hundreds of 2 µm wide myofibrils immersed in a network of transverse tubules whose role is to deliver the molecules that fuel the contraction. When activated by the central nervous system the fibers apply tensile stress to the constraints. The main goal of muscle mechanics is to understand the working of the force generating mechanism which operates at the submyofibril scale. The salient feature of the skeletal muscle myofibrils is the presence of striations, a succession of dark and light bands visible under the transmission electron microscope [16]. The 2 µm regions between two Z-disks, identified as half-sarcomeres in Fig. 1, are the main contractile units. As we see in this figure, each half-sarcomere contains smaller structures called myofilaments. The thin filaments, which are 8 nm wide and 1 µm long, are composed of polymerized actin monomers. Their helix structure has a periodicity of about 38 nm, with each monomer having a 5 nm diameter. The thick filaments contain about 300 myosin II molecules per half-sarcomere. Each myosin II is a complex protein with 2 globular heads whose tails are assembled in a helix [44]. The tails of different myosins are packed together and constitute the backbone of the thick filament from which the heads, known as cross-bridges, project outward toward the surrounding actin filaments. The cross-bridges are organized in a 3-stranded helix with a periodicity of 43.5 nm and an axial distance between two adjacent double heads of about 14.5 nm [45]. Figure 1. Schematic representation of a segment of myofibril showing the elementary force generating unit: the half-sarcomere. Z-disks are passive cross-linkers responsible for the crystalline structure of the muscle actin network; M-lines bundle myosin molecules into global active cross-linkers. Titin proteins connect the Z-disks inside each sarcomere. Another important sarcomere protein, whose role in muscle contraction remains ambiguous, is titin. This gigantic molecule is anchored on the Z-disks, spans the whole sarcomere structure and passively controls the overstretching; about its potentially active functions see Refs. [46][47][48][49]. A broadly accepted microscopic picture of muscle contraction was proposed by A.F. Huxley and H.E.
Huxley in the 1950's, see a historical review in Ref. [50]. The development of electron microscopy and X-ray diffraction techniques at that time allowed the researchers to observe the dynamics of the dark and light bands during fiber contraction [51][52][53]. The physical mechanism of force generation was first elucidated in [54], where contraction was explicitly linked to the relative sliding of the myofilaments and explained by a repeated, millisecond-long attachment-pulling interaction between the thick and thin filaments; some conceptual alternatives are discussed in Refs. [START_REF] Pollack | Muscles and molecules-uncovering the principles of biological motion[END_REF][START_REF] Cohen | [END_REF][57]. The sliding-filament hypothesis [53; 58] assumes that during contraction actin filaments move past myosin filaments while actively interacting with them through the myosin cross-bridges. Biochemical studies in solution showed that the actomyosin interaction is powered by the hydrolysis of ATP into ADP and phosphate Pi [59]. The motor part of the myosin head acts as an enzyme which, on one side, increases the hydrolysis reaction rate and, on the other side, converts the released chemical energy into useful work. Each ATP molecule provides 100 zJ (zepto = 10⁻²¹), which is equivalent to ∼ 25 kBT at room temperature, where kB = 1.381 × 10⁻²³ J K⁻¹ is the Boltzmann constant and T is the absolute temperature in K. The whole system remains in permanent disequilibrium because the chemical potentials of the reactant (ATP) and the products of the hydrolysis reaction (ADP and Pi) are kept out of balance by a steadily operating exterior metabolic source of energy [16; 17; 60]. The stochastic interaction between individual myosin cross-bridges and the adjacent actin filaments includes, in addition to cyclic attachment of myosin heads to actin binding sites, a concurrent conformational change in the core of the myosin catalytic domain (of folding-unfolding type). A lever arm amplifies this structural transformation producing the power stroke, which is the crucial part of a mechanism allowing the attached cross-bridges to generate macroscopic forces [16; 17]. Figure 2. Representation of the Lymn-Taylor cycle, where each mechanical state (1 → 4) is associated with a chemical state (M-ADP-Pi, A-M-ADP-Pi, A-M-ADP and M-ATP). During one cycle, the myosin motor executes one power-stroke and splits one ATP molecule. A basic biochemical model of the myosin ATPase reaction in solution, linking together the attachment-detachment and the power stroke, is known as the Lymn-Taylor (LT) cycle [59]. It incorporates the most important chemical states, known as M-ATP, A-M-ADP-Pi, A-M-ADP and A-M, and associates them with particular mechanical configurations of the actomyosin complex, see Fig. 2. The LT cycle consists of 4 steps [17; 35; 62; 63]: (i) 1→2 Attachment. The myosin head (M) is initially detached from actin in a pre-power stroke configuration. ATP is in its hydrolyzed form ADP+Pi, which generates a high affinity to actin binding sites (A). The attachment takes place while the conformational mechanism is in the pre-power stroke state. (ii) 2→3 Power-stroke. Conformational change during which the myosin head executes a rotation around the binding site accompanied by a displacement increment of a few nm and a force generation of a few pN. During the power stroke, phosphate (Pi) is released. (iii) 3→4 Detachment.
Separation from the actin filament occurs after the power stroke is completed, while the myosin head remains in its post power stroke state. Detachment coincides with the release of the second hydrolysis product ADP, which considerably destabilizes the attached state. As the myosin head detaches, a fresh ATP molecule is recruited. (iv) 4→1 Re-cocking (or repriming). ATP hydrolysis provides the energy necessary to recharge the power stroke mechanism. While this basic cycle has been complicated progressively to match an increasing body of experimental data [64][65][66][67][68], the minimal LT description is believed to be irreducible [69]. However, its association with microscopic structural details and relation to specific micro-mechanical interactions remain a subject of debate [70][71][START_REF] Sugi | Evidence for the essential role of myosin head lever arm domain and myosin subfragment-2 in muscle contraction Skeletal Muscle -From Myogenesis to Clinical Relations[END_REF]. Another complication is that the influence of mechanical loading on the transition rates, which is practically impossible to simulate in experiments on isolated proteins, remains unconstrained by the purely biochemical models. An important feature of the LT cycle, which appears to be loading independent, is the association of vastly different timescales to individual biochemical steps, see Fig. 2. For instance, the power stroke, taking place at the ∼1 ms time scale, is the fastest step. It is believed to be independent of ATP activity, which takes place on a time scale that is orders of magnitude slower, 30-100 ms [67; 73]. The rate limiting step of the whole cycle is the release of ADP with a characteristic time of ∼ 100 ms, which matches the rate of tension rise in an isometric tetanus. Mechanical response 1.2.1. Isometric force and isotonic shortening velocity. A typical experimental setup for measuring the mechanical response of a muscle fiber involves a motor and a force transducer between which the muscle fiber is mounted. The fiber is maintained in an appropriate physiological solution and is electro-stimulated. When the distance between the extremities of the fibers is kept constant (length clamp or hard device loading), the fully activated (tetanized) fiber generates an active force called the isometric tension T0 which depends on the sarcomere length L [77; 78]. The measured "tension-elongation" curve T0(L), shown in Fig. 3(a), reflects the degree of filament overlap in each half-sarcomere. At small sarcomere lengths (L ∼ 1.8 µm), the isometric tension level increases linearly as the detrimental overlap (frustration) diminishes. Around L = 2.1 µm, the tension reaches a plateau Tmax, the physiological regime, where all available myosin cross-bridges have a possibility to bind the actin filament. The descending limb corresponds to regimes where the optimal filament overlap progressively reduces (see more about this regime in Section 5). One of the main experiments addressing the mechanical behavior of skeletal muscles under applied forces (load clamp or soft loading device) was conducted by A.V. Hill [79], who introduced the notion of the "force-velocity" relation. First, the muscle fiber was stimulated under isometric conditions producing a force T0. Then the control device was switched to the load clamp mode and a load step was applied to the fiber, which shortened (or elongated) in response to the new force level. After a transient [80] the system reached a steady state where the shortening velocity could be measured.
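For orientation, steady-state data of this kind are commonly summarized by Hill's phenomenological hyperbolic relation (a descriptive fit rather than a result of the microscopic models reviewed below). Writing T for the load, v for the shortening velocity and T0 for the isometric tension, it reads (T + a)(v + b) = (T0 + a) b, where a and b are positive constants obtained by fitting; for skeletal muscle the ratio a/T0 is typically of the order of 0.25. Setting T = 0 gives the maximal (unloaded) shortening velocity vmax = b T0/a, while v = 0 recovers the isometric point T = T0.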
A different protocol producing essentially the same result was used in Ref. [81], where a ramp shortening (or stretch) was applied to a fiber in length clamp mode and the tension measured at a particular stage of the time response. Note that in contrast to the case of passive friction, the active force-velocity relation for tetanized muscle enters the quadrant where the dissipation is negative, see Fig. 3(b). Fast isometric and isotonic transients. The mechanical responses characterized by the tension-elongation relation and the force-velocity relation are associated with timescales of the order of 100 ms. To shed light on the processes at the millisecond time scale, fast load clamp experiments were performed in Refs. [82][83][84]. Length clamp experiments were first conducted in Ref. [74], where a single fiber was mounted between a force transducer and a loudspeaker motor able to deliver length steps completed in 100 µs. More specifically, after the isometric tension was reached, a length step δL (measured in nanometers per half-sarcomere, nm hs⁻¹) was applied to the fiber, with a feedback from a striation follower device that allowed control of the step size per sarcomere, see Fig. 4(a). Such experimental protocols have since become standard in the field [76; 85-88]. The observed response could be decomposed into 4 phases: (0 → 1) from 0 to about 100 µs (phase 1). The tension (respectively sarcomere length) is altered simultaneously with the length step (respectively force step) and reaches a level T1 (respectively L1) at the end of the step. The values T1 and L1 depend linearly on the loading (see Fig. 5, circles), and characterize the instant elastic response of the fiber. Various T1 and L1 measurements in different conditions allow one to link the instantaneous elasticity with different structural elements of the sarcomere, in particular to isolate the elasticity of the cross-bridges from the elasticity of passive structures such as the myofilaments [89][90][91]. (1 → 2) from about 100 µs to about 3 ms (phase 2). In length clamp experiments, the tension is quickly recovered up to a plateau level T2 close to but below the original level T0; see Fig. 4(a) and open squares in Fig. 5. Such quick recovery is too fast to engage the attachment-detachment processes and can be explained by the synchronized power stroke involving the attached heads [74]. For small step amplitudes δL, the tension T2 is practically equal to the original tension T0, see the plateau on the T2 vs. elongation relation in Fig. 5. In load clamp experiments, the fiber shortens or elongates towards the level L2, see filled squares in Fig. 5. Note that in Fig. 5, the measured L2 points overlap with the T2 points except that the plateau appears to be missing. In load clamp the value of L2 at loads close to T0 has been difficult to measure because of the presence of oscillations [92]. At larger steps, the tension T2 starts to depend linearly on the length step because the power stroke capacity of the attached heads has been saturated. Figure 4 (partial caption): ..., the processes associated with the passive power stroke (2) and the ATP-driven approach to steady state (3)(4). Data are adopted from Refs. [74][75][76]. Figure 5. Tension-elongation relation reflecting the state of the system at the end of phase 1 (circles) and phase 2 (squares) in both length clamp (open symbols) and force clamp (filled symbols). Data are taken from Refs. [80; 85; 87; 93-95].
(2 → 3 → 4) In length clamp transients, after ∼ 3 ms the tension rises slowly from the plateau to its original value T0, see Fig. 4(a). This phase corresponds to the cyclic attachment and detachment of the heads, see Fig. 2, which starts with the detachment of the heads that were initially attached in isometric conditions (phase 3). In load clamp transients phase 4 is clearly identified by a shortening at a constant velocity, see Fig. 4(c), which, being plotted against the force, reproduces Hill's force-velocity relation, see Fig. 3(b). First attempts to rationalize the fast stages of these experiments [74] have led to the insight that we deal here with mechanical snap-springs performing a transition between two configurations. The role of the external loading reduces to biasing mechanically one of the two states. The idea of bistability in the structure of myosin heads was later fully supported by crystallographic studies [96][97][98]. Based on the experimental results shown in Fig. 5 one may come to the conclusion that the transient responses of muscle fibers to fast loading in hard (length clamp) and soft (load clamp) devices are identical. However, a careful analysis of Fig. 5 shows that the data for the load clamp protocol are missing in the area adjacent to the state of isometric contractions (around T0). Moreover, the two protocols are clearly characterized by different kinetics. Recall that the rate of fast force recovery can be interpreted as the inverse of the time scale separating the end of phase 1 and the end of phase 2. The experimental results obtained in soft and hard devices can be compared if we present the recovery rate as a function of the final elongation of the system. In this way, one can compare kinetics in the two ensembles using the same initial and final states; see dashed lines in Fig. 5. A detailed quantitative comparison, shown in Fig. 6, reveals a considerably slower response when the system follows the soft device protocol (filled symbols). The dependence of the relaxation rate on the type of loading was first noticed in Ref. [99] and then confirmed by the direct measurements in Ref. [100]. These discrepancies will be addressed in Section 2. We complement this brief overview of the experimental results with an observation that a seemingly natural, purely passive interpretation of the power stroke is in apparent disagreement with the fact that the power stroke is an active force generating step in the Lymn-Taylor cross-bridge cycle. The challenge of resolving this paradox served as a motivation for several theoretical developments reviewed in this paper. Modeling approaches 1.3.1. Chemomechanical models. The idea to combine mechanics and chemistry in the modeling of muscle contraction was proposed by A.F. Huxley [54]. The original model was focused exclusively on the attachment-detachment process and the events related to the slow time scale (hundreds of milliseconds). The attachment-detachment process was interpreted as an out-of-equilibrium reaction biased by a drift with a given velocity [67; 73]. The generated force was linked to the occupancy of continuously distributed chemical states and the attempt was made to justify the observed force-velocity relations [see Fig. 3(b)] using appropriately chosen kinetic constants. This approach was brought to full generality by T.L. Hill and collaborators [101][102][103][104][105].
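To make the structure of this classical description concrete, the short numerical sketch below (our illustration; the piecewise-linear rate functions follow the standard textbook choice attributed to Ref. [54], but the parameter values are arbitrary placeholders, not fitted to any of the data discussed in this review) integrates the steady-state kinetic equation -v dn/dx = f(x)[1 - n(x)] - g(x) n(x) for the fraction n(x) of attached cross-bridges with strain x during steady shortening at velocity v, and evaluates a force proportional to the integral of x n(x).

# Illustrative sketch of a Huxley (1957)-type two-state model (arbitrary units).
import numpy as np

h = 1.0                        # reach of the attachment window
f1, g1, g2 = 4.0, 1.0, 20.0    # attachment/detachment rate constants (placeholders)

def f(x):                      # attachment rate, nonzero only for 0 < x < h
    return np.where((x > 0) & (x < h), f1 * x / h, 0.0)

def g(x):                      # detachment rate, small for x > 0, large for x < 0
    return np.where(x > 0, g1 * x / h, g2)

def mean_force(v, x_max=4.0, n_pts=40000):
    # Integrate -v dn/dx = f(1-n) - g n from x = x_max down to -x_max (n -> 0 at large x).
    x = np.linspace(x_max, -x_max, n_pts)
    dx = x[1] - x[0]           # negative step
    n = np.zeros(n_pts)
    for i in range(n_pts - 1):
        dndx = -(f(x[i]) * (1.0 - n[i]) - g(x[i]) * n[i]) / v
        n[i + 1] = n[i] + dndx * dx
    return -np.trapz(x * n, x)  # proportional to the force; sign corrects for descending x

for v in (0.1, 0.5, 1.0, 2.0, 4.0):
    print(v, round(mean_force(v), 3))

Consistent with the hyperbolic trend quoted above, the computed force decreases as the imposed shortening velocity grows; the absolute numbers carry no physiological meaning here.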
More recently, the chemomechanical modelling was extended to account for energetics, to include the power-stroke activity and to study the influence of collective effects [67; 86; 106-114]. In the general chemo-mechanical approach muscle contraction is perceived as a set of reactions among a variety of chemical states [67; 68; 86; 115; 116]. The mechanical feedback is achieved through the dependence of the kinetic constants on the total force exerted by the system on the loading device. The chemical states form a network which describes, on one side, various stages of the enzymatic reaction and, on the other side, different mechanical configurations of the system. While some of the crystallographic states have been successfully identified with particular sites of the chemical network (attached and detached [54], strongly and weakly attached [67], pre and post power stroke [74], associated with the first or second myosin head [START_REF] Brunello | Proc. Natl. Acad. Sci[END_REF], etc.), the chemo-mechanical models remain largely phenomenological as the functions characterizing the dependence of the rate constants on the state of the force generating springs are typically chosen to match the observations instead of being derived from a microscopic model. In other words, due to the presence of mechanical elements, the standard discrete chemical states are replaced by continuously parameterized configurational "manifolds". Even after the local conditions of detailed balance are fulfilled, this leads to functional freedom in assigning the transition rates. This freedom originates from the lack of information about the actual energy barriers separating individual chemical states, and the uncertainty was used as a tool to fit experimental data. This has led to the development of a comprehensive phenomenological description of muscle contraction that is almost fully compatible with available measurements, see, for instance, Ref. [68] and the references therein. The use of phenomenological expressions, however, gives only limited insight into the micro-mechanical functioning of the force generating mechanism, leaves some gaps in the understanding, as in the case of ensemble dependent kinetics, and ultimately has a restricted predictive power. Figure 7. Biochemical vs purely mechanistic description of the power stroke in skeletal muscles: (a) the Lymn-Taylor four-state cycle (LT) and (b) the Huxley-Simmons two-state cycle (HS). Adapted from Ref. [121]. 1.3.2. Power-stroke models. To model fast force recovery, A.F. Huxley and R.M. Simmons (HS) [74] proposed to describe it as a chemical reaction between the folded and unfolded configurations of the attached cross-bridges with the reaction rates linked to the structure of the underlying energy landscape. Almost identical descriptions of mechanically driven conformational changes were proposed, apparently independently, in the studies of cell adhesion [117; 118], and in the context of hair cell gating [119; 120]. For all these systems the HS model can be viewed as a fundamental mean-field prototype [121]. While the scenario proposed by HS is in agreement with the fact that the power stroke is the fastest step in the Lymn-Taylor (LT) enzymatic cycle [16; 59], there remained a formal disagreement with the existing biochemical picture, see Fig. 7. Thus, HS assumed that the mechanism of the fast force recovery is passive and can be reduced to a mechanically induced conformational change.
In contrast, the LT cycle for actomyosin complexes is based on the assumption that the power stroke can be reversed only actively through the completion of the biochemical pathway including ADP release, myosin unbinding, binding of uncleaved ATP, splitting of ATP into ADP and Pi, and then rebinding of myosin to actin [59; 68], see Fig. 2. While HS postulated that the power stroke can be reversed by mechanical means, most of the biochemical literature is based on the assumption that the power-stroke recocking cannot be accomplished without the presence of ATP. In particular, physiological fluctuations involving the power stroke are almost exclusively interpreted in the context of active behavior [122][123][124][125][126][127][128]. Instead, the purely mechanistic approach of HS, presuming that the power-stroke-related leg of the LT cycle can be decoupled from the rest of the biochemical pathway, was pursued in Refs. [116; 129], but did not manage to reach the mainstream. 1.3.3. Brownian ratchet models. In contrast to chemomechanical models, the early theory of Brownian motors followed largely a mechanically explicit path [130][131][132][133][134][135][136][137][138]. In this approach, the motion of myosin II was represented by a biased diffusion of a particle on a periodic asymmetric landscape driven by a colored noise. The white component of the noise reflects the presence of a heat reservoir while the correlated component mimics the non-equilibrium chemical environment. Later, this purely mechanical approach was paralleled by the development of equivalent chemistry-centered discrete models of Brownian ratchets, see, for instance, Refs. [38; 139-142]. First direct applications of the Brownian ratchet models to muscle contraction can be found in Refs. [143][144][145], where the focus was on the attachment-detachment process at the expense of the phenomena at the short time scales (power stroke). In other words, the early models had a tendency to collapse the four-state Lymn-Taylor cycle onto a two-state cycle by absorbing the configurational changes associated with the transitions M-ATP → M-ADP-Pi and A-M-ADP-Pi → A-M-ADP into more general transitions M-ATP → AM-ADP and AM-ADP → M-ATP. Following Ref. [54], the complexity of the structure of the myosin head was reduced to a single degree of freedom representing a stretch of a series elastic spring. This simplification offered considerable analytical transparency and opened the way towards the study of stochastic thermodynamics and efficiency of motor systems, e.g. Refs. [140; 146; 147]. Later, considerable efforts were dedicated to the development of synthetic descriptions, containing both ratchet and power stroke elements [112; 113; 143; 144; 148-150]. In particular, numerous attempts have been made to unify the attachment-detachment-centered models with the power stroke-centered ones in a generalized chemo-mechanical framework [60; 67; 68; 86; 87; 105; 114; 116; 144; 151-154]. The ensuing models have reached the level of sophistication allowing their authors to deal with collective effects, including the analysis of traveling waves and coherent oscillations [60; 110; 114; 143; 155-159]. In particular, myosin-myosin coupling was studied in models of interacting motors [113; 152] and emergent phenomena characterized by large scale entrainment signatures were identified in Refs. [36; 110; 114; 122; 123; 148].
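To indicate what the mechanically explicit single-motor picture introduced at the beginning of this subsection looks like in practice, the sketch below (our construction; the asymmetric potential, the dichotomous "active" force standing in for the correlated noise component, and all parameter values are arbitrary placeholders rather than choices taken from the cited works) integrates an overdamped Langevin equation of the rocking-ratchet type. The broken spatial symmetry of the potential can convert the unbiased driving into a net drift, whose magnitude and even sign depend on the chosen parameters.

# Illustrative rocking-ratchet sketch: overdamped Langevin dynamics on an asymmetric
# periodic potential, driven by thermal (white) noise plus a symmetric telegraph force.
# All parameters are placeholders chosen only for illustration.
import numpy as np

rng = np.random.default_rng(1)
L, V0, kBT, gamma = 1.0, 1.0, 0.2, 1.0   # period, potential scale, temperature, drag
F0, switch_rate = 4.0, 2.0               # amplitude and switching rate of the active force
dt, n_steps = 1e-4, 200_000

def dVdx(x):
    # derivative of V(x) = V0*[sin(2*pi*x/L) + 0.25*sin(4*pi*x/L)] (asymmetric, period L)
    return (2 * np.pi * V0 / L) * (np.cos(2 * np.pi * x / L) + 0.5 * np.cos(4 * np.pi * x / L))

x, sigma = 0.0, 1.0
for _ in range(n_steps):
    if rng.random() < switch_rate * dt:  # dichotomous (telegraph) active force of amplitude F0
        sigma = -sigma
    thermal = np.sqrt(2.0 * kBT * dt / gamma) * rng.standard_normal()
    x += (-dVdx(x) + sigma * F0) * dt / gamma + thermal

print("displacement over", n_steps * dt, "time units:", round(x, 3))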
The importance of these discoveries is corroborated by the fact that macroscopic fluctuations in the groups of myosins have been also observed experimentally. In particular, considerable coordination between individual elements was detected in close to stall conditions giving rise to synchronized oscillations which could be measured even at the scale of the whole myofibril [26; 82; 92; 149; 160-162]. The synchronization also revealed itself through macro-scale spatial inhomogeneities reported near stall force condition [163][164][165][166]. In ratchet models the cooperative behavior was explained without direct reference to the power stroke by the fact that the mechanical state of one motor influences the kinetics of other motors. The long-range elastic interactions were linked to the presence of filamental backbones which are known to be elastically compliant [167; 168]. The fact, that similar cooperative behavior of myosin cross-bridges has been also detected experimentally at short time scales, during fast force recovery [92], suggests that at least some level of synchronization should be already within reach of the power-stroke-centered models disregarding motor activity and focusing exclusively on passive mechanical behavior. Elucidating the mechanism of such passive synchronization will be one of our main goals of Section 2. Organization of the paper In this review, we focus exclusively on models emphasizing the mechanical side of the force generation processes. The mechanical models affirm that in some situations the microscale stochastic dynamics of the force generating units can be adequately represented by chemical reactions. However, they also point to cases when one ends up unnecessarily constrained by the chemo-mechanical point of view. The physical theories, emphasized in this review, are in tune with the approach pioneered by Huxley and Simmons in their study of fast force recovery and with the general approach of the theory of molecular motors. The elementary contractile mechanisms are modeled by systems of stochastic differential equations describing random walk in complex energy landscapes. These landscapes serve as a representation of both the structure and the interactions in the system, in particular, they embody various local and nonlocal mechanical feedbacks. In contrast to fully microscopic molecular dynamical reconstructions of multi-particle dynamics, the reviewed mechano-centered models operate with few collective degrees of freedom. The loading is transmitted directly by applied forces while different types of noises serve as a representation of non-mechanical external driving mechanisms that contain both equilibrium and non-equilibrium components. Due to the inherent stochasticity of such mesoscopic systems [140], the emphasis is shifted from the averaged behavior, favored by chemo-mechanical approaches, to the study of the full probability distributions. In Section 2 we show that even in the absence of metabolic fuel, long-range interactions, communicated by passive crosslinkers, can ensure a highly nontrivial cooperative behavior of interacting muscle cross-bridges. This implies ensemble dependence, metastability and criticality which all serve to warrant efficient collective stroke in the presence of thermal fluctuations. We argue that in the near critical regimes the barriers are not high enough for the Kramers approximation to be valid [169; 170] which challenges chemistry-centered approaches. 
Another important contribution of the physical theory is in the emphasis on fluctuations as an important source of structural information. A particularly interesting conclusion of this section is the realization that a particular number of cross-bridges in realistic half-sarcomeres may be a signature of an (evolutionary) fine tuning of the mechanical response to criticality. In Section 3 we address the effects of correlated noise on force generation in isometric conditions. We focus on the possibility of the emergence of new noise-induced energy wells and stabilization of the states that are unstable in strictly equilibrium conditions. The implied transition from negative to positive rigidity can be linked to time correlations in the out-of-equilibrium driving and the reviewed work shows that subtle differences in the active noise may compromise the emergence of such "non-equilibrium" free energy wells. These results suggest that ATP hydrolysis may be involved in tuning the muscle system to near-criticality which appears to be a plausible description of the physiological state of isometric contraction. In Section 4 we introduce mechanical models bringing together the attachment-detachment and the power stroke. To make a clear distinction between these models and the conventional models of Brownian ratchets we operate in a framework when the actin track is nonpolar and the bistable element is unbiased. The symmetry breaking is achieved exclusively through the coupling of the two subsystems. Quite remarkably, a simple mechanical model of this type formulated in terms of continuous Langevin dynamics can reproduce all four discrete states of the minimal LT cycle. In particular, it demonstrates that contraction can be propelled directly through a conformational change, which implies that the power stroke may serve as the leading mechanism not only at short but also at long time scales. Finally, in Section 5 we address the behavior of the contractile system on the descending limb of the isometric tetanus, a segment of the force length relation with a negative stiffness. Despite potential mechanical instability, the isometric tetanus in these regimes is usually associated with a quasi-affine deformation. The mechanics-centered approach allows one to interpret these results in terms of energy landscape whose ruggedness is responsible for the experimentally observed history dependence and hysteresis near the descending limb. In this approach both the ground states and the marginally stable states emerge as fine mixtures of short and long half-sarcomeres and the negative overall slope of the tetanus is shown to coexists with a positive instantaneous stiffness. A version of the mechanical model, accounting for surrounding tissues, produces an intriguing prediction that the energetically optimal variation of the degree of nonuniformity with stretch must exhibits a devil's staircase-type behavior. The review part ends with Section 7 where we go over some non-muscle applications of the proposed mechanical models In this Section 7 we formulate conclusions and discuss directions of future research. Passive force generation In this Section, we limit ourselves to models of passive force generation. First of all we need to identify an elementary unit whose force producing function is irreducible. The second issue concerns the structure of the interactions between such units. 
The goal here is to determine whether the consideration of an isolated force-producing element is meaningful in view of the presence of various feedback loops. The pertinence of this question is corroborated by the presence of hierarchies that undermine the independence of individual units. The schematic topological structure of the force-generating network in skeletal muscles is shown in Fig. 8. Here we see that behind the apparent series architecture, which one could expect to dominate in crystals, there is a system of intricate parallel connections accomplished by passive cross-linkers. Such elastic elements play the role of backbones linking elements at smaller scales. The emerging hierarchy is dominated by long-range interactions which make the "muscle crystal" rather different from conventional inert solids. The analysis of Fig. 8 suggests that the simplest nontrivial structural element of the network is a half-sarcomere, which can be represented as a bundle of a finite number of cross-bridges. The analysis presented below shows that such a model cannot be simplified further because, for instance, the mechanical response of individual cross-bridges is by itself not compatible with observations. The minimal model of this type was proposed by Huxley and Simmons (HS), who described myosin cross-bridges as hard spin elements connected to linear springs loaded in parallel [74]. In this Section, we show that the stochastic version of the HS model is capable of reproducing qualitatively the mechanical response of a muscle submitted to fast external loading in both length clamp (hard device) and force clamp (soft device) settings (see Fig. 6). We also address the question whether the simplest series connection of HS elements is compatible with the idea of an affine response of the whole muscle fiber. Needless to say, the oversimplified model of HS does not address the full topological complexity of the cross-bridge organization presented in Fig. 8. Furthermore, the 3D steric effects, which appear to be crucially important for the description of spontaneous oscillatory contractions [148; 162; 164; 166; 171-173], and the effects of regulatory proteins responsible for steric blocking [174-178], are completely outside the HS framework.

Hard spin model

Consider now in detail the minimal model [74; 99; 121; 179; 180] which interprets the pre- and post-power-stroke conformations of the myosin heads as discrete (chemical) states. Since these states can be viewed as two configurations of a "digital" switch, such a model belongs to the hard spin category. The potential energy of an individual spin unit can be written in the form

u_HS(x) = v_0 if x = 0, and u_HS(x) = 0 if x = -a,  (2.1)

where the variable x takes two values, 0 and -a, describing the unfolded and the folded conformations, respectively. By a we denote the "reference" size of the conformational change, interpreted as the distance between the two energy wells. With the unfolded state we associate an energy level v_0, while the folded configuration is considered as a zero-energy state, see Fig. 9(a). In addition to a spin unit with energy (2.1), we assume that each cross-bridge contains a linear spring with stiffness κ_0 in series with the bi-stable unit, see Fig. 9(b). The attached cross-bridges connect the myosin and actin filaments which play the role of elastic backbones. Their function is to provide mechanical feedback and to coordinate the mechanical states of the individual cross-bridges [167; 168].
There is evidence [89; 95] that a lump description of the combined elasticity of actin and myosin filaments by a single spring is rather adequate, see also Refs. [89; 100; 181-183]. Hence we represent a generic half-sarcomere as a cluster of N parallel HS elements and assume that this parallel bundle is connected in series to a linear spring of stiffness κ_b. We choose a as the characteristic length of the system, κ_0 a as the characteristic force, and κ_0 a² as the characteristic energy. The resulting dimensionless energy of the whole system (per cross-bridge) at fixed total elongation z takes the form

v(x; z) = (1/N) ∑_{i=1}^{N} [ (1 + x_i) v_0 + (1/2)(y - x_i)² + (λ_b/2)(z - y)² ],  (2.2)

where λ_b = κ_b/(N κ_0), y represents the elongation of the cluster of parallel cross-bridges and x_i ∈ {0, -1}, see Fig. 9(b). Here, for simplicity, we did not modify the notations as we switched to non-dimensional quantities. It is important to note that here we intentionally depart from the notations introduced in Section 1.2. For instance, the length of the half-sarcomere was there denoted by L, which is now z. Furthermore, the tension which was previously T will now be denoted by σ, while we keep the notation T for the ambient temperature.

Soft and hard devices. It is instructive to consider first the two limit cases, λ_b = ∞ and λ_b = 0.

Zero temperature behavior. If λ_b = ∞, the backbone is infinitely rigid and the array of cross-bridges is loaded in a hard device with y being the control parameter. Due to the permutational invariance of the energy

v(x; y) = (1/N) ∑_{i=1}^{N} [ (1 + x_i) v_0 + (1/2)(y - x_i)² ],  (2.3)

each equilibrium state is fully characterized by a discrete order parameter representing the fraction of cross-bridges in the folded (post-power-stroke) state, p = -(1/N) ∑_{i=1}^{N} x_i. At zero temperature, all equilibrium configurations with a given p correspond to local minima of the energy (2.3), see Ref. [179]. These metastable states can be viewed as simple mixtures of the two pure states, one fully folded with p = 1 and energy (1/2)(y + 1)², and the other fully unfolded with p = 0 and energy (1/2)y² + v_0. The energy of the mixture reads

v(p; y) = p (1/2)(y + 1)² + (1 - p) [ (1/2)y² + v_0 ].  (2.4)

The absence of a mixing energy is a manifestation of the fact that the two populations of cross-bridges do not interact. The energies of the metastable states parameterized by p are shown in Fig. 10(c-e). Introducing the reference elongation y_0 = v_0 - 1/2, one can show that the global minimum of the energy corresponds either to the folded state with p = 1 (when y < y_0) or to the unfolded state with p = 0 (when y > y_0). At the transition point y = y_0, all metastable states have the same energy, which means that the global switching can be performed at zero energy cost, see Fig. 10(d). The tension-elongation relations along the metastable branches parameterized by p can be presented as σ(p; y) = ∂v(p; y)/∂y = y + p, where σ denotes the tension (per cross-bridge). At fixed p, we obtain equidistant parallel lines, see Fig. 10[(a) and (b)]. At the crossing (folding) point y = y_0, the system following the global minimum exhibits a singular negative stiffness. Artificial metamaterials showing negative stiffness have recently been engineered by drawing on the Braess paradox for decentralized globally connected networks [13; 184; 185].
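The zero-temperature limit described above is simple enough to be checked numerically. The following Python sketch is our own illustration (not part of Ref. [179]); the value of v_0 and the elongation grid are arbitrary choices. It evaluates the branch energies (2.4) and the tension lines σ = y + p, and follows the global-minimum branch, which switches from p = 1 to p = 0 at y = y_0 with a unit drop in tension.

```python
import numpy as np

v0 = 1.0                      # energy bias of the unfolded well (arbitrary illustrative value)
y0 = v0 - 0.5                 # reference elongation where the two pure states exchange stability
y = np.linspace(-2.0, 2.0, 401)

def branch_energy(p, y):
    """Energy (2.4) of a metastable state with a fraction p of folded cross-bridges."""
    return p * 0.5 * (y + 1.0) ** 2 + (1.0 - p) * (0.5 * y ** 2 + v0)

def branch_tension(p, y):
    """Tension along a metastable branch in a hard device: sigma = y + p."""
    return y + p

# energies of the pure states and of a few intermediate mixtures
fractions = [0.0, 0.25, 0.5, 0.75, 1.0]
energies = {p: branch_energy(p, y) for p in fractions}

# global minimum: folded (p = 1) below y0, unfolded (p = 0) above y0
p_gm = np.where(y < y0, 1.0, 0.0)
v_gm = branch_energy(p_gm, y)
sigma_gm = branch_tension(p_gm, y)   # jumps by 1 at y = y0 (singular negative stiffness)

print("switch point y0 =", y0)
print("tension across y0:", branch_tension(1.0, y0), "->", branch_tension(0.0, y0))
```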
Biological examples of systems with non-convex energy and negative stiffness are provided by RNA and DNA hairpins and by hair bundles in auditory cells [120; 186-188]. In the other limit λ_b → 0, the backbone becomes infinitely soft (z - y → ∞) and, if λ_b(z - y) → σ, the system behaves as if it were loaded in a soft device, where now the tension σ is the control parameter. The relevant energy can be written in the form

w(x, y; σ) = v(x; y) - σ y = (1/N) ∑_{i=1}^{N} [ (1 + x_i) v_0 + (1/2)(y - x_i)² - σ y ].  (2.5)

The order parameter p again parametrizes the branches of local minimizers of the energy (2.5), see Ref. [179]. At a given value of p, the energy of a metastable state reads

ŵ(p; σ) = -(1/2)σ² + p σ + (1/2) p(1 - p) + (1 - p) v_0.  (2.6)

In contrast to the case of a hard device [see Eq. (2.4)], here there is a nontrivial coupling term p(1 - p) describing the energy of a regular solution. The presence of this term is a signature of a mean-field interaction among individual cross-bridges. The tension-elongation relations describing the set of metastable states can now be written in the form ẑ(p; σ) = -∂ŵ(p; σ)/∂σ = σ - p. The global minimum of the energy is again attained either at p = 1 or p = 0, with a sharp transition at σ = σ_0 = v_0, which leads to a plateau on the tension-elongation curve, see Fig. 10(b). Note that even in the continuum limit the stable "material" response of this system in hard and soft devices differs, and this ensemble non-equivalence is a manifestation of the presence of long-range interactions. To illustrate this point further, we consider the energetic cost of mixing in the two loading devices at the conditions of the switch between pure states, see Fig. 10[(d) and (g)]. In the hard device [see (d)] the energy dependence on p in this state is flat, suggesting that there is no barrier, while in the soft device [see (g)] the energy is concave, which means that there is a barrier. To develop intuition about the observed inequivalence, it is instructive to take a closer look at the minimal system with N = 2, see Fig. 11. Here for simplicity we assume that v_0 = 0, implying σ_0 = 0 and y_0 = -1/2. The two pure configurations are labeled as A (p = 0) and C (p = 1) at σ = σ_0, and as D (p = 0) and B (p = 1) at y = y_0. In a hard device, where the two elements do not interact, the transition from state D to state B at a given y = y_0 goes through the configuration B + D, which has the same energy as configurations D and B: the cross-bridges in the folded and unfolded states are geometrically compatible and their mixing requires no additional energy. Instead, in a soft device, where individual elements interact, a transition from state A to state C taking place at a given σ = 0 requires passing through the transition state A + C, which has a nonzero pre-stress. The pure states in this mixture have different values of y, and therefore the energy of the mixed configuration A + C, which is stressed, is larger than the energies of the pure unstressed states A and C.

Figure 11. Behavior of two cross-bridges. Thick line: global minimum in a soft device (σ_0 = 0). Dashed lines: metastable states p = 0 and p = 1. The intermediate stress-free configuration is obtained either by mixing the two geometrically compatible states B and D in a hard device, which results in a B + D structure without additional internal stress, or by mixing the two geometrically incompatible states A and C in a soft device, which results in an A + C structure with internal residual stress. Adapted from Ref. [179].
We also observe that in a soft device the transition between the pure states is cooperative, requiring essential interaction of the individual elements, while in a hard device it takes place independently in each element.

Finite temperature behavior. We now turn to finite temperatures to check the robustness of the observations made in the previous section. Consider first the hard device case λ_b = ∞, treated chemo-mechanically in the seminal paper of HS [74], see Ref. [121] for the statistical interpretation. With the variable y serving now as the control parameter, the equilibrium probability density for a micro-state x with N elements takes the form ρ(x; y, β) = Z(y, β)⁻¹ exp[-βN v(x; y)], where the partition function is Z(y, β) = ∑_{x ∈ {0,-1}^N} exp[-βN v(x; y)] = [Z_1(y, β)]^N. Here Z_1 represents the partition function of a single element, given by

Z_1(y, β) = exp[-(β/2)(y + 1)²] + exp[-β(y²/2 + v_0)].  (2.7)

Therefore one can write ρ(x; y, β) = ∏_{i=1}^{N} ρ_1(x_i; y, β), where we have introduced the equilibrium probability distribution for a single element,

ρ_1(x; y, β) = Z_1(y, β)⁻¹ exp[-βv(x; y)],  (2.8)

with v(x; y) the energy of a single element. The lack of cooperativity in this case is clear if one considers the marginal probability density at fixed p,

ρ(p; y, β) = (N choose Np) [ρ_1(-1; y, β)]^{Np} [ρ_1(0; y, β)]^{N(1-p)} = Z(y, β)⁻¹ exp[-βN f(p; y, β)],

where f(p; y, β) = v(p; y) - (1/β)s(p) is the marginal free energy, v is given by Eq. (2.4) and s(p) = (1/N) log(N choose Np) is the ideal entropy, see Fig. 12. In the thermodynamic limit N → ∞ we obtain the explicit expression f_∞(p; y, β) = v(p; y) - (1/β)s_∞(p), where s_∞(p) = -[p log(p) + (1 - p) log(1 - p)]. The function f_∞(p) is always convex since ∂²f_∞(p; y, β)/∂p² = [βp(1 - p)]⁻¹ > 0, and therefore the marginal free energy always has a single minimum p*(y, β), corresponding to a microscopic mixture of de-synchronized elements, see Fig. 12(b). These results show that the equilibrium (average) properties of a cluster of HS elements in a hard device can be fully recovered if we know the properties of a single element, the problem studied in [74]. In particular, the equilibrium free energy f(y, β) = f(p*; y, β), where p* is the minimum of the marginal free energy f [see Fig. 12(c)], can be written in the HS form

f(y, β) = -(1/(βN)) log[Z(y, β)] = (1/2)y² + v_0 + (y - y_0)/2 - (1/β) ln{2 cosh[(β/2)(y - y_0)]},  (2.9)

which is also the expression of the free energy of the simplest paramagnetic Ising model [Balian, From Microphysics to Macrophysics]. Its dependence on elongation is illustrated in Fig. 13(a). We observe that for β ≤ 4 (supercritical temperatures) the free energy is convex, while for β > 4 (subcritical temperatures) it is non-convex. The emergence of an unusual "pseudo-critical" temperature β = β_c = 4 in this paramagnetic system is a result of the presence of the quadratic energy associated with the "applied field" y, see Eq. (2.9). The ensuing equilibrium tension-elongation relation (per cross-bridge) is identical to the expression obtained in Ref. [74],

⟨σ⟩(y, β) = ∂f/∂y = σ_0 + y - y_0 - (1/2) tanh[(β/2)(y - y_0)].  (2.10)

As a result of the non-convexity of the free energy, the dependence of the tension ⟨σ⟩ on y can be non-monotone, see Fig. 13(b). Indeed, the equilibrium stiffness

κ(y, β) = ∂⟨σ⟩(y, β)/∂y = 1 - (β/4){sech[β(y - y_0)/2]}²,  (2.11)

can be viewed as the sum of a bare elastic contribution (equal to 1) and a negative fluctuational term κ_F = -(β/4){sech[β(y - y_0)/2]}², proportional to the variance of the order parameter p. In connection with these results we observe that the difference between the quasi-static stiffness of myosin II measured by single-molecule techniques and its instantaneous stiffness obtained from mechanical tests on myofibrils may be due to the fluctuational term κ_F, see Refs. [91; 191; 192]. Note also that the fluctuation-related term does not disappear in the zero-temperature limit (producing a delta-function-type contribution to the affine response at y = y_0), which is a manifestation of a (singular) glassy behavior [193; 194]. It is interesting that while fitting their experimental data HS used exactly the critical value β = 4, corresponding to zero stiffness in the state of isometric contraction. Negative stiffness, resulting from the non-additivity of the system, prevails at subcritical temperatures; in this range a shortening of an element leads to a tension increase, which can be interpreted as a meta-material behavior [13; 99; 195].

In the soft device case λ_b = 0, the probability density associated with a microstate x is given by ρ(x, y; σ, β) = Z(σ, β)⁻¹ exp[-βN w(x, y; σ)], where the partition function is now Z(σ, β) = ∫ dy ∑_{x ∈ {0,-1}^N} exp{-βN[v(x; y) - σy]}. By integrating out the internal variables x_i, we obtain the marginal probability density depending on the two order parameters, y and p,

ρ(p, y; σ, β) = Z(σ, β)⁻¹ exp[-βN g(p, y; σ, β)].  (2.12)

Here we introduced the marginal free energy

g(p, y; σ, β) = f(p, y, β) - σy = v(p, y) - σy - (1/β)s(p),  (2.13)

which is convex at high temperatures and non-convex (with two metastable wells) at low temperatures, see Fig. 14, signaling the presence of a genuine critical point. By integrating the distribution (2.12) over p we obtain the marginal distribution ρ(y; σ, β) = Z⁻¹ exp[-βN g(y; σ, β)] where g(y; σ, β) = f(y; β) - σy, with f being the equilibrium free energy of the system in a hard device, see Eq. (2.9). This free energy has more than one stable state as long as the equation f′(y) - σ = 0 has more than one solution. Since f′ is precisely the average tension-elongation relation in the hard device case, we find that the critical temperature is exactly β_c = 4. The same result can also be obtained directly as a condition of the positive definiteness of the Hessian of the free energy (2.13) (in the thermodynamic limit). The physical origin of the predicted second-order phase transition becomes clear if instead of p we now eliminate y and introduce the marginal free energy at fixed p. In the (more transparent) thermodynamic limit we can write

g_∞(p; σ, β) = ŵ(p, σ) - β⁻¹ s_∞(p),  (2.14)

where ŵ = -(1/2)σ² + p(σ - σ_0) + (1/2)p(1 - p) + v_0 is the zero-temperature energy of the metastable states parametrized by p, see Eq. (2.6) and Fig. 10. Since the entropy s_∞(p) is concave with a maximum at p = 1/2, the convexity of the free energy depends on the competition between the purely mechanical interaction term p(1 - p) and the entropic term s_∞(p)/β, with the latter dominating at low β. The Gibbs free energy g_∞(σ, β) and the corresponding force-elongation relations are illustrated in Fig. 15.
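The closed-form hard-device expressions (2.9)-(2.11) are easy to explore numerically. The following Python sketch is our own illustration (not taken from the original references; the value of v_0 is arbitrary). It evaluates the equilibrium free energy, tension and stiffness, and checks that the minimal stiffness 1 - β/4, reached at y = y_0, changes sign at the pseudo-critical temperature β_c = 4.

```python
import numpy as np

def free_energy(y, beta, v0=1.0):
    """Equilibrium free energy (2.9) of the HS cluster in a hard device (per cross-bridge)."""
    y0 = v0 - 0.5
    return (0.5 * y**2 + v0 + 0.5 * (y - y0)
            - (1.0 / beta) * np.log(2.0 * np.cosh(0.5 * beta * (y - y0))))

def tension(y, beta, v0=1.0):
    """Equilibrium tension (2.10): sigma0 + (y - y0) - (1/2) tanh[beta (y - y0) / 2]."""
    y0, sigma0 = v0 - 0.5, v0
    return sigma0 + (y - y0) - 0.5 * np.tanh(0.5 * beta * (y - y0))

def stiffness(y, beta, v0=1.0):
    """Equilibrium stiffness (2.11): 1 - (beta/4) sech^2[beta (y - y0) / 2]."""
    y0 = v0 - 0.5
    return 1.0 - 0.25 * beta / np.cosh(0.5 * beta * (y - y0))**2

y = np.linspace(-1.5, 2.5, 801)
for beta in (2.0, 4.0, 8.0):
    k_min = stiffness(y, beta).min()
    label = "negative branch present" if k_min < 0 else "monotone response"
    print(f"beta = {beta}: minimal stiffness = {k_min:+.3f} ({label})")
# The minimal stiffness equals 1 - beta/4: it vanishes at beta_c = 4 and is negative beyond.
```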
In (a), the energies of the critical points of the free energy (2.14) are represented as functions of the loading and the temperature, with several isothermal sections of the energy landscape shown in (b). For each critical point p, the elongation ŷ = σ - p is shown in Fig. 15(c). At σ = σ_0 = v_0 the free energy g_∞ becomes symmetric with respect to p = 1/2 and therefore ⟨p⟩(σ_0, β) = 1/2, independently of the value of β. The structure of the second-order phase transition is further illustrated in Fig. 16(a). Both mechanical and thermal properties of the system can be obtained from the probability density (2.12). By eliminating y and taking the thermodynamic limit N → ∞ we obtain ρ_∞(p; σ, β) = Z⁻¹ exp[-βN g_∞(p; σ, β)] with Z(σ, β) = ∑_p exp[-βN g_∞(p; σ, β)]. The average mechanical behavior of the system is now controlled by the global minimizer p*(σ, β) of the marginal free energy g_∞; for instance, g(σ, β) = g_∞(p*, σ, β) and ⟨p⟩(σ, β) = p*(σ, β). The average elongation ⟨y⟩(σ, β) = σ - p*(σ, β) is illustrated in Fig. 16(c) for the case β = 5. The jump at σ = σ_0 corresponds to the switch of the global minimum from C to A, see Fig. 16[(a) and (c)]. In Fig. 16[(d)-(f)] we also illustrate the typical stochastic behavior of the order parameter p at fixed tension σ = σ_0 (ensuring that ⟨p⟩ = 1/2). Observe that in the ordered (low temperature, ferromagnetic) phase [see (f)] the thermal equilibrium is realized through the formation of a temporal microstructure, a domain structure in time, which implies intermittent jumps between ordered metastable (long-living) configurations. Such transitions are systematically observed during the unzipping of biomolecules, see, for instance, Ref. [196]. In Fig. 17 we show the equilibrium susceptibility

χ(σ, β) = -∂⟨p⟩(σ, β)/∂σ = Nβ⟨[p - ⟨p⟩(σ, β)]²⟩ ≥ 0,

which diverges at β = β_c and σ = σ_0. We can also compute the equilibrium stiffness from

κ(σ, β)⁻¹ = (1/N) ∂⟨y⟩(σ, β)/∂σ = β⟨[y - ⟨y⟩(σ, β)]²⟩ ≥ 0,

where ⟨y⟩(σ, β) = σ - ⟨p⟩(σ, β), and see that it is always positive in the soft device. This is another manifestation of the fact that the soft and hard device ensembles are not equivalent. At the critical point (β = 4, σ = σ_0), the marginal energy of the system has a degenerate minimum corresponding to the configuration with p = 1/2, see Fig. 15[(c), dashed line]. Near the critical point we have the asymptotics p ∼ 1/2 ± (√3/4)[β - 4]^{1/2} for σ = σ_0, and p ∼ 1/2 - sign[σ - σ_0][(3/4)|σ - σ_0|]^{1/3} for β = 4, showing that the critical exponents take the classical mean-field values [Balian, From Microphysics to Macrophysics]. Similarly we obtain ⟨y⟩ - y_0 = ±(√3/4)[β - 4]^{1/2} for σ = σ_0, and ⟨y⟩ - y_0 = sign[σ - σ_0][(3/4)|σ - σ_0|]^{1/3} for β = 4. In critical conditions, where the stiffness is equal to 0, the system becomes anomalously reactive; for instance, when exposed to a small positive (negative) force increment it instantaneously unfolds (folds). In Fig. 18 we summarize the mechanical behavior of the system in hard [(a) and (b)] and soft [(c) and (d)] devices.
In a hard device, the system develops negative stiffness below the critical temperature while remaining de-synchronized and fluctuating at a fast time scale. Instead, in the soft device the stiffness is always non-negative. However, below the critical temperature the tension-elongation relation develops a plateau which corresponds to cooperative (macroscopic) fluctuations between two highly synchronized metastable states. In the soft device ensemble, the pseudo-critical point of the hard device ensemble becomes a real critical point with diverging susceptibility and classical mean-field critical exponents. For a detailed study of the thermal properties in soft and hard devices, see Refs. [121; 180].

Figure 18. In a hard device, the pseudo-critical temperature β_c⁻¹ = 1/4 separates a regime where the tension-elongation relation is monotone (a) from the region where the system develops negative stiffness (b). In a soft device, this pseudo-critical point becomes a real critical point above which (β > β_c) the system becomes bistable (d).

Mixed device. Consider now the general case when λ_b is finite. In the muscle context this parameter can be interpreted as a lump description of myofilament elasticity [89; 91; 95]; in cell adhesion it can be identified either with the stiffness of the extracellular medium or with the stiffness of the intracellular stress fiber [118; 197; 198]; and for protein folding in optical tweezers it can be viewed as the elasticity of the optical trap or of the DNA handles [186; 188; 199-204]. The presence of an additional series spring introduces a new macroscopic degree of freedom because the elongation of the bundle of parallel cross-bridges y can now differ from the total elongation of the system z, see Fig. 9. At zero temperature, the metastable states are again fully characterized by the order parameter p, representing the fraction of cross-bridges in the folded (post-power-stroke) configuration. At equilibrium, the elongation of the bundle is given by ŷ = (λ_b z - p)/(1 + λ_b), so that the energy of a metastable state is now v̂_b(p; z) = v(p; ŷ) + (λ_b/2)(z - ŷ)², which can be rewritten as

v̂_b(p; z) = [λ_b/(2(1 + λ_b))] [ p(z + 1)² + (1 - p)z² ] + (1 - p)v_0 + p(1 - p)/(2(1 + λ_b)).  (2.15)

Notice the presence of the coupling term ∼ p(1 - p), characterizing the mean-field interaction between cross-bridges. One can see that this term vanishes in the limit λ_b → ∞. Again, when λ_b → 0 and z - y → ∞, while λ_b(z - y) → σ, we recover the soft device potential modulo an irrelevant constant. The global minimum of the energy (2.15) corresponds to one of the fully synchronized configurations (p = 0 or p = 1). These two configurations are separated, at the transition point z = z_0 = (1 + λ_b)v_0/λ_b - 1/2, by an energy barrier whose height now depends on the value of λ_b, see Ref. [179] for more details. At finite temperature, the marginal free energy at fixed p and y can be written in the form

f_m(p, y; z, β) = f(p; y, β) + (λ_b/2)(z - y)²,  (2.16)

where f is the marginal free energy for the system in a hard device (at fixed y). Averaging over y brings about the interaction among cross-bridges exactly as in the case of a soft device. The only difference with the soft device case is that the interaction strength now depends on the new dimensionless parameter λ_b. The convexity properties of the energy (2.16) can be studied by computing the Hessian, the 2x2 matrix

H(p, y; z, β) = ( 1 + λ_b , 1 ; 1 , [βp(1 - p)]⁻¹ ),  (2.17)

which is positive definite if β < β_c*, where the critical temperature is now β_c* = 4(1 + λ_b).
The latter relation also defines the critical line λ_b = λ_c(β) = β/4 - 1, separating the disordered phase (λ_b > λ_c), where the marginal free energy has a single minimum, from the ordered phase (λ_b < λ_c), where the system can be bi-stable. As in the soft device case, elimination of the internal variable p allows one to write the partition function in a mixed device as Z = ∫ exp{-βN[f_m(y; z, β)]} dy. Here f_m denotes the marginal free energy at fixed y and z,

f_m(y; z, β) = f(y; β) + (λ_b/2)(z - y)²,  (2.18)

and f is the equilibrium free energy at fixed y, given by Eq. (2.9). We can now obtain the equilibrium free energy f̂_m = -(1/(βN)) log[Z(z, β)] and compute its successive derivatives. In particular, the tension-elongation relation ⟨σ⟩(⟨y⟩) and the equilibrium stiffness κ_m can be written in the form ⟨σ⟩ = λ_b[z - ⟨y⟩], κ_m = λ_b{1 - βNλ_b[⟨y²⟩ - ⟨y⟩²]}. As in the soft device case, we have in the thermodynamic limit ⟨y⟩(z, β) = y*(z, β), where y* is the global minimum of the marginal free energy (2.18). We can also write κ_m = κ(y*, β)λ_b/[κ(y*, β) + λ_b], where κ is the thermal equilibrium stiffness of the system at fixed y, see Eq. (2.11). Since λ_b > 0, we find that the stiffness of the system becomes negative when κ becomes negative, which takes place at low temperatures, when β > 4. Our results in the mixed device case are summarized in Fig. 19(a), where we show the phase diagram of the system in the (λ_b, β⁻¹) plane. The hard and soft device limits, which we have already analyzed, correspond to points (a)-(d). At finite λ_b there are three "phases": (i) in phase I, corresponding to β < 4, the marginal free energy (2.18) is convex and the equilibrium tension-elongation relation is monotone; (ii) in phase II [4 < β < 4(1 + λ_b), see (e)] the free energy is still convex but the tension-elongation relation becomes non-monotone; (iii) in phase III [β > 4(1 + λ_b)] the marginal free energy (2.18) is non-convex and the equilibrium response contains a jump, see (f) in the right panel of Fig. 19.

Figure 19. In the mixed device, the system exhibits three phases, labeled I, II and III in the left panel. The right panels show typical dependence of the energy and the force on the loading parameter z and on the average internal elongation ⟨y⟩ in the subcritical (phase II, e) and in the supercritical (phase III, f) regimes. In phase I, the response of the system is monotone; it is analogous to the behavior obtained in a hard device for β < 4, see Fig. 18(b). In phase II, the system exhibits negative stiffness but no collective switching, except in the soft device limit λ_b → 0, see Fig. 18(d). In phase III (supercritical regime), the system shows an interval of macroscopic bistability (see dotted lines) leading to abrupt transitions in the equilibrium response (solid line).

Kinetics. Consider bi-stable elements described by microscopic variables x_i whose dynamics can be represented as a series of jumps between the two states. The probabilities of the direct and reverse transitions in the time interval dt can be written as

P(x_i(t + dt) = -1 | x_i(t) = 0) = k_+(y, β)dt,
P(x_i(t + dt) = 0 | x_i(t) = -1) = k_-(y, β)dt.  (2.19)

Here k_+(y, β) [resp. k_-(y, β)] is the transition rate for the jump from the unfolded state (resp. folded state) to the folded state (resp. unfolded state). The presence of the jumps is a shortcoming of the hard spin model of Huxley and Simmons [74]; in the model with non-degenerate elastic bistable elements (soft spins) they are replaced by a continuous Langevin dynamics [99; 205], see Section 2.2.4. To compute the transition rates k_±(y, β) without knowing the energy landscape separating the two spin states, we first follow [74], who simply combined the elastic energy of the linear spring with the idea of a flat microscopic energy landscape between the wells, see Fig. 20(a,b) for the notations. Assuming further that the resulting barriers E_0 and E_1 = E_0 + v_0 are large compared to k_B T, we can use the Kramers approximation and write the transition rates in the form

k_+(y, β) = k_- exp[-β(y - y_0)],  k_-(y, β) = exp[-βE_1] = const,  (2.20)

where k_- determines the timescale of the dynamic response: τ = 1/k_- = exp[βE_1]. The latter is fully controlled by a single parameter E_1, whose value was chosen by HS to match the observations. Note that Eq. (2.20) is only valid if y > -1/2 [see Fig. 20(a)], which ensures that the energy barrier for the transition from pre- to post-power stroke is actually affected by the load. In the range y < -1/2, omitted by HS, the forward rate becomes constant, see Fig. 20(b). The fact that only one transition rate in the HS approach depends on the load makes the kinetic model non-symmetric: the overall equilibration rate between the two states, r = k_+ + k_-, monotonically decreases with stretching. For a long time this seemed to be in accordance with experiments [74; 85; 87; 206]; however, a recent reinterpretation of the experimental results in Ref. [207] suggested that the recovery rate may eventually increase with the amplitude of stretching. This finding can be made compatible with the HS framework if we assume that both energy barriers, for the power stroke and for the reverse power stroke, are load dependent, see Fig. 21 and Ref. [180] for more details. This turns out to be a built-in property of the soft spin model considered in Section 2.2. In the hard spin model with N elements, a single stochastic trajectory can be viewed as a random walk characterized by the transition probabilities

P[p_{t+dt} = p_t + 1/N] = ϕ_+(p_t, t)dt,
P[p_{t+dt} = p_t - 1/N] = ϕ_-(p_t, t)dt,
P[p_{t+dt} = p_t] = 1 - [ϕ_+(p_t, t) + ϕ_-(p_t, t)]dt,  (2.21)

where the rate ϕ_+ (resp. ϕ_-) describes the probability for one of the unfolded (resp. folded) elements to fold (resp. unfold) within the time window dt. While in the case of a hard device we can simply write ϕ_+(t) = N(1 - p_t)k_+(y, β) and ϕ_-(t) = N p_t k_-, in both soft and mixed devices y becomes an internal variable whose evolution depends on p, making the corresponding dynamics non-linear. The isothermal stochastic dynamics of the system specified by the transition rates (2.20) is most naturally described in terms of the probability density ρ(p, t). It satisfies the master equation

∂ρ(p, t)/∂t = ϕ_+(1 - p + 1/N, t) ρ(p - 1/N, t) + ϕ_-(p + 1/N, t) ρ(p + 1/N, t) - [ϕ_+(1 - p, t) + ϕ_-(p, t)] ρ(p, t),  (2.22)

where ϕ_+ and ϕ_- are the transition rates introduced in Eq. (2.21). This equation generalizes the HS mean-field kinetic equation dealing with the evolution of the first moment ⟨p⟩(t) = ∑ p ρ(p, t), namely

∂⟨p⟩(t)/∂t = ⟨ϕ_+(1 - p, t)⟩ - ⟨ϕ_-(p, t)⟩.  (2.23)
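A minimal numerical illustration of the jump process (2.21) is a kinetic Monte Carlo (Gillespie-type) simulation. The Python sketch below is our own illustration, restricted to the hard-device case where ϕ_+ = N(1 - p)k_+ and ϕ_- = N p k_- with the rates (2.20); time is measured in units of τ = 1/k_- and the remaining parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hard_device(y, beta, N=100, v0=1.0, t_max=50.0):
    """Gillespie simulation of the jump dynamics (2.21) for N hard-spin elements
    in a hard device, with rates (2.20); time in units of tau = 1/k_-."""
    y0 = v0 - 0.5
    k_minus = 1.0                                   # folded -> unfolded (sets the time unit)
    k_plus = k_minus * np.exp(-beta * (y - y0))     # unfolded -> folded
    n_folded, t = 0, 0.0                            # start fully unfolded
    times, fractions = [0.0], [0.0]
    while t < t_max:
        rate_fold = (N - n_folded) * k_plus         # phi_+ = N (1 - p) k_+
        rate_unfold = n_folded * k_minus            # phi_- = N p k_-
        total = rate_fold + rate_unfold
        t += rng.exponential(1.0 / total)           # waiting time to the next jump
        if rng.random() < rate_fold / total:
            n_folded += 1
        else:
            n_folded -= 1
        times.append(t)
        fractions.append(n_folded / N)
    return np.array(times), np.array(fractions)

# relaxation of p towards its equilibrium value at the symmetry point y = y0
t, p = simulate_hard_device(y=0.5, beta=2.0)
print("long-time average p ~", p[len(p) // 2:].mean())   # should approach 1/2 at y = y0
```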
In the case of a hard device, studied by HS, the linear dependence of ϕ_± on p allows one to compute the averages on the right-hand side of (2.23) explicitly. The result is the first-order reaction equation of HS,

∂⟨p⟩/∂t = k_+(y)(1 - ⟨p⟩) - k_-(y)⟨p⟩.  (2.24)

In this case the solution of the master equation (2.22) preserves the binomial form of the probability distribution,

ρ(p, t) = (N choose Np) [⟨p(t)⟩]^{Np} [1 - ⟨p(t)⟩]^{N - Np}.  (2.25)

The entire distribution is then enslaved to the dynamics of the order parameter ⟨p⟩(t) captured by the original HS model. It is then straightforward to show that in the long-time limit the distribution (2.25) converges to the Boltzmann distribution (2.8). In the soft and mixed devices the cross-bridges interact and the kinetic picture is more complex. To simplify the setting, we assume that the relaxation time associated with the internal variable y is negligible compared to the other time scales. This implies that the variable y can be considered as equilibrated, meaning in turn that y = ŷ(p, σ) = σ - p in a soft device and y = ŷ(p, z) = (1 + λ_b)⁻¹(λ_b z - p) in a mixed device. Below, we briefly discuss the soft device case, which already captures the effect of the mechanical coupling on the kinetics of the system. Details of this analysis can be found in Ref. [180]. To characterize the transition rates in a cluster of N > 1 elements under fixed external force, we introduce the energy w(p, p*) corresponding to a configuration where p elements are folded (x_i = -1) and p* elements are at the transition state (x_i = ℓ), see Fig. 21. The energy landscape separating two configurations p and q can be represented in terms of a "reaction coordinate" ξ interpolating between p and q, see Fig. 22. The transition rates between neighboring metastable states can be computed explicitly using our generalized HS model (see Fig. 21),

τϕ_+(p; σ, β) = N(1 - p) exp[-β∆w_+(p; σ)],
τϕ_-(p; σ, β) = N p exp[-β∆w_-(p; σ)],  (2.26)

where ∆w_± are the energy barriers separating neighboring states,

∆w_+(p; σ) = -ℓ(σ - p) - σ_0 + (1 + 3/N) ℓ²/2,
∆w_-(p; σ) = -(ℓ + 1)(σ - p) + (1 + 3/N) ℓ²/2 - (1 + N + 2ℓ)/(2N).

In (2.26), 1/τ = α exp[-βv*], with α = const, determines the overall timescale of the response. The mechanical coupling appearing in the exponent of (2.26) makes the dynamics nonlinear. To understand the peculiarities of the time-dependent response of the parallel bundle of N cross-bridges brought about by this nonlinearity, it is instructive to first examine the expression for the mean first passage time τ(p → p′) characterizing transitions between two metastable states with Np and Np′ (p < p′) folded elements. Following Ref. [208] (and omitting the dependence on σ and β), we can write

τ(p → p′) = ∑_{k=Np}^{Np′} [ρ(k) ϕ_+(k)]⁻¹ ∑_{i=0}^{k} ρ(i),  (2.27)

where ρ is the marginal equilibrium distribution at fixed p and ϕ_+ is the forward rate. In the case β > β_c, for the interval of loading [σ_-, σ_+], the marginal free energy g_∞ [see (2.14)] has two minima, which we denote p = p_1 and p = p_0, with p_0 < p_1. The minima are separated by a maximum located at p = p̄. We can distinguish two processes: (i) the intra-basin relaxation, which corresponds to reaching the metastable states (p = p_0 or p = p_1) starting from the top of the energy barrier p̄, and (ii) the inter-basin relaxation, which deals with transitions between macro-states.
For the intra-basin relaxation, the first passage time can be computed using Eq. (2.27), see Ref. [180]. The resulting rates φ(p̄ → p_{0,1}) ≡ 1/[τ(p̄ → p_{0,1})] are practically independent of the load and scale with 1/N, see Fig. 23(a). Regarding the transition between the two macro-states, we note that Eq. (2.27) can be simplified if N is sufficiently large. In this case the sums in Eq. (2.27) can be transformed into integrals,

τ(p_0 → p_1) = N² ∫_{p_0}^{p_1} [ρ_∞(u) ϕ_+(u)]⁻¹ [ ∫_0^u ρ_∞(v) dv ] du,  (2.28)

where ρ_∞ ∼ exp[-βN g_∞] is the marginal distribution in the thermodynamic limit. The inner integral in Eq. (2.28) can be computed using the Laplace method. Noticing that the function g_∞ has a single minimum in the interval [0, u > p_0], located at p_0, we can write

τ(p_0 → p_1) = [2πN/(β g″_∞(p_0))]^{1/2} ∫_{p_0}^{p_1} [ρ_∞(u) ϕ_+(u)]⁻¹ ρ_∞(p_0) du.

In the remaining integral, the inverse density 1/ρ_∞ is sharply peaked at p = p̄, so again using the Laplace method we obtain

τ(p_0 → p_1) = 2π (N/β) [ϕ_+(p̄)]⁻¹ [g″_∞(p_0) |g″_∞(p̄)|]^{-1/2} exp{βN[g_∞(p̄) - g_∞(p_0)]}.  (2.29)

We see that the first passage time is of the order of exp[βN∆g_∞], see Eq. (2.29), where ∆g_∞ is the height of the (intensive) energy barrier separating the two metastable states. In the thermodynamic limit, the total barrier N∆g_∞ grows linearly with N and the passage time grows exponentially, which freezes the collective inter-basin dynamics and generates metastability, see Fig. 23[(b) and (c)] and Ref. [180]. The above analysis can be generalized to the case of a mixed device by replacing the soft device marginal free energy g by its mixed device analog. The kinetic behavior of the system in the general case is illustrated in Fig. 24. The individual trajectories generated by the stochastic dynamics (2.21) are shown for N = 100. The system is subjected to a slow stretching in hard [(a) and (b)], soft [(c) and (d)] and mixed [(e) and (f)] devices. These numerical experiments mimic various loading protocols used for unzipping tests on biological macro-molecules [186; 199; 203; 209]. Observe that individual trajectories at finite N show a succession of jumps corresponding to collective folding-unfolding events. At large temperatures, see Fig. 24[(a), (c) and (e)], the transition between the folded and the unfolded state is smooth and is associated with a continuous drift of a unimodal density distribution, see the inserts in Fig. 24. In the hard device such behavior persists even at low temperatures, see (b), which correlates with the fact that the marginal free energy in this case is always convex. Below the critical temperature [(d) and (f)], the mechanical response becomes hysteretic. The hysteresis is due to the presence of the macroscopic wells in the marginal free energy, which is also evident from the bimodal distribution of the cross-bridges shown in the inserts. A study of the influence of the loading rate on the mechanical response of the system can be found in Ref. [180]. To illustrate the fast force recovery phenomenon, consider the response of the system to an instantaneous load increment. We compare the behavior predicted by the mean-field kinetic equation of HS with stochastic simulations resolving the full master equation: the mean-field description fails to reproduce the exact two-scale dynamics at low temperatures, even though the final equilibrium states are captured correctly. The difference between the chemo-mechanical description of HS and the stochastic simulation targeting the full probability distribution is due to the fact that in the equation describing the mean-field kinetics the transition rates are computed based on the average values of the order parameter.
At large temperatures, where the distribution is unimodal, the average values faithfully describe the most probable states and therefore the mean-field kinetic theory captures the timescale of the response adequately, see Fig. 25(a). Instead, at low temperatures, when the distribution is bimodal, the averaged values correspond to states that are poorly populated, see Fig. 25(b), where the initial value ⟨p⟩_in = 1/2. The value of the order parameter that actually controls the slow kinetics describes a particular metastable configuration rather than the average state, and therefore the mean-field kinetic equation fails to reproduce the real dynamics, see Fig. 25[(b) and (c)].

Soft spin model

The hard spin model predicts that the slope of the T_1 curve, describing the instantaneous stiffness of the fiber, and the slope of the T_2 curve are equal, which differs from what is observed experimentally, see Fig. 5. The soft spin model [99; 205] was developed to overcome this problem and to provide a purely mechanical continuous description of the phenomenon of fast force recovery. To this end, the discrete degrees of freedom were replaced by continuous variables x_i; the latter can be interpreted as projected angles formed by the segment S1 of the myosin head with the actin filament. Most importantly, the introduction of continuous variables has eliminated the necessity of using multiple intermediate configurations for the head domain [67; 68; 86]. The simplest way to account for the bistability in the configuration of the myosin head is to associate a bi-quadratic double-well energy u_SS(x_i) with each variable x_i, see Fig. 26(a); interestingly, a comparison with the reconstructed potentials for the unfolding of biological macro-molecules shows that a bi-quadratic approximation may be quantitatively adequate [186]. A non-degenerate spinodal region can easily be incorporated into this model; however, in this case we lose the desirable analytical transparency. It is sufficient for our purposes to keep the other ingredients of the hard spin model intact; the original variant of the soft spin model (see Ref. [Marcucci, A mechanical model of muscle contraction]) corresponded to the limit κ_b/(N κ_0) → ∞. In the soft spin model the total energy of a cross-bridge can be written in the form

v(x, y) = u_SS(x) + (κ_0/2)(y - x)²,  (2.30)

where

u_SS(x) = (1/2)κ_1 x² + v_0 if x > ℓ, and u_SS(x) = (1/2)κ_2(x + a)² if x ≤ ℓ.  (2.31)

The parameter ℓ describes the point of intersection of the two parabolas in the interval [-a, 0], and therefore v_0 = (κ_2/2)(ℓ + a)² - (κ_1/2)ℓ² is the energy difference between the pre-power-stroke and the post-power-stroke configurations. It will be convenient to use normalized parameters to characterize the asymmetry of the energy wells: λ_2 = κ_2/(1 + κ_2) and λ_1 = κ_1/(1 + κ_1). The dimensionless total internal energy per element of a cluster now reads

v(x, y; z) = (1/N) ∑_{i=1}^{N} [ u_SS(x_i) + (1/2)(y - x_i)² + (λ_b/2)(z - y)² ],  (2.32)

where λ_b = κ_b/(N κ_0). Here z is the control parameter. In the soft device case, the energy takes the form

w(x, y; σ) = (1/N) ∑_{i=1}^{N} [ u_SS(x_i) + (1/2)(y - x_i)² - σy ],  (2.33)

where σ is the applied tension per cross-bridge, see Ref. [179] for the details. Parameterizing the metastable states by the fraction of folded elements p, defined through the indicators α_i, where α_i = 1 if x_i > ℓ and 0 otherwise, we find that the global minimum of the energy again corresponds to one of the homogeneous states p = 0, 1, with a sharp transition at z = z_0.
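The bi-quadratic potential (2.31) and the single cross-bridge energy (2.30) can be tabulated directly. In the Python sketch below (our illustration; the stiffnesses κ_1, κ_2 and the matching point ℓ are arbitrary dimensionless values with a = 1), the bias v_0 is fixed by continuity of the two parabolas at x = ℓ, as in the text.

```python
import numpy as np

def u_ss(x, k1=2.0, k2=1.0, a=1.0, ell=-0.5):
    """Bi-quadratic double-well energy (2.31); k1, k2, ell are illustrative values."""
    v0 = 0.5 * k2 * (ell + a)**2 - 0.5 * k1 * ell**2   # bias fixed by continuity at x = ell
    return np.where(x > ell, 0.5 * k1 * x**2 + v0, 0.5 * k2 * (x + a)**2)

def cross_bridge_energy(x, y, k0=1.0):
    """Energy (2.30) of one cross-bridge: double well in series with a linear spring."""
    return u_ss(x) + 0.5 * k0 * (y - x)**2

# energy profile seen by the internal variable x at a few fixed elongations y
x = np.linspace(-2.0, 1.0, 601)
for y in (-1.0, -0.5, 0.0):
    v = cross_bridge_energy(x, y)
    barrier = v[np.argmin(np.abs(x + 0.5))] - v.min()   # rough barrier height measured at x = ell
    print(f"y = {y:+.1f}: global minimum at x ~ {x[np.argmin(v)]:.2f}, barrier above it ~ {barrier:.2f}")
```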
We can also take advantage of the fact that the soft spin model deals with continuous variables x_i and define a continuous reaction path connecting metastable states with different numbers of folded units. Each folding event is characterized by a micro energy barrier that can now be computed explicitly. The typical structure of the resulting energy landscape is illustrated in Fig. 27 for different values of the coupling parameter λ_b, see Ref. [179] for the details. In Fig. 28 we illustrate the zero-temperature behavior of the soft spin model with a realistic set of parameters, see Table 1 below.

Finite temperature behavior. When z is the control parameter (mixed device), the equilibrium probability distribution for the remaining mechanical degrees of freedom can be written in the form ρ(x, y; z, β) = Z⁻¹(z, β) exp[-βNv(x, y; z)], where β = (κ_0 a²)/(k_B T) and Z(z, β) = ∫ exp[-βNv(x, y; z)] dx dy. In the soft device ensemble, z becomes a variable and the equilibrium distribution takes the form

ρ(x, y, z; σ, β) = Z⁻¹(σ, β) exp[-βNw(x, y, z; σ)],  (2.34)

with Z(σ, β) = ∫ exp[-βNw(x, y, z; σ)] dx dy dz. When z is fixed, the internal state of the system can again be characterized by the two mesoscopic parameters y and p. By integrating (2.34) over x and y we can define the marginal density ρ(p; z, β) = Z⁻¹ exp[-βN f(p; z, β)]. Here f is the marginal free energy at fixed (p, z), which is illustrated in Fig. 29. As we see, the system undergoes an order-disorder phase transition which is controlled by the temperature and by the elasticity of the backbone. If the double-well potential is symmetric (λ_1 = λ_2), this transition is of second order, as in the hard spin model. Typical bifurcation diagrams for the case of slightly non-symmetric energy wells are shown in Fig. 30. The main feature of the model without symmetry is that the second-order phase transition becomes a first-order phase transition: in the transition region the marginal free energy has three critical points, two corresponding to metastable states and one to an unstable state. The equilibrium response can be obtained by computing the partition function Z numerically. In the thermodynamic limit, we can employ the same methods as in the previous section and identify the equilibrium mechanical properties of the system with the global minimum of the marginal free energy f. In Fig. 31[(c)-(e)] we illustrate the equilibrium mechanical response of the system; similar phase diagrams have also been obtained for other systems with long-range interactions [211]. While the soft spin model is analytically much less transparent than the hard spin model, we can still show analytically that the system develops negative stiffness at sufficiently low temperatures. Indeed, we can write f″ = ⟨σ⟩′ = λ_b[1 - βNλ_b⟨(y - ⟨y⟩)²⟩], where f is the equilibrium free energy of the system in a mixed device. This expression is sign indefinite and, by the same reasoning as in the hard spin case, one can show that the critical line separating Phase I and Phase II is represented in Fig. 31 by a vertical line T = T_c. In Phase I (T > T_c) the equilibrium free energy is convex and the resulting tension-elongation relation is monotone. In Phase II (T < T_c) the equilibrium free energy is non-convex and the tension-elongation relation exhibits an interval with negative stiffness. In Phase III the energy is non-convex within a finite interval around z = z_0, see the dotted line in Fig. 31(e).
As a result the system has to oscillate between two metastable states to remain in the global minimum of the free energy [solid line in Fig. 31(e)]. The ensuing equilibrium tension-elongation curve is characterized by a jump located at z = z 0 . Observe that the critical line separating Phase II and Phase III in Fig. 31 (b) represents the minimum number of crossbridges necessary to obtain a cooperative behavior at a given value of the temperature. We see that for temperatures around 300 K, the critical value of N is about 100 which corresponds approximately to the number of cross-bridges involved in isometric contraction in each half-sarcomere, see Section 2.2.3. This observation suggests that muscle fibers may be tuned to work close to the critical state [99]. A definitive statement of this type, however, cannot be made at this point in view of the considerable error bars in the data presented in Table 1. In a soft device, a similar analysis can be performed in terms of the marginal Gibbs free energy g(p; σ, β). A comparison of the free energies of a symmetric system in the hard and the soft device ensembles is presented in Fig. 32, where the parameters are such that the system in the hard device is in phase III, see Fig. 31. We observe that both free energies are bi-stable in this range of parameters, however the energy barrier separating the two wells in the hard device case is about three times smaller than in the case of a soft device. Since the macroscopic energy barrier separating the two state is proportional to N, the characteristic time of a transition increases exponentially with N as in the hard spin model, see Section 2.1.3. Therefore the kinetics of the power-stroke will be exponentially slower in the soft device than in the hard device as it is observed in experiment, see more about this in the next section. Note also that the macroscopic oscillations are more coherent in a soft device than in a hard device. By differentiating the equilibrium Gibbs free energy g(σ, β) = -1/(βN) log [Z(σ, β)] with respect to σ, we obtain the tension-elongation relation, which in a soft device is always monotone since g′′ = - [ 1 + βN ⟨ (z -⟨z⟩) 2 ⟩] < 0. This shows once again that soft and hard device ensembles are non-equivalent, in particular, that only the system in a hard device can exhibit negative susceptibility. In Fig. 33, we illustrate the behavior of the equilibrium free energies f and g in thermodynamic limit [(a) and (b)] together with the corresponding tension-elongation relations [(b) and (d)], see Ref. [212] for the details. The tension and elongation are normalized by their values at the transition point where ⟨p⟩ = 1/2 while the value of β is taken from experiments (solid line). The bi-stability (metastability) takes place in the gray region and we see that this region is much wider in the soft device than in the hard device, which corroborates that the energy barrier is higher in a soft device. Matching experiments. The next step is to match the model with experimental data. The difficulty of the parameter identification lies in the fact that the experimental results vary depending on the species, and here we limit our analysis to the data obtained from rana temporaria [73; 80; 87; 89]. Typical values of the parameters of the non-dimensional model obtained from these data are listed in Table 1. The first parameter a is obtained from structural analysis of myosin II [73; 96-98]. 
It has been shown that its tertiary structure can be found in two conformations forming an angle of ∼70°. This corresponds to an axial displacement of the lever arm end of ∼10 nm. We therefore fix the characteristic length of the model at a = (10 ± 1) nm. The absolute temperature T is set to 277.15 K, which corresponds to 4 °C. This is the temperature at which most experiments on frog muscles are performed [206]. Several experimental studies aimed at measuring the stiffness of the myosin head and of the myofilaments (our backbone). One technique consists in applying rapid (100 µs) length steps to a tetanized fiber to obtain its overall stiffness κ_tot, which corresponds to the elastic backbone in series with N cross-bridges: κ_tot = (Nκ_0κ_b)/(Nκ_0 + κ_b). The stiffness associated with the double-well potential (κ_{1,2}) is not included in this formula because the time of the purely elastic response is shorter than the time of the conformational change. This implies the assumption that the conformational degree of freedom is "frozen" during the purely elastic response. Such an assumption is supported by experiments reported in Ref. [213], where shortening steps were applied at different stages of the fast force recovery, that is, during the power stroke. The results show that the overall stiffness is the same in the recovery process and in isometric conditions. If we change the chemical environment inside the fiber by removing the cell membrane ("skinning"), it is possible to perform the length steps under different calcium concentrations. We recall that calcium ions bind to the tropomyosin complex to allow the attachment of myosin heads to actin. Therefore, by changing the calcium environment, one can change the number of attached motors (N) and thus their contribution to the total stiffness, while the contribution of the filaments remains the same [87; 89; 214]. Another solution is to apply rapid oscillations during the activation phase when force rises [100; 215]. These different techniques give κ_b = (150 ± 10) pN nm⁻¹, a value which is compatible with independent X-ray measurements [76; 91; 93; 100; 167; 168; 181; 216]. To determine the stiffness of a single element, elastic measurements have been performed on fibers in rigor mortis, where all the 294 cross-bridges of a half-sarcomere are attached, see Ref. [89]. Under the assumption that the filament elasticity is the same in rigor and in the state of tetanus, one can deduce the stiffness of a single cross-bridge. The value extracted from experiment is κ_0 = (2.7 ± 0.9) pN nm⁻¹. As we have seen, the phase diagram shown in Fig. 31(b) suggests a way to understand why N ≈ 100. Larger values of N are beneficial from the perspective of the total force developed by the system. However, reaching deep inside phase III means a highly coherent response, which gets progressively more sluggish as N increases. In this sense, staying close to the critical line would be a compromise between a high force and a high responsiveness. It follows from the developed theory that for the normal temperature the corresponding value of N would be exactly around 100; see Ref. [217] for an attempt at a similar evolutionary justification of the size of the titin molecule. There are, of course, other functional advantages of near-criticality associated, for instance, with a diverging correlation length and the possibility of a fast coherent response.
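As a quick consistency check of the quoted stiffness values (our own arithmetic on the nominal numbers, ignoring the error bars), one can compare the contributions of the cross-bridge array and of the filaments to the overall series stiffness κ_tot for different numbers of attached heads.

```python
# Consistency check of the quoted stiffness values (nominal numbers, no error bars).
kappa_0 = 2.7      # single cross-bridge stiffness, pN/nm
kappa_b = 150.0    # combined filament (backbone) stiffness, pN/nm

for N in (50, 100, 150, 294):
    kappa_cb = N * kappa_0                                   # parallel array of cross-bridges
    kappa_tot = kappa_cb * kappa_b / (kappa_cb + kappa_b)    # in series with the backbone
    lambda_b = kappa_b / (N * kappa_0)                       # dimensionless coupling parameter
    print(f"N = {N:3d}: kappa_tot = {kappa_tot:6.1f} pN/nm, lambda_b = {lambda_b:.2f}")
# With N around 100 attached heads, N*kappa_0 exceeds kappa_b, so the filaments account for
# more than half of the total compliance (lambda_b < 1) and cannot be neglected in the model.
```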
At the end of the second phase of the fast force recovery (see Section 1.2.2), the system reaches an equilibrium state characterized by the tension T_2 in a hard device or by the shortening L_2 in a soft device. The values of these parameters are naturally linked with the equilibrium tension ⟨σ⟩ in a hard device and the equilibrium length ⟨z⟩ in a soft device. In particular, the theory predicts that in the large-deformation (shortening or stretching) regimes the tension-elongation relation must be linear, see Fig. 33. The linear stiffness in these regimes corresponds to N elastic elements, each with stiffness κ_1 or κ_2, arranged in series with a spring of stiffness κ_b. Using the classical dimensional notations (T, L) instead of the non-dimensional (σ, z), the tension-elongation relation at large shortening takes the form

T_2(L) = \frac{\dfrac{\kappa_0\kappa_2}{\kappa_0+\kappa_2}\,\kappa_b}{\dfrac{\kappa_0\kappa_2}{\kappa_0+\kappa_2}+\kappa_b}\,(L + a).

In experiment, the tension T_2 drops to zero when a step L_2 ≃ −14 nm hs⁻¹ (nanometers per half-sarcomere) is applied to the initial configuration L_0. Therefore L_0 = −a − L_2. Since a = 11 nm, we obtain L_0 = 3.2 nm. Using a linear fit of the experimental curve shown in Fig. 5 (shortening), we finally obtain κ_2 ≃ 1 pN nm⁻¹. The value of κ_1 is more difficult to determine since there are only a few papers dealing with stretching [94; 218]. Based on the few available measurements, we conclude that the stiffness in stretching is about 1.5 times larger than in shortening, which gives κ_1 ≃ 3.6 pN nm⁻¹. A recent analysis of the fast force recovery confirms this estimate [207]. The last parameter to determine is the intrinsic bias of the double-well potential, v_0, which controls the active tension in the isometric state. The tetanus of a single sarcomere in physiological conditions is of the order of 500 pN [80; 100]. If we adjust v_0 to ensure that the equilibrium tension matches this value, we obtain v_0 ≃ 50 zJ. This energetic bias can also be interpreted as the maximum amount of mechanical work that the cross-bridge can produce during one stroke. Since the amount of metabolic energy resulting from the hydrolysis of one ATP molecule is of the order of 100 zJ, we obtain a maximum efficiency of around 50 %, which agrees with the value currently favoured in the literature [18; 219].

Kinetics. After the values of the nondimensional parameters are identified, one can simulate numerically the kinetics of fast force recovery by exposing the mechanical system to a Langevin thermostat. For simplicity, we assume that the macroscopic variables y and z are fast and are always mechanically equilibrated. Such a quasi-adiabatic approximation is not essential, but it allows us to operate with a single relaxation time scale associated with the microscopic variables x_i. Denoting by η the corresponding drag coefficient, we construct the characteristic time scale τ = η/κ, which will be adjusted to fit the overall rate of fast force recovery. The response of the internal variables x_i is governed by the non-dimensional system

dx_i = b\,dt + \sqrt{2\beta^{-1}}\,dB_i,

where the drift is

b(x_i; z) = -u'_{SS}(x_i) + \frac{1}{1+\lambda_b}\Big(\lambda_b z + \frac{1}{N}\sum_j x_j\Big) - x_i, \qquad
b(x_i; \sigma) = -u'_{SS}(x_i) + \sigma + \frac{1}{N}\sum_j x_j - x_i

in a hard and a soft device, respectively. In both cases the term dB_i represents a standard Wiener process. In Fig. 34, we illustrate the results of stochastic simulations imitating fast force recovery, using the same notations as in actual experiments.
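For orientation, here is a minimal Euler-Maruyama sketch of such a hard-device simulation. The bi-quadratic double well, the helper name du_ss and all parameter values are illustrative placeholders, not the calibrated set used for Fig. 34.

import numpy as np

# Minimal Euler-Maruyama sketch of the hard-device Langevin dynamics for the
# internal variables x_i of the soft-spin model. Placeholder parameters only.

rng = np.random.default_rng(0)

N, beta, lam_b = 100, 30.0, 1.0     # cross-bridges, inverse temperature, backbone stiffness ratio
z = -0.3                            # imposed (nondimensional) elongation per cross-bridge
dt, n_steps = 1e-3, 20000

def du_ss(x, l=-0.5, lam1=0.5, lam2=0.5):
    # derivative of an assumed bi-quadratic double well with minima at 0 and -1
    # and spinodal point at x = l
    return np.where(x >= l, lam1 * x, lam2 * (x + 1.0))

x = np.zeros(N)                     # start in the pre-power-stroke well
for _ in range(n_steps):
    drift = -du_ss(x) + (lam_b * z + x.mean()) / (1.0 + lam_b) - x
    x += drift * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(N)

y = (lam_b * z + x.mean()) / (1.0 + lam_b)      # equilibrated backbone position
print("fraction post-power-stroke:", np.mean(x < -0.5))
print("tension per cross-bridge  :", lam_b * (z - y))

Averaging the backbone tension over many such trajectories after a sudden change of z mimics the length-step protocol described next.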
The system, initially in thermal equilibrium at fixed L 0 (or T 0 ), was perturbed by applying fast (∼100 µs) length (load) steps with different amplitudes. Typical ensemble-averaged trajectories are shown in Fig. 34[(a) and (b)] in the cases of hard and soft device, respectively. In a soft device (b) the system was not able to reach equilibrium within the realistic time scale when the applied load was sufficiently close to T 0 , see, for instance, the curve T = 0.9 T 0 in Fig. 34(b), where the expected equilibrium value is L 2 = -5 nm hs -1 . Instead, it remained trapped in a quasistationary (glassy) state because of the high energy barrier required to be crossed in the process of the collective powerstroke. The implied kinetic trapping, which fits the pattern of two-stage dynamics exhibited by systems with long-range interactions [211; 220; 221], may explain the failure to reach equilibrium in experiments reported in Refs. [92; 161; 222]. In the hard device case, the cooperation among the cross-bridges is much weaker and therefore the kinetics is much faster, which allows the system to reach equilibrium at experimental time scale. A quantitative comparison of the obtained tensionelongation curves with experimental data [see Fig. 34(c)] shows that for large load steps the equilibrium tension fits the linear behavior observed in experiment as it can be expected from our calibration procedure. For near isometric tension in a soft device the model also predicts the correct interval of ki-netic trapping, see the gray region in Fig. 34(c). While the model suggests that negative stiffness should be a characteristic feature of the realistic response in a hard device for a single half-sarcomere (see Fig. 31), such behavior has not been observed in experiments on whole myofibrils. Note, however, that in the model all cross bridges are considered to be identical and, in particular, it is assumed that they are attached with the same initial pre-strain. If there exists a considerable quenched disorder resulting from the randomness of the attachment/detachment positions, the effective force elongation curve will be flatter [151]. Another reason for the disappearence of the negative susceptibility may be that the actual spring stiffness inside a cross-bridge is smaller due to nonlinear elasticity [223]. One can also expect the unstable half-sarcomeres to be stabilized actively through processes involving ATP , see Refs. [158; 224] and our Section 3. The softening can be also explained by the collective dynamics of many half sarcomeres organized in series, see our Section 2.3. The comparison of the rates of fast recovery obtained in our simulations with experimental data (see Fig. 34) shows that the soft-spin model reproduces the kinetic data in both hard and soft ensembles rather well. Note, in particular, that the rate of recovery in both shortening and stretching protocols increases with load. This is a direct consequence of the fact that the energy barriers for forward and the reverse transitions depend on the mechanical load. Instead, in the original formulation of the HS, and in most subsequent chemomechanical models, the reverse rate was kept constant and this effect was missing. In Ref. [207], the authors proposed to refine the HS model by introducing a load dependent barrier also for the reversed stroke, see the results of their modeling in Fig. 34. Interacting half-sarcomeres So far, attention has been focused on (passive) behavior of a single force generating unit, a half-sarcomere. 
We dealt with a zero-dimensional, mean-field model without spatial complexity. However, as we saw in Fig. 8(a), such elementary force-generating units are arranged into a complex, spatially extended structure. Various types of cross-links in this structure can be roughly categorized as parallel or series connections. A prevalent perspective in the physiological literature is that the interaction among force-generating units is so strong that the mean-field model of a single unit provides an adequate description of the whole myofibril. The underlying assumption is that the deformation associated with muscle contractions is globally affine. To challenge this hypothesis, we consider in this Section the simplest arrangement of force-generating units. We assume that the whole section of a muscle myofibril between the neighboring Z-disk and M-line deforms in an affine way and treat such a transversely extended unit as a (macro) half-sarcomere. The neighboring (macro) half-sarcomeres, however, will be allowed to deform in a non-affine way. The resulting model describes a chain of (macro) half-sarcomeres arranged in series, and the question is whether the fast force recovery in such a chain takes place in an affine way [225]. Chain models of a muscle myofibril were considered in Refs. [2; 114; 226], where the non-affinity of the deformation was established based on numerical simulations of kinetics. Analytical studies of athermal chain models with bi-stable elements have also been performed. Here we present a simple analytical study of the equilibrium properties of a chain of half-sarcomeres which draws on Ref. [231] and allows one to understand the outcome of the numerical experiments conducted in Ref. [114].

Two half-sarcomeres. Consider first the most elementary series connection of two half-sarcomeres, each of them represented as a parallel bundle of N cross-bridges. This system can be viewed as a schematic description of a single sarcomere, see Fig. 35(b). To understand the mechanics of this system, we begin with the case where the temperature is equal to zero. The total (nondimensional) energy per cross-bridge reads

v = \frac{1}{2}\left\{ \frac{1}{N}\sum_{i=1}^{N}\left[ u_{SS}(x_{1i}) + \frac{1}{2}(y_1 - x_{1i})^2 + \frac{\lambda_b}{2}(z_1 - y_1)^2 \right] + \frac{1}{N}\sum_{i=1}^{N}\left[ u_{SS}(x_{2i}) + \frac{1}{2}(y_2 - x_{2i})^2 + \frac{\lambda_b}{2}(z_2 - y_2)^2 \right] \right\}.  (2.35)

In the hard device case, when we impose the average elongation z = (1/2)(z_1 + z_2), neither of the half-sarcomeres is loaded individually in either a soft or a hard device. In the soft device case, the applied tension σ, which we normalize by the number of cross-bridges in a half-sarcomere, is the same in each half-sarcomere when the whole system is in equilibrium. The corresponding dimensionless energy per cross-bridge is w = v − σz. The equilibrium equations for the continuous variables x_i are the same in hard and soft devices, and have up to three solutions,

\hat{x}_{k1}(y_k) = (1-\lambda_1)\,\hat{y}_k, \quad \text{if } x_{ki} \ge \ell,
\hat{x}_{k2}(y_k) = (1-\lambda_2)\,\hat{y}_k - \lambda_2, \quad \text{if } x_{ki} < \ell,
\hat{x}_{k*} = \ell,  (2.36)

where again λ_{1,2} = κ_{1,2}/(1 + κ_{1,2}) and ŷ_k denotes the equilibrium elongation of the half-sarcomere with index k = 1, 2. We denote by ξ = {ξ_1, ξ_2} the micro-configuration of a sarcomere, where the triplets ξ_k = (p_k, r_k, q_k), with p_k + q_k + r_k = 1, characterize the fractions of cross-bridges in half-sarcomere k that occupy the positions x̂_{k2}, x̂_{k*} (spinodal state) and x̂_{k1}, respectively.
For a given configuration ξ_k, the equilibrium value of y_k is given by

\hat{y}_k(\xi_k, z_k) = \frac{\lambda_b \hat{z}_k + r_k \ell - p_k \lambda_2}{\lambda_b + \lambda_{xb}(\xi_k)},

where λ_{xb}(ξ_k) = p_k λ_2 + q_k λ_1 + r_k is the stiffness of each half-sarcomere. The elongation of a half-sarcomere in equilibrium is ẑ_k = ŷ_k + σ/λ_b, where σ is a function of z and ξ in the hard device case and a parameter in the soft device case. To close the system of equations we need to add the equilibrium relation between the tension σ and the total elongation z = (1/2)(ŷ_1 + ŷ_2) + σ/λ_b. After simplifications, we obtain

\sigma(z, \xi) = \lambda(\xi)\left[ z + \frac{1}{2}\left( \frac{p_1\lambda_2 - r_1\ell}{\lambda_{xb}(\xi_1)} + \frac{p_2\lambda_2 - r_2\ell}{\lambda_{xb}(\xi_2)} \right)\right],  (2.37)

\hat{z}(\sigma, \xi) = \frac{\sigma}{\lambda(\xi)} - \frac{1}{2}\left( \frac{p_1\lambda_2 - r_1\ell}{\lambda_{xb}(\xi_1)} + \frac{p_2\lambda_2 - r_2\ell}{\lambda_{xb}(\xi_2)} \right)  (2.38)

in a hard and a soft device, respectively, where λ(ξ)⁻¹ = λ_b⁻¹ + (1/2)[λ_{xb}(ξ_1)⁻¹ + λ_{xb}(ξ_2)⁻¹] is the compliance of the whole sarcomere. The stability of a configuration (ξ_1, ξ_2) can be checked by computing the Hessian of the total energy, and one can show that configurations containing cross-bridges in the spinodal state are unstable, see Refs. [212; 227] for details. We illustrate the metastable configurations in Fig. 36 (hard device) and Fig. 37 (soft device). For simplicity, we used a symmetric double-well potential (λ_1 = λ_2 = 0.5, ℓ = −0.5). Each metastable configuration is labeled by a number representing a micro-configuration in the form {(p_1, q_1), (p_2, q_2)}, where p_k = 0, 1/2, 1 (resp. q_k = 0, 1/2, 1) denotes the fraction of cross-bridges in the post-power-stroke state (resp. pre-power-stroke state) in half-sarcomere k. The correspondence between labels and configurations goes as follows: 1: {(1, 0), (1, 0)}; 2 and 2′: {(1, 0), (1/2, 1/2)} and {(1/2, 1/2), (1, 0)}; 3: {(1/2, 1/2), (1/2, 1/2)}; 4 and 4′: {(1, 0), (0, 1)} and {(0, 1), (1, 0)}; 5 and 5′: {(1/2, 1/2), (0, 1)} and {(0, 1), (1/2, 1/2)}; 6: {(0, 1), (0, 1)}. For instance, the label 2′: {(1/2, 1/2), (1, 0)} corresponds to a configuration where, in the first half-sarcomere, half of the cross-bridges are in the post-power-stroke state and the other half are in the pre-power-stroke state, while in the second half-sarcomere all the cross-bridges are in the post-power-stroke state. In the hard device case (see Fig. 36) the system, following the global minimum path (bold line), evolves through the non-affine states 4 {(1, 0), (0, 1)} and 4′ {(0, 1), (1, 0)}, where one half-sarcomere is fully in the pre-power-stroke state and the other one is fully in the post-power-stroke state. This path is marked by two transitions located at z*_1 and z*_2, see Fig. 36(a). The inserted sketches in Fig. 36(b) show a single sarcomere in the three configurations encountered along the global minimum path. Note that along the two affine branches, where the sarcomere is in an affine state (1 and 6), the M-line (the middle vertical dashed line) is in the middle of the structure. Instead, in the non-affine state (branch 4), the two half-sarcomeres are not equally stretched, and the M-line is not positioned in the center of the sarcomere. As a result of the (spontaneous) symmetry breaking, the M-line can be shifted in either of the two possible directions to form either configuration 4 or 4′, see also Ref. [225]. In the soft device case [see Fig. 37], the system following the global minimum path never explores non-affine states. Instead, both half-sarcomeres undergo a full unfolding transition at the same threshold tension σ*.
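The zero-temperature branch structure just described is easy to reproduce numerically. The sketch below condenses the energy of each half-sarcomere over its internal variables and compares the branch energies of the nine configurations with r_k = 0 in a hard device. The helper names, the value of λ_b and the grid ranges are illustrative assumptions for the symmetric case λ_1 = λ_2 = 0.5.

import numpy as np

# Sketch: zero-temperature branch energies of two half-sarcomeres in series
# (hard device). Each branch is labeled by (p1, p2), the post-power-stroke
# fractions; spinodal fractions are set to zero. Illustrative parameters only.

lam1 = lam2 = 0.5
lam_b = 1.0          # assumed backbone stiffness ratio (not the calibrated value)

def half_sarcomere_energy(zk, p):
    """Condensed per-cross-bridge energy of one half-sarcomere at elongation zk."""
    A = p * lam2 + (1 - p) * lam1 + lam_b
    y = (lam_b * zk - p * lam2) / A                      # equilibrium backbone position
    return (p * lam2 / 2 * (y + 1) ** 2 + (1 - p) * lam1 / 2 * y ** 2
            + lam_b / 2 * (zk - y) ** 2)

def sarcomere_energy(z, p1, p2, grid=np.linspace(-3, 2, 2001)):
    """Minimize over the internal split z1 + z2 = 2z."""
    vals = 0.5 * (half_sarcomere_energy(grid, p1) + half_sarcomere_energy(2 * z - grid, p2))
    return vals.min()

zs = np.linspace(-1.5, 0.5, 201)
branches = {(p1, p2): np.array([sarcomere_energy(z, p1, p2) for z in zs])
            for p1 in (0.0, 0.5, 1.0) for p2 in (0.0, 0.5, 1.0)}

# global-minimum path: which branch has the lowest energy at each z
best = [min(branches, key=lambda k: branches[k][i]) for i in range(len(zs))]
for z, b in zip(zs[::50], best[::50]):
    print(f"z = {z:+.2f}  ->  (p1, p2) = {b}")

Tracing the lowest-energy label as z varies reproduces the sequence of affine and non-affine states encountered along the global minimum path of Fig. 36.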
If the temperature is different from zero we need to compute the partition functions

Z_2(z, \beta) = \int \exp\left[-2\beta N v(z, x)\right]\,\delta(z_1 + z_2 - 2z)\,dx,  (2.39)

Z_2(\sigma, \beta) = \int \exp\left[-2\beta N w(\sigma, x)\right]\,dx,  (2.40)

in a hard and a soft device, respectively, where again β = κa²/(k_b T). The corresponding free energies are f_2(z, β) = −(1/β) log[Z_2(z)] and g_2(σ, β) = −(1/β) log[Z_2(σ)]. The explicit expressions of these free energies can be obtained in the thermodynamic limit N → ∞, but they are too long to be presented here, see Refs. [212; 231] for more details. We illustrate the results in Fig. 38, where we show both the energies and the tension-elongation isotherms. We see that a sarcomere exhibits different behavior in the two loading conditions. In particular, the Gibbs free energy remains concave in the soft device case for all temperatures, while the Helmholtz free energy becomes nonconvex at low temperatures in the hard device case. Nonconvexity of the Helmholtz free energy results in nonmonotone tension-elongation relations with the development of negative stiffness. It is instructive to compare the obtained non-affine tension-elongation relations with the ones computed under the assumption that each half-sarcomere is an elementary constitutive element with a prescribed tension-elongation relation. We suppose that such a relation can be extracted from the response of a half-sarcomere in either a soft or a hard device, which allows us to use the expressions obtained earlier, see Fig. 33. The hard device case is presented in Fig. 39. With thick lines we show the equilibrium tension-elongation relation, while thin lines correspond to the behavior of two phenomenologically modeled half-sarcomeres in series, each exhibiting either soft or hard device constitutive behavior. Note that if the chosen constitutive relation corresponds to the hard device protocol [illustrated in Fig. 33(b)], we obtain several equilibrium states for a given total elongation, which is a result of the imposed constitutive constraints, see Fig. 39(a). The global minimum path predicted by the "constitutive model" shows discontinuous transitions between stable branches which resemble the continuous transitions along the actual equilibrium path. If instead we use the soft device constitutive law for the description of individual half-sarcomeres [illustrated in Fig. 33(d)], the tension-elongation response becomes monotone and is therefore completely unrealistic, see Fig. 39(b). We reiterate that in both comparisons the misfit is due to the fact that in a fully equilibrated sarcomere neither of the half-sarcomeres is loaded in either a soft or a hard device. It would be interesting to show that a less schematic system of this type can reproduce the non-affinities observed experimentally [235]. In Fig. 40 we present the result of a similar analysis for a sarcomere loaded in a soft device. In this case, if the "constitutive model" is based on the hard device tension-elongation relations [from Fig. 33(b)], we obtain the same (constrained) metastable states as in the previous case, see Fig. 39(a), thin lines. This means, in particular, that the response contains jumps while the actual equilibrium response is monotone, see Fig. 40(a). Instead, if we take the soft device tension-elongation relation as a "constitutive model", we obtain the correct overall behavior, see Fig. 40(b). This is expected since in the (global) soft device case both half-sarcomeres are effectively loaded in the same soft device and the overall response is affine.
A chain of half-sarcomeres. Next, consider the behavior of a chain of M half-sarcomeres connected in series. As before, each half-sarcomere is modeled as a parallel bundle of N cross-bridges. We first study the mechanical response of this system at zero temperature. Introduce x_{ki}, the continuous degrees of freedom characterizing the state of the cross-bridges in half-sarcomere k; y_k, the position of the backbone that connects all the cross-bridges of half-sarcomere k; and z_k, the total elongation of half-sarcomere k. The total energy (per cross-bridge) of the chain takes the form

v(x, y, z) = \frac{1}{MN}\sum_{k=1}^{M}\sum_{i=1}^{N}\left[ u_{SS}(x_{ki}) + \frac{1}{2}(y_k - x_{ki})^2 + \frac{\lambda_b}{2}(z_k - y_k)^2 \right],  (2.41)

where x = {x_{ki}}, y = {y_k} and z = {z_k}. In the hard device, the total elongation of the chain is prescribed, Mz = \sum_{k=1}^{M} z_k, where z is the average imposed elongation (per half-sarcomere). In the soft device case, the tension σ is imposed and the energy of the system also includes the energy of the loading device, w = v − σ\sum_{k=1}^{M} z_k. We again characterize the microscopic configuration of each half-sarcomere k by the triplet ξ_k = (p_k, q_k, r_k), denoting as before the fractions of cross-bridges in each of the wells and at the spinodal point, with p_k + q_k + r_k = 1 for all 1 ≤ k ≤ M. The vector ξ = (ξ_1, . . . , ξ_M) then characterizes the configuration of the whole chain. In view of the complexity of the ensuing energy landscape, here we characterize only a subclass of metastable configurations describing homogeneous (affine) states of individual half-sarcomeres. More precisely, we limit our attention to configurations with r_k = 0 and p_k = 1 − q_k ∈ {0, 1} for all 1 ≤ k ≤ M. In this case, a single half-sarcomere can be characterized by a spin variable m_k ∈ {0, 1}. The resulting equilibrium tension-elongation relations in hard and soft devices take the form

\sigma(z, m) = \left[ \frac{1}{\lambda_b} + \frac{1}{M}\sum_{k=1}^{M}\frac{1}{m_k\lambda_2 + (1-m_k)\lambda_1} \right]^{-1}\left[ z + \frac{1}{M}\sum_{k=1}^{M}\frac{m_k\lambda_2}{m_k\lambda_2 + (1-m_k)\lambda_1} \right],  (2.42)

\hat{z}(\sigma, m) = \left[ \frac{1}{\lambda_b} + \frac{1}{M}\sum_{k=1}^{M}\frac{1}{m_k\lambda_2 + (1-m_k)\lambda_1} \right]\sigma - \frac{1}{M}\sum_{k=1}^{M}\frac{m_k\lambda_2}{m_k\lambda_2 + (1-m_k)\lambda_1},  (2.43)

where m = (m_1, . . . , m_M). In Fig. 41 we show the energy and the tension-elongation relation for the system following the global minimum path in a hard device. Observe that the tension-elongation relation contains a series of discontinuous transitions as the order parameter M⁻¹\sum m_k increases monotonically from 0 to 1; their number increases with M while their size decreases. In the limit M → ∞, the relaxed (minimum) energy is convex but not strictly convex, see the interval where the energy depends linearly on the elongation for the case M = 20 in Fig. 41(a), see also Refs. [227; 236]. The corresponding tension-elongation curves [see Fig. 41(b)] exhibit a series of transitions. In contrast to the case of a single half-sarcomere, the limiting behavior of a chain is the same in the soft and hard devices (see the thick line). The obtained analytical results are in full agreement with the numerical simulations reported in Refs. [114; 164; 235; 237]. Fig. 42 illustrates the distribution of elongations of individual half-sarcomeres in the hard device case as the system evolves along the global minimum path. One can see that when the deformation becomes non-affine the population of half-sarcomeres splits into two groups: one group is stretched above the average (top trace, above the diagonal) and the other below the average (bottom trace, below the diagonal).
The numbers beside the curves indicate the number of half-sarcomeres in each group. In the soft device case, the system always remains in the affine state: all half-sarcomeres change conformation at the same moment and therefore the system stays on the diagonal (the dashed lines) in Fig. 42. Assume now that the temperature is different from zero. The partition function for the chain in a soft device can be obtained as the product of individual partition functions,

Z_M(\sigma, \beta) = [Z_s(\sigma, \beta)]^M = \left[\sqrt{\frac{2\pi}{N\beta\lambda_b}}\int \exp\left[-\beta N g(\sigma, x, \beta)\right]dx \right]^M,

which reflects the fact that the half-sarcomeres in this setting are independent. In the hard device, the analysis is more involved because of the total length constraint. In this case we need to compute

Z_M(z, \beta) = \int \exp\left[-\beta N M v(z, x)\right]\,\delta\left[\frac{1}{M}\sum z_k - z\right]dx.  (2.44)

A semi-explicit asymptotic solution can be obtained for the hard device case in the limit β → ∞ and M → ∞. Note first that the partition function depends only on the "average magnetization" m, the fraction of half-sarcomeres in the post-power-stroke conformation. At MN → ∞ we obtain asymptotically (see Refs. [212; 231] for the details)

Z_M(z, \beta) \approx C\,\frac{\varphi(m^*)\,\exp\left[-\beta M N\,\Psi(m^*; z, \beta)\right]}{\left[\beta M N\,\partial^2_m\Psi(m; z, \beta)\big|_{m=m^*}\right]^{1/2}},  (2.45)

where C = (2\pi/\beta)^{[(N+2)M-1]/2}\,N^{1/2 - M}, \varphi(m) = \{[m/\mu_2 + (1-m)/\mu_1]\,[m(1-m)]\}^{-1/2}, and m^* is the minimum of Ψ in the interval ]0, 1[. Using the notations μ_{1,2} = λ_{1,2}λ_b/(λ_{1,2} + λ_b), we can write the expression for the marginal free energy at fixed m in the form

\Psi(m; z, \beta) = \frac{1}{2}\left[\frac{m}{\mu_2} + \frac{1-m}{\mu_1}\right]^{-1}(z + m)^2 + (1-m)v_0 - \frac{1}{2\beta}\left[m\log(1-\lambda_2) + (1-m)\log(1-\lambda_1)\right] + \frac{1}{\beta N}\left[m\log m + (1-m)\log(1-m) + \frac{m}{2}\log(\lambda_2\lambda_b) + \frac{1-m}{2}\log(\lambda_1\lambda_b)\right].  (2.46)

A direct computation of the second derivative of (2.46) with respect to m shows that Ψ is always convex. In other words, our assumption that individual half-sarcomeres respond in an affine way implies that the system does not undergo a phase transition, in agreement with what is expected for a 1D system with short-range interactions. Now we can compute the Helmholtz free energy and the equilibrium tension-elongation relation for a chain in a hard device,

f_\infty(z, \beta) = \Psi(m^*; z, \beta),  (2.47)

\sigma_\infty(z, \beta) = \left(\frac{m^*}{\mu_2} + \frac{1-m^*}{\mu_1}\right)^{-1}(z + m^*).  (2.48)

In the case of a soft device, the Gibbs free energy and the corresponding tension-elongation relation are simply the re-scaled versions of the results obtained for a single half-sarcomere, see Section 2.2. In Fig. 43 we illustrate a typical equilibrium behavior of a chain in a hard device. The increase of temperature enhances the convexity of the energy, as in the case of a single half-sarcomere; however, when the temperature decreases we no longer see the negative stiffness. Instead, when N is sufficiently large, we see a tension-elongation plateau similar to what is observed in experiments on myofibrils, see Fig. 43(b). The obtained results can be directly compared with experimental data. Consider, for instance, the response of a chain with M = 20 half-sarcomeres submitted to a rapid length step. The equilibrium model with realistic parameters predicts in this case a tension-elongation plateau close to the observed T_2 curve, see the dashed line in Fig. 44(a). Our numerical experiments, however, could not reproduce the part of this plateau in the immediate vicinity of the state of isometric contractions.
This may mean that even in the chain placed in a hard device, individual half-sarcomeres end up being loaded in a mixed device and can still experience kinetic trapping. Our stochastic simulations for a chain in a soft device reproduce the whole trapping domain around the state of isometric contractions, see Fig. 44(a). The computed rate of the quick recovery for the chain is shown in Fig. 44(b). We see that the model is able to capture quantitatively the difference between the two loading protocols. However, the hard device response of the chain (see squares) is more sluggish than in the case of a single half-sarcomere. Once again, we see an interval around the state of isometric contractions where our system cannot reach its equilibrium state at the experimental time scale. Note, however, that the rate of relaxation to equilibrium increases with both stretching and shortening, saturating for large applied steps as it was experimentally observed in Ref. [207]. Active rigidity As we have seen in Section 2.2. Purely entropic stabilization is excluded in this case because the temperature alone is not sufficiently high to ensure positive stiffness of individual half-sarcomeres [114]. Here we discuss a possibility that the homogeneity of the myofibril configuration is due to active stabilization of individual half-sarcomeres [224]. We conjecture that metabolic resources are used to modify the mechanical susceptibility of the system and to stabilize configurations that would not have existed in the absence of ATP hydrolysis [239][240][241]. We present the simplest model showing that active rigidity can emerge through resonant non-thermal excitation of molecular degrees of freedom. The idea is to immitate the inverted Kapitza pendulum [242], aside from the fact that in biological systems the inertial stabilization has to be replaced by its overdamped analog. The goal is to show that a macroscopic mechanical stiffness can be controlled at the microscopic scale by a time correlated noise which in biological setting may serve as a mechanical representation of a nonequilibrium chemical reaction [243]. Mean field model To justify the prototypical model with one degree of freedom, we motivate it using the modeling framework developed above. Suppose that we model a half-sarcomere by a parallel array of N cross-bridges attached to a single actin filament following Section 2.2. We represent again attached cross bridges as bistable elements in series with linear springs but now assume additionally that there is a nonequilibrium driving provided through stochastic rocking of the bi-stable elements. More specifically, we replace the potential u SS (x) for individual cross-bridges by u SS (x)x f (t), where f (t) is a correlated noise with zero average simulating out of equilibrium environment, see Ref. [244] for more details. If such a half-sarcomere is subjected to a time dependent deterministic force f ext (t), the dynamics can be described by the following system of nondimensional Langevin equations ẋi = -∂ x i W + √ 2Dξ(t), ν ẏ = -∂ y W, (3.1) where ξ(t) a white noise with the properties ⟨ ξ(t) ⟩ = 0, and ⟨ ξ(t 1 )ξ(t 2 ) ⟩ = δ(t 2t 1 ). Here D is a temperature-like parameter, the analog of the parameter β -1 used in previous sections. The (backbone) variable y, coupled to N fast soft-spin type variables x i through identical springs with stiffness κ 0 , is assumed to be macroscopic, deterministic and slow due to the large value of the relative viscosity ν. 
We write the potential energy in the form W = ∑ N i=1 v(x i , y, t)-f ext y, where v(x, y, t) is the energy (2.30) with a time dependent tilt in x and the function f ext (t) is assumed to be slowly varying. The goal now is to average out fast degrees of freedom x i and to formulate the effective dynamics in terms of a single slow variable y. Note, that the equation for y can be re-written as ν N ẏ = κ 0 ( 1 N N ∑ i=1 x i -y ) + f ext N , (3.2) which reveals the mean field nature of the interaction between y and x i . If N is large, we can replace 1 N ∑ N i=1 x i by ⟨x⟩ using the fact that the variables x i are identically distributed and exchangeable [245]. Denoting ν 0 = ν/N and g ext = f ext /(κ 0 N) and assuming that these variables remain finite in the limit N → ∞, we can rewrite the equation for y in the form ν 0 ẏ = κ 0 [(⟨x⟩ -y) + g ext (t)]. Assume now for determinacy that the function f ext (t) is periodic and choose its period τ 0 in such a way that Γ = ν 0 /κ 0 ≫ τ 0 . We then split the force κ 0 (⟨x⟩y) acting on y into a slow component κ 0 ψ(y) = κ 0 (⟨x⟩y) and a slow-fast component κ 0 ϕ(y, t) = κ 0 (⟨x⟩ -⟨x⟩) where ⟨x⟩ = lim t→∞ (1/t) ∫ t 0 ∫ ∞ -∞ x ρ(x, t) dx dt, and ρ(x, t) is the probability distribution for the variable x. We obtain Γ ẏ = ψ(y)+ϕ(y, t)+ g ext and the next step is to average this equation over τ. To this end we introduce a decomposition y(t) = z(t) + ζ(t), where z is the averaged (slow) motion and ζ is a perturbation with time scale τ. Expanding our dynamic equation in ζ, we obtain, Γ( ż + ζ) = ψ(z) + ∂ z ψ(z)ζ + ϕ(z, t) + ∂ z ϕ(z, t)ζ + g ext . (3.3) Since g ext (t) ≃ τ -1 0 ∫ t+τ 0 t g ext (u) du, we obtain at fast time scale Γ ζ = ϕ(z, t), see Ref. [246] for the general theory of these type of expansions. Integrating this equation between t 0 and t ≤ t 0 + τ 0 at fixed z we obtain ζ(t) -ζ(t 0 ) = Γ -1 ∫ t t 0 ϕ(z(t 0 ), u)du and since ϕ is τ periodic with zero average, we can conclude that ζ(t) is also τ 0 periodic with zero average. If we now formally average (3.3) over the fast time scale τ 0 , we obtain Γ ż = ψ(z) + r + g ext , where r = (Γτ 0 ) -1 ∫ τ 0 ∫ t 0 ∂ z ϕ(z, t)ϕ(z, u) dudt. Given that both ϕ(z, t) and ∂ z ϕ(z, t) are bounded, we can write |r | ≤ (τ 0 /Γ)c ≪ 1, where the "constant" c depends on z but not on τ 0 and Γ. Therefore, if N ≫ 1 and ν/(κ 0 N) ≫ τ 0 , the equation for the coarse grained variable z(t) = τ -1 0 ∫ t+τ 0 t y(u) du can be written in terms of an effective potential (ν/N) ż = -∂ z F + f ext /N. To find the effective potential we need to compute the primitive of the averaged tension F(z) = ∫ z σ(s) ds, where σ(y) = κ 0 [y -⟨x⟩]. The problem reduces to the study of the stochastic dynamics of a variable x(t) described by a dimensionless Langevin equation ẋ = -∂ x w(x, y, t) + √ 2Dξ(t). (3.4) The potential w(x, y, t) = w p (x, t) + v e (x, y) is the sum of two components: w p (x, t) = u SS (x) -x f (t) , mimicking an out of equilibrium environment and v e (x, y) = (κ 0 /2)(xy) 2 , describing the linear elastic coupling of the "probe" with a "measuring device" characterized by stiffness κ 0 . We assume that the energy is supplied to the system through a timecorrelateded rocking force f (t) which is characterized by an amplitude A and a time scale τ. To have analytical results, we further assume that the potential u SS (x) is bi-quadratic, u SS (x) = (1/2) (|x| -1/2) 2 . Similar framework has been used before in the studies of directional motion of molecular motors [247]. 
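Before turning to the phase diagrams, here is a brute-force numerical counterpart of this program: the rocked element (3.4) is simulated directly with the bi-quadratic potential u_SS(x) = (1/2)(|x| − 1/2)², and the averaged tension σ(y) = κ_0[y − ⟨x⟩] is estimated by time averaging. The parameter values and the helper name averaged_x are illustrative assumptions.

import numpy as np

# Sketch: estimating the averaged tension sigma(y) = kappa0*(y - <x>) by direct
# simulation of the rocked bistable element, Eq. (3.4). Illustrative parameters.

rng = np.random.default_rng(1)
kappa0, D = 0.6, 0.05          # coupling stiffness and temperature-like parameter
A, tau = 0.3, 50.0             # amplitude and period of the square-wave rocking
dt, n_steps = 1e-2, 200_000

def du_ss(x):
    # derivative of the bi-quadratic potential u_ss(x) = (1/2)(|x| - 1/2)^2
    return (np.abs(x) - 0.5) * np.sign(x)

def averaged_x(y):
    x, acc = 0.0, 0.0
    for n in range(n_steps):
        f = A if int(2 * n * dt / tau) % 2 == 0 else -A   # square wave, cf. Eq. (3.5)
        drift = -du_ss(x) + f - kappa0 * (x - y)
        x += drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        acc += x
    return acc / n_steps

for y in np.linspace(-1.0, 1.0, 5):
    print(f"y = {y:+.2f}   sigma = {kappa0 * (y - averaged_x(y)):+.4f}")

The adiabatic analysis developed next provides the analytical counterpart of this estimate.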
The effective potential F(z) can be viewed as a nonequilibrium analog of the free energy [248-251]. While in our case the mean-field nature of the model ensures the potentiality of the averaged tension, in a more general setting the averaged stochastic forces may lose their gradient structure, and even the effective "equations of state" relating the averaged forces with the corresponding generalized coordinates may not be well defined [252-257].

Phase diagrams

Suppose first that the non-equilibrium driving is represented by a periodic (P), square-shaped external force

f(t) = A(-1)^{n(t)} \quad \text{with} \quad n(t) = \lfloor 2t/\tau \rfloor,  (3.5)

where the brackets denote the integer part. The Fokker-Planck equation for the time-dependent probability distribution ρ(x, t) reads

\partial_t \rho = \partial_x\left[\rho\,\partial_x w(x, t) + D\,\partial_x\rho\right].  (3.6)

An explicit solution of (3.6) can be found in the adiabatic limit, when the correlation time τ is much larger than the escape time for the bi-stable potential u_SS [132; 258]. The idea is that the time average of the steady-state probability can be computed from the mean of the stationary probabilities with constant driving force (either f(t) = A or f(t) = −A). The adiabatic approximation becomes exact in the special case of an equilibrium system with A = 0, when the stationary probability distribution can be written explicitly: ρ_0(x) = Z⁻¹ exp[−ṽ(x)/D], where Z = \int_{-\infty}^{\infty} \exp[−ṽ(x)/D]\,dx and ṽ(x, z) = (1/2)(|x| − 1/2)² + (κ_0/2)(x − z)². The tension-elongation curve σ(z) can then be computed analytically, since in this case the time average coincides with ⟨x⟩ = \int_{-\infty}^{\infty} x\,ρ_0(x)\,dx. The resulting curve and the corresponding potential F(z) are shown in Fig. 45(a). At zero temperature the equilibrium system with A = 0 exhibits negative stiffness at z = 0, where the effective potential F(z) has a maximum (spinodal state). As temperature increases, we observe a standard entropic stabilization of the configuration z = 0, see Fig. 45(a). By solving the equation ∂_zσ|_{z=0} = 0, we find an explicit expression for the critical temperature D_e = r/[8(1 + κ_0)], where r is a root of the transcendental equation 1 + \sqrt{r/\pi}\,e^{-1/r}/[1 + \mathrm{erf}(1/\sqrt{r})] = r/(2κ_0). The behavior of the roots of the equation σ(z) = −κ_0(⟨x⟩ − z) = 0 at A = 0 is shown in Fig. 46(b), which illustrates a second-order phase transition at D = D_e. In the case of a constant force f ≡ A, the stationary probability distribution is also known (see, e.g., Risken, The Fokker-Planck Equation): ρ_A(x) = Z⁻¹ exp[−(ṽ(x) − Ax)/D], where Z is again the corresponding normalization factor. In the adiabatic approximation we can write the time-averaged stationary distribution in the form ρ_Ad(x) = (1/2)[ρ_A(x) + ρ_{−A}(x)], which gives ⟨x⟩ = (1/2)[⟨x⟩(A) + ⟨x⟩(−A)]. The force-elongation curves σ(z) and the corresponding potentials F(z) are shown in Fig. 45(b). We see the main effect: as the degree of non-equilibrium, characterized by A, increases, not only does the stiffness in the state z = 0, where the original double-well potential u_SS had a maximum, change from negative to positive, as in the case of entropic stabilization, but the effective potential F(z) also develops a new energy well around this point.

[Displaced figure caption: C_A is the tri-critical point, D_e is the point of a second-order phase transition in the passive system; the "Maxwell line" for a first-order phase transition in the active system is shown by dots; here κ_0 = 0.6. Adapted from Ref. [224].]
We interpret this phenomenon as the emergence of active rigidity, because the new equilibrium state becomes possible only at a finite value of the driving parameter A, while the temperature D can be arbitrarily small. The behavior of the roots of the equation σ(z) = −κ_0(⟨x⟩ − z) = 0 at A ≠ 0 is shown in Fig. 46(a), which now illustrates a first-order phase transition. The full steady-state regime map (dynamic phase diagram) summarizing the results obtained in the adiabatic approximation is presented in Fig. 47(a). There, the "paramagnetic" phase I describes the regimes where the effective potential F(z) is convex, the "ferromagnetic" phase II is a bi-stability domain where the potential F(z) has a double-well structure, and phase III is the resonant (Kapitza) phase in which the driving stabilizes an additional energy well around z = 0. This last effect is a manifestation of stochastic resonance, which is beyond the reach of the adiabatic approximation. Force-elongation relations characterizing the mechanical response of the system at different points on the (A, D) plane [see Fig. 47(b)] are shown in Fig. 48, where the upper insets illustrate the typical stochastic trajectories and the associated cycles in {⟨x(t)⟩, f(t)} coordinates. We observe that while in phase I thermal fluctuations dominate the periodic driving and undermine the two-well structure of the potential, in phase III the jumps between the two energy wells are fully synchronized with the rocking force. In phase II the system shows intermediate behavior with uncorrelated jumps between the wells. In Fig. 48(d) we illustrate the active component of the force σ_a(z) = σ(z; A) − σ(z; 0) in phases I, II and III. A salient feature of Fig. 48(d) is that active force generation is significant only in the resonant (Kapitza) phase III. A biologically beneficial plateau (tetanus) is a manifestation of the triangular nature of a pseudo-well in the active landscape F_a(z) = \int^z σ_a(s)\,ds; note also that the only slightly bigger (f, ⟨x⟩) hysteresis cycle in phase III, reflecting a moderate increase of the extracted work, results in a considerably larger active force. It is also of interest that the largest active rigidity is generated in the state z = 0, where the active force is equal to zero. If we now estimate the non-dimensional parameters of the model by using the data on skeletal muscles, we obtain A = 0.5, D = 0.01, τ = 100 [224]. This means that muscle myosins in stall conditions (the physiological regime of isometric contractions) may be functioning in the resonant phase III. The model can therefore provide an explanation of the observed stability of skeletal muscles in the negative stiffness regime [99]; a similar stabilization mechanism may also be assisting the titin-based force generation at long sarcomere lengths [262]. The results presented in this Section for the case of periodic driving were shown in Ref. [224] to be qualitatively valid also for the case of dichotomous noise. However, the Ornstein-Uhlenbeck noise was unable to generate a nontrivial Kapitza phase. To conclude, the prototypical model presented in this Section shows that, by controlling the degree of non-equilibrium in the system, one can stabilize apparently unstable or marginally stable mechanical configurations and in this way modify the structure of the effective energy landscape (when it can be defined). The associated pseudo-energy wells of resonant nature may be crucially involved not only in muscle contraction but also in hair cell gating.

Active force generation

In this Section we address the slow time scale phase of force recovery, which relies on the attachment-detachment processes [79].
We review two types of models. In models of the first type, the active driving comes from the interaction of the myosin head with the actin filament, while the power-stroke mechanism remains passive [269]. In models of the second type, the active driving resides in the power-stroke machinery [244]. The latter model is fully compatible with the biochemical Lymn-Taylor cycle of muscle contraction.

Contractions driven by the attachment-detachment process

The physiological perspective that the power-stroke is the driving force of active contraction was challenged by the discovery that the myosin catalytic domain can operate as a Brownian ratchet, which means that it can move and produce contraction without any assistance from the power-stroke mechanism [136; 137; 142]. It is then conceivable that contraction is driven directly by the attachment-detachment machinery, which can rectify the correlated noise and select a directionality following the polarity of the actin filaments [60; 143]. To represent the minimal set of variables characterizing skeletal Myosin II in both attached and detached states (the position of the motor domain, the configuration of the lever domain and the stretch state of the series elastic element), we use three continuous coordinates [269]. To be maximally transparent, we adopt the simplest representation of the attachment-detachment process provided by the rocking Brownian ratchet model [132; 247; 270; 271]. We again interpret a half-sarcomere as an HS-type parallel bundle of N cross-bridges. We assume, however, that now each cross-bridge is a three-element chain containing a linear elastic spring, a bi-stable contractile element, and a molecular motor representing the ATP-regulated attachment-detachment process, see Fig. 49. The system is either loaded by a force f_ext representing a cargo or constrained by the prescribed displacement of the backbone. The elastic energy of the linear spring can be written as v_e = (1/2)κ_0(z − y − ℓ)², where κ_0 is the elastic modulus and ℓ is the reference length. The energy u_SS of the bi-stable mechanism is taken to be three-parabolic,

u_{SS}(y-x) = \begin{cases} \frac{1}{2}\kappa_1(y-x)^2, & y-x \ge b_1 \\ -\frac{1}{2}\kappa_3(y-x-b)^2 + c, & b_2 \le y-x < b_1 \\ \frac{1}{2}\kappa_2(y-x-a)^2 + v_0, & y-x < b_2 \end{cases}  (4.1)

where κ_{1,2} are the curvatures of the energy wells representing the pre-power-stroke and post-power-stroke configurations, respectively, and a < 0 is the characteristic size of the power stroke. The bias v_0 is again chosen to ensure that the two wells have the same energy in the state of isometric contraction. The energy barrier is characterized by its position b, its height c and its curvature κ_3. The values of the parameters b_1 and b_2 are chosen to ensure the continuity of the energy function. We model the myosin catalytic domain as a Brownian ratchet of Magnasco type [132]. More specifically, we view it as a particle moving in an asymmetric periodic potential while being subjected to a correlated noise. The periodic potential is assumed to be piecewise linear in each period,

\Phi(x) = \begin{cases} \frac{Q}{\lambda_1 L}(x - nL), & 0 < x - nL < \lambda_1 L \\ \frac{Q}{\lambda_2} - \frac{Q}{\lambda_2 L}(x - nL), & \lambda_1 L < x - nL < L \end{cases}  (4.2)

where Q is the amplitude, L is the period, λ_1 − λ_2 is the measure of the asymmetry, and λ_1 + λ_2 = 1. The variable x marks the location of a particle in the periodic energy landscape: the head is attached if x is close to one of the minima of Φ(x) and detached if it is close to one of the maxima.
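For concreteness, the sketch below writes the two potentials (4.1) and (4.2) as plain functions. The numerical values, including the matching points b_1, b_2 and the asymmetry λ_1, λ_2, are illustrative assumptions chosen only to make the branches join continuously; they are not the parameters identified in Ref. [269].

import numpy as np

# Sketch: the bi-stable power-stroke potential (4.1) and the asymmetric ratchet
# potential (4.2). Illustrative parameters chosen for continuity of u_ss.

a, v0 = -1.0, 0.0                    # stroke size (a < 0) and energetic bias
k1, k2, k3 = 1.0, 1.0, 4.0           # curvatures of the wells and of the barrier
b, c = -0.5, 0.2                     # barrier position and height
b1, b2 = -0.2, -0.8                  # matching points (continuity of u_ss)

def u_ss(s):
    """Three-parabolic double-well potential of Eq. (4.1), s = y - x."""
    return np.where(s >= b1, 0.5 * k1 * s**2,
           np.where(s >= b2, -0.5 * k3 * (s - b)**2 + c,
                    0.5 * k2 * (s - a)**2 + v0))

Q, L, lam1, lam2 = 1.0, 1.0, 0.2, 0.8   # ratchet amplitude, period, asymmetry

def phi(x):
    """Piecewise-linear asymmetric periodic potential of Eq. (4.2)."""
    u = np.mod(x, L)
    return np.where(u < lam1 * L, Q * u / (lam1 * L), Q / lam2 - Q * u / (lam2 * L))

# quick continuity checks at the matching points
assert abs(u_ss(b1 - 1e-9) - u_ss(b1 + 1e-9)) < 1e-6
assert abs(u_ss(b2 - 1e-9) - u_ss(b2 + 1e-9)) < 1e-6
assert abs(phi(lam1 * L - 1e-9) - phi(lam1 * L + 1e-9)) < 1e-6

These are the ingredients entering the coupled Langevin system written next.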
The system of N cross-bridges of this type connected in parallel is modeled by the system of Langevin equations [269]

\nu_x \dot{x}_i = -\Phi'(x_i) + u'_{SS}(y_i - x_i) + f(t + t_i) + \sqrt{2D_x}\,\xi_x(t),
\nu_y \dot{y}_i = -u'_{SS}(y_i - x_i) - \kappa_0(y_i - z - \ell_i) + \sqrt{2D_y}\,\xi_y(t),
\nu_z \dot{z} = \sum_{i=1}^{N} \kappa_0(y_i - z - \ell_i) + f_{ext} + \sqrt{2D_z}\,\xi_z(t),  (4.3)

with D_{x,y,z} = ν_{x,y,z} k_b T, where ν_{x,y,z} denote the relative viscosities associated with the corresponding variables, and ξ is a standard white noise. The correlated component of the noise f(t), imitating the activity of the ATP, is assumed to be periodic and piecewise constant, see Eq. (3.5). Since our focus here is on active force generation rather than on active oscillations, we de-synchronize the dynamics by introducing phase shifts t_i, assumed to be independent random variables uniformly distributed in the interval [0, T]; we also allow the pre-strains ℓ_i to be random and distribute them in the intervals [iL − a/2, iL + a/2]. Quenched disorder disfavors the coherent oscillations observed under some special conditions (e.g. Ref. [166]). While we leave such collective effects outside our review, several comprehensive expositions are available in the literature, see Refs. [12; 36; 38; 112; 114; 143; 157; 158; 165; 166; 171; 173; 272-282]. To illustrate the behavior of individual mechanical units, we first fix the parameter z = 0 and write the total energy of a cross-bridge as a function of the two remaining mechanical variables y and x:

v(x, y) = \Phi(x) + u_{SS}(y - x) + v_e(-y).  (4.4)

The associated energy landscape is shown in Fig. 50, where the upper two local minima A and B indicate the pre-power-stroke and post-power-stroke configurations of a motor attached in one position on the actin potential, while the two lower local minima A′ and B′ correspond to the pre-power-stroke and post-power-stroke configurations of a motor shifted to a neighboring attachment position. We associate the detached state with an unstable position around the maximum separating the minima (A, B) and (A′, B′), see Ref. [269] for more details. In Fig. 51 we show the results of numerical simulations of isotonic contractions at f_ext = 0.5 T_0, where T_0 is the stall tension. One can see that the catalytic domain of an individual cross-bridge advances along the actin filament in discrete steps. Observe that the position of the backbone can be considered stationary during the recharging of the power stroke. In this situation, the key factor for the possibility of recharging (after the variable x has overcome the barrier in the periodic potential) is that the total energy v(x, y) has a minimum when the snap-spring is in the pre-power-stroke state. The corresponding analytical condition is (Q/v_0) > (λ_1 L)/a, which places an important constraint on the choice of parameters [269]. A direct comparison of the simulated mechanical cycle with the Lymn-Taylor cycle (see Fig. 2) shows that while the two attached configurations are represented in this model adequately, the detached configurations appear only as transients. In fact, one can see that the (slow) transition B → A′ represents a combined description of the detachment, of the power-stroke recharge and then of another attachment. Since in the actual biochemical cycle such a transition is described by at least two distinct chemical states, the ratchet-driven model is only in partial agreement with the biochemical data.

Contractions driven by the power stroke

We now consider the possibility that acto-myosin contractions are propelled directly through a conformational change.
The model where the power-stroke is the only active mechanism driving muscle contraction was developed in Ref. [244]. To justify such change of the modeling perspective, we recall that in physiological literature active force generation is largely attributed to the power-stroke which is perceived as a part of active rather than passive machinery [153]. This opinion is supported by observations that both the power-stroke and the reverse-power-stroke can be induced by ATP even in the absence of actin filaments [71], that contractions can be significantly inhibited by antibodies which impair lever arm activity [283], that sliding velocity in mutational myosin forms depends on the lever arm length [192] and that the directionality can be reversed as a result of modifications in the lever arm domain [284; 285]. Although the simplest models of Brownian ratchets neglect the conformational change in the head domain, some phases of the attachment-detachment cycle have been interpreted as a power-stroke viewed as a part of the horizontal shift of the myosin head [144; 286]. In addition, ratchet models were considered with the periodic spatial landscape supplemented by a reaction coordinate, representing the conformational change [287; 288]. In all these models, however, power stroke was viewed as a secondary element and contractions could be generated even with the disabled power stroke. The main functionality of the power-stroke mechanism was attributed to fast force recovery which could be activated by loading but was not directly ATP-driven [74; 99; 289]. The apparently conflicting viewpoint that the power-stroke mechanism consumes metabolic energy remains, however, the underpinning of the phenomenological chemo-mechanical models that assign active roles to both the attachmentdetachment and the power-stroke [86; 102]. These models pay great attention to structural details and in their most comprehensive versions faithfully reproduce the main experimental observations [68; 115; 290]. In an attempt to reach a synthetic description, several hybrid models, allowing chemical states to coexist with springs and forces, have been also proposed, see Refs. [112; 152; 291]. These models, however, still combine continuous dynamics with jump transitions which makes the precise identification of structural analogs of the chemical steps and the underlying micro-mechanical interactions challenging [154]. The model. Here, following Ref. [244], we sketch a mechanistic model of muscle contractions where power stroke is the only active agent. To de-emphasize the ratchet mechanism discussed in the previous section, we simplify the real picture and represent actin filaments as passive, non-polar tracks. The power-stroke mechanism is represented again by a symmetric bi-stable potential and the ATP activity is modeled as a center-symmetric correlated force with zero average acting on the corresponding configurational variable. A schematic representation of the model for a single crossbridge is given in Fig. 52(b), where x is the observable position of a myosin catalytic domain, yx is the internal variable characterizing the phase configuration of the power stroke element and d is the distance between the myosin head and the actin filament. Through the variable d we can take into account that when the lever arm swings, the interaction of the head with the binding site weakens, see Fig. 
52 To mimic the underlying steric interaction, we assume that when a myosin head executes the power-stroke, it moves away from the actin filament and therefore the control function Ψ(yx) progressively switches off the actin potential, see Fig. Then the overdamped stochastic dynamics can be described by the system of dimensionless Langevin equations ẋ = -∂ x G(x, y) -f (t) + √ 2Dξ x (t) ẏ = -∂ y G(x, y) + f (t) + √ 2Dξ y (t). (4.7) Here ξ(t) is the standard white noise with ⟨ξ i (t)⟩ = 0, and ⟨ξ i (t)ξ j (s)⟩ = δ i j δ(ts) and D is a dimensionless measure of temperature; for simplicity the viscosity coefficients are assumed to be the same for variables x and y. The time dependent force couple f (t) with zero average represents a correlated component of the noise. In the computational experiments a periodic extension of the symmetric triangular potential Φ(x) with amplitude Q and period L was used, see Fig. 53(a). The symmetric potential u SS (yx) was taken to be bi-quadratic with the same stiffness k in both phases and the distance between the bottoms of the wells denoted by a, see Fig. 53(b). The correlated component of the noise f (t) was described by a periodic extension of a rectangular shaped function with amplitude A and period τ, Fig. 53(c). Finally, the steric control ensuring the gradual switch of the actin potential is described by a step function Ψ(s) = (1/2) [1 -tanh (s/ε)] , (4.8) where ε is a small parameter, see Fig. 53(d). The first goal of any mechanical model of muscle contraction is to generate a systematic drift v = lim t→∞ ⟨x(t)⟩/t without applying a biasing force. The dependence of the average velocity v on the parameters of the model is summarized in Fig. 54. It is clear that the drift in this model is exclusively due to A 0. When A is small, the drift velocity shows a maximum at finite temperatures which implies that the system exhibits stochastic resonance [294]. At high amplitudes of the ac driving, the motor works as a purely mechanical ratchet and the increase of temperature only worsens the performance [136; 137; 143]. One can say that the system (4.7) describes a power-strokedriven ratchet because the correlated noise f (t) acts on the relative displacement yx. It effectively "rocks" the bistable potential and the control function Ψ(yx) converts such "rocking" into the "flashing" of the periodic potential Φ(x). It is also clear that the symmetry breaking in this problem is imposed exclusively by the asymetry of the coupling function Ψ(y-x). Various other types of rocked-pulsated ratchet models have been studied in Refs. [295; 296]. The idea that the source of non-equilibrium in Brownian ratchets is resting in internal degrees of freedom [297; 298] originated in the theory of processive motors [299][300][301][302]. For instance, in the description of dimeric motors it is usually assumed that ATP hydrolysis induces a conformational transformation which then affects the position of the motor legs [303]. Here the same idea is used to describe a non-processive motor with a single leg that remains on track due to the presence of a thick filament. By placing emphasis on active role of the conformational change in non-processive motors the model brings closer the descriptions of porters and rowers as it was originally envisaged in Ref. [304]. 4.2.2. Hysteretic coupling The analysis presented in Ref. 
[244] has shown that in order to reproduce the whole Lymn-Taylor cycle, the switchings in the actin potential must take place at different values of the variable yx depending on the direction of the conformational change. In other words, we need to replace the holonomic coupling (4.6) by the memory operator d{x, y} = Ψ{y(t)x(t)} (4.9) whose output depends on whether the system is on the "striking" or on the "recharging" branch of the trajectory, see Fig. 55. Such memory structure can be also described by a rate independent differential relation of the form ḋ = Q(x, y, z) ẋ + R(x, y, d) ẏ, which makes the model non-holonomic. Using (4.9) we can rewrite the energy of the system as a functional of its history y(t) and x(t) G{x, y} = Ψ{y(t)x(t)}Φ(x) + u SS (yx). (4.10) In the Langevin setting (4.7), the history dependence may mean that the underlying microscopic stochastic process is non-Markovian (due to, say, configurational pinning [305]), or that there are additional non-thermalized degrees of freedom that are not represented explicitly, see Ref. [306]. In general, it is well known that the realistic feedback implementations always involve delays [307]. To simulate our hysteretic ratchet numerically we used two versions of the coupling function (4.8) shifted by δ with the branches Ψ(yx ± δ) blended sufficiently far away from the hysteresis domain, see Fig. 55. Our numerical experiments show that the performance of the model is not sensitive to the shape of the hysteresis loop and depends mostly on its width characterized by the small parameter δ. In Fig. 56 we illustrate the "gait" of the ensuing motor. The center of mass advances in steps and during each step the power-stroke mechanism gets released and then recharged again, which takes place concurrently with attachmentdetachment. By coupling the attached state with either pre-or post-power-stroke state, we can vary the directionality of the motion. The average velocity increases with the width of the hysteresis loop which shows that the motor can extract more energy from the coupling mechanism with longer delays. The results of the parametric study of the model are summarized in Fig. 57. The motor can move even in the absence of the correlated noise, at A = 0, because the nonholonomic coupling (4.10) breaks the detailed balance by itself. At finite A the system can use both sources of energy (hysteretic loop and ac noise) and the resulting behavior is much richer than in the non-hysteretic model, see Ref. [244] for more details. Lymn-Taylor cycle. The mechanical "stroke" in the space of internal variables (d, yx) can be now compared with the Lymn-Taylor acto-myosin cycle [59] shown in Fig. 2 and in the notations of this Section in Fig. 58(a). We recall that the chemical states constituting the Lymn-Taylor cycle have been linked to the structural configurations (obtained from crystallographic reconstructions): A(attached, pre-power-stroke → AM*ADP*Pi), B(attached, post-powerstroke → AM*ADP), C(detached, post-power-stroke → M*ATP), D(detached, pre-power-stroke → M*ADP*Pi). In the discussed model the jump events are replaced by continuous transitions and the association of chemical states with particular regimes of stochastic dynamics is not straightforward. In Fig. 58(b), we show a fragment of the averaged trajectory of a steadily advancing motor projected on the (x, yx) plane. In Fig. 58(c) the same trajectory is shown in the (x, yx, d) space with fast advances in the d direction intentionally schematized as jumps. 
By using the same letters A, B, C, D as in Fig. 58(a) we can visualize a connection between the chemical/structural states and the transient mechanical configurations of the advancing motor. Suppose, for instance, that we start at point A corresponding to the end of the negative cycle of the ac driving f (t). The system is in the attached, pre-power-stroke state and d = 1. As the sign of the force f (t) changes, the motor undergoes a power-stroke and reaches point B while remaining in the attached state. When the configurational variable y-x passes the detachment threshold, the myosin head detaches which leads to a transition from point B to B ′ on the plane d = 0. Since the positive cycle of the force f (t) continues, the motor completes the power-stroke by moving from B ′ to point C. At this moment, the rocking force changes sign again which leads to recharging of the power-stroke mechanism in the detached state, described in Fig. 58(a) as a transition from C to D. In point D, the variable yx reaches the attachment threshold. The myosin head reattaches and the system moves to point D ′ where d = 1 again. The recharging continues in the attached state as the motor evolves from D ′ to a new state A, shifted by one period. One can see that the chemical states constituting the minimal enzyme cycle can be linked to the mechanical configurations traversed by this stochastic dynamical system. The detailed mechanical picture, however, is more complicated than in the prototypical Lymn-Taylor scheme. In some stages of the cycle one can use the Kramers approximation to build a description in terms of a discrete set of chemical reactions, however, the number of such reactions should be larger than in the minimal Lymn-Taylor model. In conclusion, we mention that the identification of the chemical states, known from the studies of the prototypical catalytic cycle in solution, with mechanical states, is a precondition for the bio-engineering reproduction of a wide range of cellular processes. In this sense, the discussed schematization of the contraction phenomenon can be viewed as a step towards building engineering devices imitating actomyosin enzymatic activity. Force-velocity relations. The next question is how fast such motor can move against an external cargo. To answer this question we assume that the force f ext acts on the variable y which amounts to tilting of the potential (4.10) along the y direction G{x, y} = Ψ{y(t)x(t)}Φ(x) + u SS (yx)y f ext . (4.11) A stochastic system with energy (4.11) was studied numerically in Ref. [244] and in Fig. 59 we illustrate the obtained forcevelocity relations. The quadrants in the ( f ext , v) plane where R = f ext v > 0 describe dissipative behavior. In the other the other two quadrants, where R = f ext v < 0, the system shows anti-dissipative behavior. Observe that at low temperatures the convexity properties of the force-velocity relations in active pushing and active pulling regimes are different. In the case of pulling the typical force-velocity relation is reminiscent of the Hill's curve describing isotonic contractions, see Ref. [79]. In the case of pushing, the force-velocity relation can be characterized as convex-concave and such behavior has been also recorded in muscles, see Refs. [161;308;309]. The asymmetry is due to the dominance of different mechanisms in different regimes. For instance, in the pushing regimes, the motor activity fully depends on ac driving and at large amplitudes of the driving the system performs as a mechanical ratchet. 
Instead, in the pulling regimes, associated with small amplitudes of external driving, the motor advances because of the delayed feedback. Interestingly, the dissimilarity of convexity properties of the force-velocity relations in pushing and pulling regimes has been also noticed in the context of cell motility where actomyosin contractility is one of the two main driving forces, see Ref. [310]. Descending limb In this Section, following Ref. [238], we briefly address one of the most intriguing issues in mesoscopic muscle mechanics: an apparently stable behavior on the "descending limb" which is a section of the force-length curve describing isometrically tetanized muscle [17-19; 39]. As we have seen in the previous Sections, the active force f generated by a muscle in a hard (isometric) device depends on the number of pulling cross-bridge heads. The latter is controlled by the filament overlap which may be changed by the (pre-activation) passive stretch ∆ℓ. A large number of experimental studies have been devoted to the measurement of the isometric tetanus curve f (∆ℓ), see Fig. 60 and Fig. 3. Since the stretch beyond a certain limit would necessarily decrease the filament overlap, the active component of f (∆ℓ) must contain a segment with a negative slope known as the "descending limb" [78; 311-315]. The negative stiffness associated with active response is usually corrected by the positive stiffness provided by passive crosslinkers that connect actin and myosin filaments. However, for some types of muscles the total force-length relation f (∆ℓ) describing active and passive elements connected in parallel, still has a range where force decreases with elongation. It is this overall negative stiffness that will be the focus of the following discussion. If the curve f (∆ℓ) is interpreted as a description of the response of the "muscle material" shown in Fig. 61, the softening behavior associated with negative overall stiffness should lead to localization instability and the development of strain inhomogeneities [227; 316]. In terms of the observed quantities, the instability would mean that any initial imperfection would cause a single myosin filament to be pulled away from the center of the activated half-sarcomeres. Some experiments seem to be indeed consistent with non-uniformity of the Z-lines spacing, and with random displacements of the thick filaments away from the centers of the sarcomeres [225; 312; 314; 317; 318]. The nontrivial half-sarcomeres length distribution can be also blamed for the observed disorder and skewing [319; 320]. The link between non-affine deformation and the negative stiffness is also consistent with the fact that the progressive increase of the range of dispersion in half-sarcomere lengths, associated with a slow rise of force during tetanus (creep phase), was observed mostly around the descending limb [321][322][323], even though the expected ultimate strain localization leading to failure was not recorded. A related feature of the muscle response on the descending limb is the non-uniqueness of the isometric tension, which was shown to depend on the pathway through which the elongation is reached. Experiments demonstrate that when a muscle fiber is activated at a fixed length and then suddenly stretched while active, the tension first rises and then falls without reaching the value that the muscle generates when stimulated isometrically [320; [324][325][326][327][328][329][330]. 
The difference between tetani subjected to such post-stretch and the corresponding isometric tetani reveals a positive instantaneous stiffness on the descending limb. Similar phenomena have been observed during sudden shortening of the active muscle fibers: if a muscle is allowed to shorten to the prescribed length, it develops less tension than during direct tetanization at the final length. All these puzzling observations have been discussed extensively in the literature interpreting half-sarcomeres as softening elastic springs [54; 78; 331-334]. The instability of such a spring chain on the descending limb was already recognized by Hill [331], and various aspects of this instability were later studied in Refs. [332; 335]. It is broadly believed that a catastrophic failure in this system is to be expected but is not observed because of the anomalously slow dynamics [313; 320; 336-338]. In a dynamical version of the model of a chain with softening springs, each contractile component is additionally bundled with a dashpot characterized by a realistic (Hill-Katz) force-velocity relation [226; 313; 320; 336-339]. A variety of numerical tests in such a dynamic setting demonstrated that around the descending limb the half-sarcomere configuration can become non-uniform, but only on a time scale which is unrealistically long. Such an over-damped dynamic model was shown to be fully compatible with the residual force after stretch on the descending limb, and with the associated deficit of tension after shortening. These simulations, however, left unanswered the question about the fundamental origin of the multi-valuedness of the muscle response around the descending limb. For instance, it is still debated whether such non-uniqueness is a property of individual half-sarcomeres or a collective property of the whole chain. It also remains unclear how the local (microscopic) inhomogeneity of a muscle myofibril can coexist with the commonly accepted idea of a largely homogeneous response at the macro-level. To address these questions we revisit here the one-dimensional chain model with softening springs reinforced by parallel (linear) elastic springs, see Figs. 61 and 62. A formal analysis [238], following a similar development in the theory of shape memory alloys [227], shows that this mechanical system has an exponentially large (in N) number of configurations with equilibrated forces; see an illustration for small N in Fig. 63. Our goal will be to explore the consequences of the complexity of the properly defined energy landscape. Pseudo-elastic energy. The physical meaning of the energy associated with the parallel passive elements is clear, but the challenge is to associate an energy function with the active elements. In order to generate active force, motors inside the active element receive and dissipate energy; however, this is not the energy we need to account for. As we have already seen, active elements possess their own passive mechanical machinery which is loaded endogenously by molecular motors. Therefore some energy is stored in these passive structures. For instance, we can account for the elastic energy of attached springs and also consider the energy of de-bonding. A transition from one tetanized state to another leads to a change in the stored energy of these passive structures.
Suppose that to make an elongation dℓ along the tetanus, the external force f (ℓ) must perform the work f dℓ = dW where W(ℓ) is the energy of the passive structures that accounts not only for elastic stretching but also for inelastic effect associated with the changes in the number of attached cross-bridges. By using the fact that the isometric tetanus curve f (ℓ) has a up-down-up structure we can conclude that the effective energy function W(ℓ) must have a double-well structure. If we subtract the contribution due to parallel elasticity W p (ℓ), we are left with the active energy W a (ℓ), which will then have the form of a Lennard-Jones potential. Shortening below the inflection point of this potential would lead to partial "neutralization" of cross-bridges, and as a result the elastic energy of contributing pullers progressively diminishes. Instead, we can assume that when the length increases beyond the inflection point (point of optimal overlap), the system develops damage (debonding) and therefore the energy increases. After all bonds are broken, the energy of the active element does not change any more and the generated force becomes equal to zero. Local model. Consider now a chain of half sarcomeres with nearest neighbor interactions and controlled total length, see Fig. 61. Suppose that the system selects mechanical configurations where the energy invested by pullers in loading the passive sub-structures is minimized. The energy minimizing configurations will then deliver an optimal trade-off between elasticity and damage in the whole ensemble of contractile units. This assumption is in agreement with the conventional interpretation of how living cells interact with an elastic environment. For instance, it is usually assumed that active contractile machinery inside a cell rearranges itself in such a way that the generated elastic field in the environment minimizes the elastic energy [340; 341]. The analysis of the zero temperature chain model for a myofibril whose series elements are shown in Fig. 62 confirms that the ensuing energy landscape is rugged, see Ref. [238]. The possibility of a variety of evolutionary paths in such a landscape creates a propensity for history dependence, which, in turn, can be used as an explaination of both the "permanent extra tension" and the "permanent deficit of tension" observed in areas adjacent to the descending limb. The domain of metastability on the force-length plane, see Fig. 63, is represented by a dense set of stable branches with a fixed degree of inhomogeneity. Note that in this system the negative overall slope of the force-length relation along the global minimum path can be viewed as a combination of a large number of micro-steps with positive slopes. Such "coexistence" of the negative averaged stiffness with the positive instantaneous stiffness, first discussed in Ref. [332], can be responsible for the stable performance of the muscle fiber on the descending limb. Observe, however, that the strategy of global energy minimization contradicts observations because the reported negative overall stiffness is incompatible with the implied convexification of the total energy. Moreover, the global minimization scenario predicts considerable amount of vastly over-stretched (popped) half-sarcomeres that have not been seen in experiments. We are then left with a conclusion that along the isometric tetanus at least some of the active, nonaffine configurations correspond to local rather than global minima of the stored energy. 
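The combinatorial structure of this energy landscape is easy to reproduce in a toy calculation. The sketch below uses an illustrative bi-parabolic caricature of the energies (not the actual potentials of Ref. [238]) to enumerate the equilibrium branches of a hard-device chain of M units, each consisting of a double-well "active" spring in parallel with a linear spring. Since in equilibrium every unit carries the same force, a configuration is labeled by the number k of units sitting in the second well, and the 2^M metastable configurations collapse onto M + 1 force branches that differ only by permutations.

```python
# Minimal sketch (assumed bi-parabolic energies and parameters, not the model of Ref. [238]):
# metastable branches of a hard-device chain of M half-sarcomere units, each a double-well
# spring (wells at c1, c2, energy bias e2) in parallel with a linear spring of stiffness kp.
import numpy as np
from math import comb

M, kp = 10, 0.3
c1, c2, e2 = 0.0, 1.0, 0.2                 # illustrative well positions and energy bias

def branch(k, Z):
    """Force and total energy of the branch with k units in well 2 at total elongation Z."""
    csum = k * c2 + (M - k) * c1
    f = ((1.0 + kp) * Z - csum) / M        # common force carried by every unit
    centers = np.array([c1] * (M - k) + [c2] * k)
    offsets = np.array([0.0] * (M - k) + [e2] * k)
    z = (f + centers) / (1.0 + kp)         # elongation of each unit at equal force
    energy = np.sum(0.5 * (z - centers)**2 + offsets + 0.5 * kp * z**2)
    return f, energy

if __name__ == "__main__":
    for Z in (0.0, 0.5 * M, 1.0 * M, 1.5 * M):
        data = [branch(k, Z) for k in range(M + 1)]        # all M+1 force branches
        k_min = int(np.argmin([e for _, e in data]))       # global energy minimum
        print(f"Z/M = {Z/M:4.2f}: global minimum at k = {k_min} "
              f"(degeneracy C(M,k) = {comb(M, k_min)}), force = {data[k_min][0]:+.3f}")
```

Plotting f against Z for every k reproduces the picture of a dense fan of positively sloped branches, while following only the global minimum over k gives the sawtooth path against which the metastable (history-dependent) responses discussed next can be compared.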
A possible representation of the experimentally observed tetanus curve as a combination of local and global minimization segments is presented by a solid thick line in Fig. 63. In view of the quasi-elastic nature of the corresponding response, it is natural to associate the ascending limb of the tetanus curve at small levels of stretch with the homogeneous (affine) branch of the global minimum path (segment AB in Fig. 63). Assume that around the point where the global minimum configuration becomes non-affine (point B in Fig. 63), the system remains close to the global minimum path. Then, the isometric tetanus curve forms a plateau separating ascending and descending limbs (segment between points B and C in Fig. 63). Such plateau is indeed observed in experiments on myofibrils and is known to play an important physiological role ensuring robustness of the active response. We can speculate that a limited mixing of "strong" and "weak" (popped) halfsarcomeres responsible for this plateau can be confined close to the ends of a myofibril while remaining almost invisible in the bulk of the sample. To account for the descending limb, we must assume that as the length of the average half-sarcomere increases beyond the end of the plateau (point C in Fig. 63), the tetanized myofibril can no longer reach the global minimum of the stored energy. To match observations we assume that beyond point C in Fig. 63 the attainable metastable configurations are characterized by the value of the active force, which deviates from the Maxwell value and becomes progressively closer to the value generated by the homogeneous configurations as we approach the state of no overlap (point D). The numerical simulations show [238] that the corresponding non-affine configurations can be reached dynamically as a result of the instability of a homogeneous state. One may argue that such, almost affine metastable configurations, may be also favored due to the presence of some additional mechanical signaling , which takes a form of inter-sarcomere stiffness or next to nearest neighbor (N N N) interaction. As the point D in Fig. 63 is reached, all cross-bridges are detached and beyond this point the myofibril is supported exclusively by the passive parallel elastic elements (segment DE). Since all the metastable non-affine states involved in this construction have an extended range of stability, the application of a sudden deformation will take the system away from the isometric tetanus curve BCD in Fig. 63. It is then difficult to imagine that the isometric relaxation, following such an eccentric loading, will allow the system to stabilize again exactly on the curve BCD. Such "metastable" response would be consistent with residual force enhancement observed not only around the descending limb but also above the optimal (physiological) plateau and even around the upper end of the ascending limb. It is also consistent with the observations showing that the residual force enhancement after stretch is independent of the velocity of the stretch, that it increases with the amplitude of the stretch and that it is most pronounced along the descending limb. Nonlocal model. While the price of stability in this system appear to be the emergence of the limited microscopic non-uniformity in the distribution of half sarcomere lengths, we now argue that it may be still compatible with the macroscopic (averaged) uniformity of the whole myofibril [319]. 
To support this statement we briefly discuss here a model of a myofibril which involves long range mechanical signaling between half-sarcomeres via the surrounding elastic medium, see Ref. [238]. The model is illustrated in Fig. 64. It includes two parallel elastically coupled chains. One of the chains, containing double well springs, is the same as in the local model. The other chain contains elements mimicking additional elastic interactions in the myofibril of possibly non-one-dimensional nature; it is assumed that the corresponding shear (leaf) springs are linearly elastic. The ensuing model is nonlocal and involves competing interactions: the double-well potential of the snap-springs favors sharp boundaries between the "phases", while the elastic foundation term favors strain uniformity. As a result of this competition the energy minimizing state can be expected to deliver an optimal trade off between the uniformity at the macro-scale and the non-uniformity (non-affinity) at the microscale. The nonlocal extension of the chain model lacks the permutation degeneracy and generates peculiar microstructures with fine mixing of shorter half sarcomeres located on the ascending limb of the tension-length curve and longer half sarcomeres supported mostly by the passive structures [238]. The mixed configurations represent periodically modulated patterns that are undistinguishable from the homogeneous deformation if viewed at a coarse scale. The descending limb can be again interpreted as a union of positively sloped steps that can be now of vastly different sizes. It is interesting that the discrete structure of the force-length curve survives in the continuum limit, which instead of smoothening makes it extremely singular. More specifically, the variation of the degree of non-uniformity with elongation along the global energy minimum path exhibits a complete devil's staircase type behavior first identified in a different but conceptually related system [342], see Fig. 65 andRef. [238] for more details. To make the nonlocal model compatible with observations, one should again abandon the global minimisation strategy and associate the descending limb with metastable (rather than stable) states. In other words, one needs to apply an auxiliary construction similar to the one shown in Fig. 63 for the local model, which anticipates an outcome produced by a realistic kinetic model of tetanization. Non-muscle applications The prototypical nature of the main model discussed in this review (HS model, a parallel bundle of bistable units in passive or active setting ) makes it relevant far beyond the skeletal muscle context. It provides the most elementary description of molecular devices capable of transforming in a Brownian environment a continuous input into a binary, allor-none output that is crucial for the fast and efficient strokelike behavior. The capacity of such systems to flip in a reversible fashion between several metastable conformations is essential for many processes in cellular physiology, including cell signaling, cell movement, chemotaxis, differentiation, and selective expression of genes [343; 344]. Usually, both the input and the output in such systems, known as allosteric, are assumed to be of biochemical origin. The model, dealing with mechanical response and relying on mechanical driving, complements biochemical models and presents an advanced perspective on allostery in general. 
The most natural example of the implied hypersensitivity concerns the transduction channels in hair cells [345]. Each hair cell contains a bundle of N ≈ 50 stereocilia. The broadly accepted model of this phenomenon [119] views the hair bundle as a set of N bistable springs arranged in parallel. It is identical to the HS model if the folded (unfolded) configurations of cross-bridges are identified with the closed (opened) states of the channels. The applied loading, which tilts the potential and in this way biases the distribution of closed and open configurations, is treated in this model as in the hard device version of the HS model. Experiments, involving a mechanical solicitation of the hair bundle through an effectively rigid glass fiber, showed that the stiffness of the hair bundle is negative around the physiological functioning point of the system [120], which is fully compatible with the predictions of the HS model. A similar analogy can be drawn between the HS model and the models of collective unzipping for adhesive clusters [7; 12; 118; 341; 346]. At the micro-scale we again encounter N elements representing, for instance, integrins or cadherins, that are attached in parallel to a common, relatively rigid pad. The two conformational states, which can be described by a single spin variable, are the bound and the unbound configurations. The binding-unbinding phenomena in a mechanically biased system of the HS type are usually described by the Bell model [117], which is a soft device analog of the HS model with κ0 = ∞. In this model the breaking of an adhesive bond represents an escape from a metastable state and the corresponding rates are computed by using Kramers' theory [341; 347], as in the HS model. In particular, the rebinding rate is often assumed to be constant [263; 348], which is also the assumption of HS for the reverse transition from the post- to the pre-power-stroke state. More recently, Bell's model was generalized through the inclusion of ligand tethers, bringing a finite value to κ0 and using the master equation for the probability distribution of attached units [118; 348]. The main difference between the Bell-type models and the HS model is that the detached state cannot bear force while the unfolded conformation can. As a result, while the cooperative folding-unfolding (ferromagnetic) behavior in the HS model is possible in the soft device setting [99], similar cooperative binding-unbinding in the Bell model is impossible because the rebinding of a fully detached state has zero probability. To obtain cooperativity in models of adhesive clusters, one must use a mixed device, mimicking the elastic backbone and interpolating between soft and hard driving [118; 179; 197; 341]. Muscle tissues maintain stable architecture over long periods of time. However, it is also feasible that transitory muscle-type structures can be assembled to perform particular functions. An interesting example of such an assembly is provided by the SNARE proteins responsible for the fast release of neurotransmitters from neurons into synaptic clefts. The fusion of synaptic vesicles with the presynaptic plasma membrane [349; 350] is achieved by mechanical zipping of the SNARE complexes, which can in this way transform from an open to a closed conformation [351]. To complete the analogy, we mention that individual SNAREs participating in the collective zipping are attached to an elastic membrane that can be mimicked by an elastic or even rigid backbone [352].
The presence of a backbone mediating long-range interactions allows the SNAREs to cooperate in fast and efficient closing of the gap between the vesicle and the membrane. The analogy with muscles is corroborated by the fact that synaptic fusion takes place at the same time scale as the fast force recovery (1 ms) [353]. Yet another class of phenomena that can be rationalized within the HS framework is the ubiquitous flip-flopping of macro-molecular hairpins subjected to mechanical loading [187; 188; 196; 199]. We recall that in a typical experiment of this type, a folded (zipped) macromolecule is attached through compliant links to micron-sized beads trapped in optical tweezers. As the distance between the laser beams is increased, the force applied to the molecule rises up to a point where the subdomains start to unfold. An individual unfolding event may correspond to the collective rupture of N molecular bonds or an unzipping of a hairpin. The corresponding drops in the force accompanied by an abrupt increase in the total stretch can lead to an overall negative stiffness response [186; 199; 203]. Other molecular systems exhibiting cooperative unfolding include protein β-hairpins [354] and coiled coils [209]. The backbone dominated internal architecture in all these systems leads to common mean-field type mechanical feedback exploited by the parallel bundle model [355; 356]. Realistic examples of unfolding in macromolecules may involve complex "fracture" avalanches [357] that cannot be modeled by using the original HS model. However, the HS theoretical framework is general enough to accommodate hierarchical meta-structures whose stability can be also biased by mechanical loading. The importance of the topology of interconnections among the bonds and the link between the collective nature of the unfolding and the dominance of the HS-type parallel bonding have been long stressed in the studies of protein folding [358]. The broad applicability of the HS mechanical perspective on collective conformational changes is also corroborated by the fact that proteins and nucleic acids exhibit negative stiffness and behave differently in soft and hard devices [209; 359; 360]. The ensemble dependence in these systems suggests that additional structural information can be obtained if the unfolding experiments are performed in the mixed device setting. The type of loading may be affected through the variable rigidity of the "handles" [361; 362] or the use of an appropriate feedback control that can be modeled in the HS framework by a variable backbone elasticity. As we have already mentioned, collective conformational changes in distributed biological systems containing coupled bistable units can be driven not only mechanically, by applying forces or displacements, but also biochemically by, say, varying concentrations or chemical potentials of ligand molecules in the environment [363]. Such systems can become ultrasensitive to external stimulations as a result of the interaction between individual units undergoing conformational transformation which gives rise to the phenomenon of conformational spread [344; 364]. The switch-like input-output relations are required in a variety of biological applications because they ensure both robustness in the presence of external perturbations and ability to quickly adjust the configuration in response to selected stimuli [343; 365]. 
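The ensemble dependence invoked above can be illustrated with a minimal calculation. The sketch below (illustrative parameters, a simplified version of the HS setup with a rigid backbone and no additional series elasticity; not the authors' code) computes the equilibrium state of N identical bistable elements attached in parallel: in a hard device the elements respond independently and the average state varies smoothly with the imposed elongation, whereas in a soft device the shared backbone produces a mean-field coupling and, for β large enough (β > 4 in these units, consistent with the critical value quoted in the captions above), a switch-like collective response.

```python
# Minimal sketch (illustrative, not the authors' code): equilibrium response of N identical
# bistable elements in parallel on a rigid backbone.  Non-dimensional units: well separation
# l, energy bias v0 of the post-power-stroke well, inverse temperature beta (all assumed).
import numpy as np
from math import comb

N, beta, v0, l = 100, 10.0, 0.0, 1.0

def hard_device(y):
    """Fraction of post-power-stroke units and tension at imposed elongation y."""
    dw = v0 + 0.5 * l**2 - y * l              # energy gap between the two wells
    p = 1.0 / (1.0 + np.exp(beta * dw))
    sigma = y - p * l                          # tension per element
    return p, sigma

def soft_device(sigma):
    """Average fraction p and elongation <y> at imposed force sigma (shared backbone)."""
    k = np.arange(N + 1)
    # free energy of a configuration with k post-power-stroke units, backbone position integrated out
    e = k * v0 + 0.5 * k * l**2 - (l * k + N * sigma)**2 / (2.0 * N)
    logw = np.array([np.log(comb(N, int(kk))) for kk in k]) - beta * e
    w = np.exp(logw - logw.max())
    w /= w.sum()
    p = float((w * k).sum()) / N
    return p, l * p + sigma

if __name__ == "__main__":
    for y in np.linspace(-0.5, 1.5, 5):
        print("hard  y=%+.2f  p=%.3f  sigma=%+.3f" % ((y,) + hard_device(y)))
    for s in np.linspace(-0.5, 0.5, 5):
        print("soft  sigma=%+.2f  p=%.3f  <y>=%+.3f" % ((s,) + soft_device(s)))
```

Repeating the soft-device scan at small β recovers a smooth response, which is the simplest way to see why the same molecular system can look cooperative or gradual depending on the loading device, and why experiments in a mixed device carry additional structural information.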
The mastery of control of biological machinery through mechanically induced conformational spread is an important step in designing efficient biomimetic nanomachines [195; 366; 367]. Since interconnected devices of this type can be arranged in complex modular metastructures endowed with potentially programmable mechanical properties, they are of particular interest for the micro-engineering of energy harvesting devices [13]. To link this behavior to the HS model, we note that the amplified dose response, characteristic of allostery, is analogous to the sigmoidal stress response of the paramagnetic HS system, where an applied displacement plays the role of the controlled input of a ligand. Usually, in allosteric protein systems, the ultrasensitive behavior is achieved as a result of nonlocal interactions favoring all-or-none types of responses; moreover, the required long-range coupling is provided by mechanical forces acting inside membranes and molecular complexes. In the HS model such coupling is modeled by the parallel arrangement of elements, which preserves the general idea of nonlocality. Despite its simplicity, the appropriately generalized HS model [99] captures the main patterns of behavior exhibited by apparently purely chemical systems, including the possibility of a critical point mentioned in Ref. [363].

Conclusions

In contrast to inert matter, mechanical systems of biological origin are characterized by a structurally complex network architecture with dominant long-range interactions. This leads to highly unusual mechanical properties in both statics and dynamics. In this review we identified a particularly simple system of this type, mimicking a muscle half-sarcomere, and systematically studied its peculiar mechanics, thermodynamics and kinetics. In the study of passive force generation phenomena our starting point was the classical model of Huxley and Simmons (HS). The original prediction of the possibility of negative stiffness in this model remained largely unnoticed. For 30 years the HS model was studied exclusively in the hard device setting, which concealed the important role of cooperative effects. A simple generalization of the HS model for the mixed device reveals many new effects, in particular the ubiquitous presence of coherent fluctuations. Among other macroscopic effects exhibited by the generalized HS model are the non-equivalence of the response in soft and hard devices and the possibility of negative susceptibilities. These characteristics are in fact typical for nonlinear elastic materials in 3D at zero temperature. Thus, the relaxed energy of a solid material must be only quasi-convex, which allows for non-monotone stress-strain relations and different responses in soft and hard devices [368]. Behind this behavior is the long-range nature of elastic interactions, which muscle tissues appear to be emulating in 1D. For a long time it was also not noticed that the original parameter fit by HS placed skeletal muscles almost exactly at the critical point. Such criticality is tightly linked to the fact that the number of cross-bridges in a single half-sarcomere is of the order of 100. This number is now shown to be crucial to ensure mechanical ultrasensitivity that is not washed out by finite temperature, and it appears quite natural that the muscle machinery is evolutionarily tuned to perform close to a critical point.
This assumption is corroborated by the observation that criticality is ubiquitous in biology, from the functioning of the auditory system [120] to the macroscopic control of upright standing [369; 370]. The mechanism of fine-tuning to criticality can be understood if we view the muscle fiber as a device that can actively modify its rigidity. To this end the system should be able to generate a family of stall states parameterized by the value of the mesoscopic strain. A prototypical model reviewed in this paper shows that by controlling the degree of non-equilibrium in the system, one can indeed stabilize apparently unstable or marginally stable mechanical configurations, and in this way modify the structure of the effective energy landscape (when it can be defined). The associated pseudo-energy wells of resonant nature may be crucially involved in securing the robustness of the near-critical behavior of the muscle system. Needless to say, the mastery of tunable rigidity in artificial conditions can open interesting prospects not only in biomechanics [371] but also in engineering design incorporating negative stiffness [372] or aiming at synthetic materials involving dynamic stabilization [373; 374]. In addition to the stabilization of passive force generation, we also discussed different modalities of how a power-stroke-driven machinery can support active muscle contraction. We have shown that the use of a hysteretic design for the power-stroke motor allows one to reproduce mechanistically the complete Lymn-Taylor cycle. This opens a way towards dynamic identification of the chemical states, known from the studies of the prototypical catalytic reaction in solution, with particular transient mechanical configurations of the actomyosin complex. At the end of this review we briefly addressed the issue of the ruggedness of the global energy landscape of a tetanized muscle myofibril. The domain of metastability on the force-length plane was shown to be represented by a dense set of elastic responses parameterized by the degree of cross-bridge connectivity to actin filaments. This observation suggests that the negative overall slope of the force-length relation may be a combination of a large number of micro-steps with positive slopes. In this review we focused almost exclusively on the results obtained in our group and mentioned only peripherally some other related work. For instance, we did not discuss a vast body of related experimental results, e.g. Refs. [116; 166; 375; 376]. Among the important theoretical work that we left outside are the results on active collective dynamics of motors [12; 377-379]. Interesting attempts at building alternative models of muscle contraction [56; 380] and at creating artificial devices imitating muscle behavior [195] were also excluded from the scope of this paper. Other important omissions concern the intriguing mechanical behavior of smooth [381; 382] and cardiac [383-386; Caruel et al., CMBE] muscles. Despite the significant progress in the understanding of the microscopic and mesoscopic aspects of muscle mechanics achieved in the last years, many fundamental problems remain open. Thus, the peculiar temperature dependence of the fast force recovery [207; 388] has not been systematically studied, despite some recent advances [121; 180].
A similarly important challenge is presented by the delicate asymmetry between shortening and stretching, which may require taking into account the second myosin head [Brunello et al., Proc. Natl. Acad. Sci.]. Left outside most of the studies is the short-range coupling between cross-bridges due to filament extensibility [76], the inhomogeneity of the relative displacement between myosin and actin filaments, and more generally the possibility of non-affine displacements in the system of interacting cross-bridges. Other under-investigated issues include the mechanical role of additional conformational states [74] and the functionality of parallel elastic elements [389]. We anticipate that more efforts will also be focused on the study of contractile instabilities and actively generated internal motions [148], which should lead to the understanding of the self-tuning mechanism bringing sarcomeric systems towards criticality [99; 390; 391]. Criticality implies that fluctuations become macroscopic, which is consistent with observations at stall force conditions. The proximity to the critical point allows the system to amplify interactions, ensure strong feedback, and achieve considerable robustness in the face of random perturbations. In particular, it is a way to quickly and robustly switch back and forth between the highly efficient synchronized stroke and the stiff behavior in the desynchronized state [99].

Figure captions

Figure 3. Isometric contraction and isotonic shortening experiments: isometric force T0 as a function of the sarcomere length, linked to the amount of filament overlap, and the force-velocity relation obtained during isotonic shortening. Data taken from Ref. [61].
Figure 4. Fast transients in mechanical experiments on single muscle fibers in length clamp (hard device) and in force clamp (soft device), shown on slow and fast time scales; the numbered steps indicate the elastic response, the processes associated with the passive power stroke and the ATP-driven approach to steady state. Data adapted from Refs. [74-76].
Figure 6. Drastically different kinetics in phase 2 of the fast load recovery in length-clamp and force-clamp experiments. Data from Refs. [74; 80; 85-87; 93].
Figure 8. Structure of a myofibril: anatomic organization of half-sarcomeres linked by Z-disks and M-lines, schematic representation of the network of half-sarcomeres, and topological structure of the same network emphasizing the dominance of long-range interactions.
Figure 10. Behavior of the HS model with N = 10 at zero temperature: tension-elongation relations of the metastable states and of the global minimum path in hard and soft devices, and energy levels of the metastable states at different elongations and tensions. Adapted from Ref. [179].
Figure 12. Hill-type energy landscapes in a hard device for N = 1 and N = 4, together with the equilibrium free-energy profile, which is independent of N, and the metastable states for N = 4. Adapted from Ref. [121].
Figure 13. Equilibrium properties of the HS model in a hard device for different values of the temperature: Helmholtz free energy, tension-elongation relations and stiffness. Adapted from Ref. [121].
Figure 15. Mechanical behavior along the metastable branches: free energy of the metastable states, free energy at three different temperatures and tension-elongation curves.
Figure 16. Phase transition at σ = σ0 and its effect on the stochastic dynamics: bifurcation diagram, tension-elongation relations and collective dynamics with N = 100 in a soft device under constant force at different temperatures. Adapted from Ref. [99].
Figure 18. Different regimes of the HS model in the limit cases of hard and soft devices: in a hard device the pseudo-critical temperature β_c⁻¹ = 1/4 separates a regime with a monotone tension-elongation relation from a regime with negative stiffness; in a soft device this pseudo-critical point becomes a real critical point above which the system is bistable.
Figure 19. Phase diagram in the mixed device, showing three phases (I, II and III) and the typical dependence of the energy and the force on the loading in the subcritical and supercritical regimes.
Figure 20. Energy barriers in the HS model: two functioning regimes (one of which was not considered by Huxley and Simmons) and the relaxation rate as a function of the total elongation. Adapted from Ref. [121].
Figure 21. Generalization of the Huxley-Simmons treatment of the energy barriers based on the idea of a transition state, and the equilibration rate between the states as a function of the loading parameter at different values of ℓ. Adapted from Ref. [180].
Figure 22. Energy landscape characterizing the sequential folding process of N = 10 bistable elements in a soft device with σ = σ0. Adapted from Ref. [180].
Figure 23. Intra- and inter-basin relaxation rates in a soft device: relaxation towards the metastable state and forward, reverse and equilibration rates between the two macroscopic configurations. Adapted from Ref. [180].
Figure 24. Quasi-static response to ramp loading at different points of the phase diagram in hard, soft and mixed devices; stochastic trajectories are superimposed on the thermal-equilibrium response.
Figure 25. Relaxation of the average conformation in response to fast force drops at different temperatures and initial conditions, compared with the mean-field HS equation.
Figure 26. Soft-spin (snap-spring) model of a parallel cluster of cross-bridges: energy landscape of a bistable cross-bridge and structure of a parallel bundle of N cross-bridges. Adapted from Ref. [99].
Figure 27. Energy landscape along the global minimum path for the soft-spin model in a hard device at different values of the coupling parameter λb. Adapted from Ref. [179].
Figure 28. Soft-spin model at zero temperature with parameters adjusted to fit experimental data (see Table 1): tension-elongation relations of the metastable states and of the global minimum path, energy landscapes of the successive transitions and sizes of the energy barriers. Adapted from Ref. [179].
Figure 30. Bifurcation diagram with non-symmetric wells; solid (dashed) lines correspond to local minima (maxima) of the free energy.
Figure 32. Non-equilibrium energy landscapes at z = z0 together with trajectories obtained from stochastic simulations (symmetric system).
Figure 33. Soft-spin model in hard and soft devices: free energies and tension-elongation relations for the parameters listed in Table 1, with the corresponding domains of bistability.
Figure 34. Soft-spin model compared with the experimental data of Figs. 5 and 6: average trajectories after load steps in hard and soft devices, tension-elongation relation and rates of recovery. Adapted from Ref. [99].
Figure 35. Model of a sarcomere: a single sarcomere located between two Z-disks, with the M-line separating the two half-sarcomeres, each containing an array of N parallel cross-bridges connected by linear springs.
Figure 36. Mechanical equilibrium in a half-sarcomere chain with N = 2 and a symmetric double-well potential in a hard device: energy levels and tension-elongation relation.
Figure 38. Equilibrium response of a single sarcomere in the thermodynamic limit in hard and soft devices: free energies and tension-elongation relations.
Figure 39. Tension-elongation relations for a sarcomere in a hard device: equilibrium relations based on the partition function (2.39) compared with the response of two half-sarcomeres in series endowed with the hard- or soft-device constitutive law; see Ref. [212].
Figure 40. Tension-elongation relations for a sarcomere in a soft device, based on the partition function (2.40); see Ref. [212].
Figure 41. Global minimum of the hard-device energy in the zero-temperature limit for a sarcomere chain with different M: energies and tension-elongation relations.
Figure 42. Elongation of the half-sarcomeres along the global minimum path for M = 2 and M = 20 in a hard device, with pre- and post-power-stroke branches.
Figure 43. Influence of the parameter N on the equilibrium response of an infinitely long chain (M → ∞) in a hard device: free energy and tension-elongation relation.
Figure 44. Quick recovery response of a chain with M = 20 half-sarcomeres: tension-elongation relations and recovery rates in hard and soft devices compared with the experiments of Figs. 5 and 6.
Figure 45. Tension-elongation curves σ(z) in the case of periodic driving (adiabatic limit) for the equilibrium (A = 0) and out-of-equilibrium (A ≠ 0) systems, with the effective potential F(z). Adapted from Ref. [224].
Figure 46. Parameter dependence of the roots of the equation σ(z) = 0 in the adiabatic limit, showing a first-order (fixed D, varying A) and a second-order (A = 0, varying D) phase transition. Adapted from Ref. [224].
Figure 47. Phase diagram in the (A, D) plane showing phases I, II and III in the adiabatic limit and in the numerical solution at τ = 100; C_A is the tri-critical point and D_e the point of the second-order phase transition in the passive system. Adapted from Ref. [224].
Figure 48. Typical tension-length relations in phases I, II and III and the active component of the force, together with stochastic trajectories, stationary hysteretic cycles and the effective and active potentials. Adapted from Ref. [224].
Figure 49. Schematic representation of a parallel bundle of cross-bridges that can attach and detach; each cross-bridge is modeled as a series connection of a ratchet Φ, a bistable snap-spring u_SS and a linear elastic element.
Figure 50. Contour plot of the effective energy v(x, y; z0) at z0 = 0.
Figure 51. Time histories of the different mechanical units (myosin catalytic domain, power-stroke element, elastic element and backbone) in a load-clamp simulation at zero external force.
Figure 52. Illustration of the steric effect associated with the power-stroke and sketch of the mechanical model. Adapted from Ref. [244].
Figure 53. The functions Φ, u_SS, f and the coupling function Ψ used in the numerical experiments. Adapted from Ref. [244].

The implied steric rotation-translation coupling in ratchet models has been previously discussed in Refs. [154; 292; 293]. The energy of a single cross-bridge is written as Ĝ(x, y, d) = d Φ(x) + u_SS(y − x) (4.5), where Φ(x) is a non-polar periodic potential representing the binding strength of the actin filament and u_SS(y − x) is a symmetric double-well potential describing the power-stroke element (see Fig. 49). The coupling between the state of the power-stroke element y − x and the spatial position of the motor x is implemented through the variable d; in the simplest version of the model, d is a function of the state of the power-stroke element, d(x, y) = Ψ(y − x) (4.6). As the power-stroke is recharging, the myosin head moves progressively closer to the actin filament, so the function Ψ(y − x) should bring the actin potential back into the bound configuration. In view of (4.6), the variable d can be eliminated and the redressed potential G(x, y) = Ĝ[x, y, Ψ(y − x)] introduced.

Figure 54. The dependence of the average velocity v on the temperature D and the amplitude A of the ac signal; the pre- and post-power-stroke states are labeled in such a way that the purely mechanical ratchet would move to the left. Adapted from Ref. [244].
Figure 55. The hysteresis operator Ψ{y(t) − x(t)} linking the degree of attachment d with the previous history of the power-stroke configuration y(t) − x(t). Adapted from Ref. [244].
Figure 56. Stationary particle trajectories in the model with the hysteretic coupling (4.9). Adapted from Ref. [244].
Figure 57. The dependence of the average velocity v on the temperature D in the hysteretic model with δ = 0.5. Adapted from Ref. [244].
Figure 58. Schematic illustration of the four-step Lymn-Taylor cycle, a steady-state cycle of the hysteretic model projected on the (x, y − x) plane, and the same cycle in the (d, x, y − x) space with identification of the four chemical states A, B, C, D. Adapted from Ref. [244].
Figure 59. The force-velocity relation in the model with hysteretic coupling at different amplitudes A of the ac driving and different temperatures D. Adapted from Ref. [244].
Figure 60. Schematic isometric tetanus with a descending limb. Adapted from Ref. [238].
Figure 61. The model of a muscle myofibril. Adapted from Ref. [238].
Figure 62. Non-dimensional tension-elongation relations for the active element, for the passive elastic component and for the bundle. Adapted from Ref. [238].
Figure 63. The structure of the set of metastable branches of the tension-elongation relation for N = 10, for the total tension f and the active tension f_a; the thick gray line represents the anticipated tetanized response. Adapted from Ref. [238].
Figure 64. Schematic representation of the structure of a half-sarcomere chain surrounded by the connecting tissue. Adapted from Ref. [238].
Figure 65. Force-length relations along the global energy minimum path in the continuum limit for the model of Fig. 64, with and without the contributions of the connecting tissue and of the sarcomere passive elasticity. Adapted from Ref. [238].

Given the measured values of κ0 and a, the non-dimensional inverse temperature can be estimated as β = κ0 a²/(k_B T) = 71 ± 26. Once κb and κ0 are known, the number of cross-bridges attached in the state of isometric contraction follows from κ_tot = N κ0 κb/(N κ0 + κb); experimental data indicate that N = 106 ± 11 [80; 87; 100], which gives the coupling parameter λb = κb/(N κ0) = 0.54 ± 0.19.

Table 1. Realistic values (with estimated error bars) for the parameters of the snap-spring model (1 zJ = 10⁻²¹ J).

    Dimensional                              Non-dimensional
    a   = 10 ± 1 nm
    κ0  = 2.7 ± 0.9 pN nm⁻¹                  β   = 80 ± 30
    N   = 100 ± 30
    T   = 277.15 K
    κb  = 150 ± 10 pN nm⁻¹                   λb  = 0.56 ± 0.25
    κ1  = 3 ± 1 pN nm⁻¹                      λ1  = 0.5 ± 0.1
    κ2  = 1.05 ± 0.75 pN nm⁻¹                λ2  = 0.25 ± 0.15
    v0  = 50 ± 10 zJ                         v0  = 0.15 ± 0.30

Acknowledgments

We thank J.-M. Allain, L. Marcucci, I. Novak, R. Sheska and P. Recho for collaboration in the projects reviewed in this article. We are also grateful to the V. Lombardi group, D. Chapelle, P. Moireau, T. Lelièvre and P. Martin for numerous inspiring discussions. The PhD work of M.C. was supported by the Monge Fellowship from Ecole Polytechnique. L. T. was supported by the French Government under Grant No. ANR-10-IDEX-0001-02 PSL.
https://theses.hal.science/tel-01753845/file/CHEVRIE_Jason.pdf
ORGANIZATION OF THE THESIS … control are finally illustrated through several ex-vivo experimental scenarios using cameras or 3D ultrasound as visual feedback. Chapter 5: We consider the problem of patient motion during the needle insertion procedure. We first present an overview of motion compensation techniques for needle insertion. Our control approach introduced in Chapter 4 is then extended, and we exploit the model update method proposed in Chapter 3 in order to handle the insertion of a needle into tissues undergoing lateral motions. We provide the experimental results obtained using our control approach to guide the insertion of a needle into a phantom made of moving soft tissues. These experiments were performed using several feedback modalities, provided by a force sensor, an electromagnetic sensor and 2D ultrasound. Conclusion: Finally, we conclude this thesis and present perspectives for possible extensions and applications. Acknowledgments There are many people that I want to thank for making the realization of this thesis such a great experience. I have never really felt at ease talking about my life and my feelings, so I will keep it short and hope to make it meaningful. I would first like to deeply thank Alexandre Krupa and Marie Babel for the supervision of this work. I greatly appreciated the freedom and trust they gave me to conduct my research, as well as their guidance, feedback and support at the different stages of this thesis: from peaceful periods to harder times when I was late just a few hours before submission deadlines. I am really grateful to Philippe Poignet and Nicolas Andreff for the time they took to review my manuscript within the tight schedule that was given to them. I would also like to thank them, along with Bernard Bayle and Sarthak Misra, for being members of my thesis committee and for providing interesting feedback and discussions on my work to orient my future research. Additional thanks go to Sarthak for the great opportunity he gave me to spend some time in the Netherlands at the Surgical Robotics Laboratory. I would also like to thank the other members of this lab for their welcome, and especially Navid for the time we spent working together. I would like to express my gratitude to François Chaumette for introducing me to the Lagadic team months before I started this thesis, which gave me the desire to work there. Many thanks go to all the members and ex-members of this team that I had the pleasure to meet during these years at IRISA. A bit for their help with my work, but mostly for the great atmosphere during all the coffee/tea/lunch breaks and the various out-of-office activities. The list has become too long to mention everyone separately, but working on this thesis during these years would not have been such a great pleasure without all of the moments we spent together, and I hope this can continue. I would also like to thank all my friends from Granville, Cachan or Rennes, who all contributed to the success of this thesis, consciously or not. A special thank you to Rémi, for various reasons in general, but more specifically here for being among the very few people to proof-read a part of this manuscript and give some feedback.
Finally, my warmest thanks go to my family for their continuous support and encouragements during all this time, before and during the thesis. Les procédures cliniques mini-invasives se sont largement étendues durant ce dernier siècle. La méthode traditionnellement utilisées pour traiter un patient a longtemps été de recourir à la chirurgie ouverte, qui consiste à faire de larges incisions dans le corps pour pouvoir observer et manipuler ses structures internes. Le taux de succès de ce genre d'approche est tout d'abord limité par les lourdes modifications apportées au corps du patient, qui mettent du temps à guérir et peuvent entrainer des complications après l'opération. Il s'ensuit également un risque accru d'infection dû à l'exposition des tissues internes à l'environnement extérieur. Au contraire, les procédures mini-invasives ne requièrent qu'un nombre limité de petites incisions pour accéder aux organes. Le bien-être général du patient est donc amélioré grâce à la réduction des douleurs post-opératoires et la limitation de la présence de larges cicatrices. Le temps de rétablissement des patients est également grandement réduit [EGH + 13], en même temps que les risques d'infection [START_REF] Gandaglia | Effect of minimally invasive surgery on the risk for surgical site infections: Results from the national surgical quality improvement program (nsqip) database[END_REF], ce qui conduit à de meilleurs taux de succès des opérations et une réduction des coûts pour les hôpitaux. Lorsque la chirurgie ouverte était nécessaire avant l'introduction de l'imagerie médicale, diagnostique et traitement pouvaient faire partie d'une seule et même opération, pour tout d'abord voir les organes et ensuite planifier et effectuer l'opération nécessaire. Les rayons X ont été parmi les premiers moyens découverts pour permettre l'obtention d'une vue anatomique de l'intérieur du corps sans nécessiter de l'ouvrir. Plusieurs modalités d'imagerie ont depuis été développées et améliorées, parmi lesquelles la tomodensitométrie (TDM) [START_REF] Hounsfield | Computerized transverse axial scanning (tomography): Part 1. description of system[END_REF], l'imagerie par résonance magnétique (IRM) [START_REF] Lauterbur | Image formation by induced local interactions: Examples employing nuclear magnetic resonance[END_REF] et l'échographie [START_REF] Wild | Application of echo-ranging techniques to the determination of structure of biological tissues[END_REF] sont maintenant largement utilisées dans le domaine médical. Au-delà des capacités de diagnostic accrues qu'elle offre, l'imagerie médicale a joué un rôle important dans le développement de l'approche chirurgicale mini-invasive. Observer l'intérieur du corps est nécessaire au succès d'une intervention chirurgicale, afin de voir les tissus d'intérêt et la position des outils chirurgicaux. De part la nature même de la chirurgie miniinvasive, une ligne de vue directe sur l'intérieur du corps n'est pas possible et il est donc nécessaire d'utiliser d'autres moyens d'observation visuelle, tels que l'insertion d'endoscope ou des techniques d'imagerie anatomique. i RÉSUMÉ EN FRANÇAIS Chaque technique a ses propres avantages et inconvénients. Les endoscopes utilisent des caméras, ce qui offre une vue similaire à un oeil humain. Les images sont donc faciles à interpréter, cependant il n'est pas possible de voir à travers les tissus. 
À l'opposé, l'imagerie anatomique permet de visualiser l'intérieur des tissus, mais un entrainement spécifique des médecins est nécessaire pour l'interprétation des images obtenues. La tomodensitométrie utilise des rayons X, qui sont des radiations ionisantes, ce qui limite néanmoins le nombre d'images qui peuvent être acquises afin de ne pas exposer le patient à des doses de rayonnement trop importantes [SBA + 09]. L'équipe médicale doit également rester en dehors de la salle où se trouve le scanner pendant la durée d'acquisition. D'un autre côté l'IRM utilise des radiations non-invasives et fournit également des images de haute qualité, avec une grande résolution et un large champ de vue. Cependant ces deux modalités imposent de sévères contraintes, telles qu'un long temps nécessaire pour obtenir une image ou un équipement coûteux et encombrant qui limite l'accès au patient. Dans ce contexte l'échographie est une modalité de choix grâce à sa capacité à fournir une visualisation en temps réel des tissus et des outils chirurgicaux en mouvement. De plus, elle est non-invasive et ne requière que des scanners légers et des sondes facilement manipulables. Des outils longilignes sont souvent utilisés pour les procédures miniinvasives afin d'être insérés à travers de petites incisions réalisées à la surface du patient. En particulier les aiguilles ont été utilisées depuis longtemps pour extraire ou injecter des substances directement dans le corps. Elles procurent un accès aux structures internes tout en ne laissant qu'une faible marque dans les tissus. Pour cette raison elles sont des outils de premier choix pour une invasion minimale et permettent d'atteindre de petites structures dans des régions profondes du corps. Cependant les aiguille fines peuvent présenter un certain niveau de flexibilité, ce qui rend difficile le contrôle précis de leur trajectoire. Couplé au fait qu'une sonde échographique doit être manipulée en même temps que le geste d'insertion d'aiguille, la procédure d'insertion peut rapidement devenir une tâche ardue qui requière un entrainement spécifique des cliniciens. En conséquence, le guidage robotisé des aiguilles est devenu un vaste sujet de recherche pour fournir un moyen de faciliter l'intervention des cliniciens et augmenter la précision générale de la procédure. La robotique médicale a pour but de manière générale de concevoir et contrôler des systèmes mécatroniques afin d'assister les cliniciens dans leur tâches. L'objectif principal étant d'améliorer la précision, la sécurité et la répétabilité des opérations tout en réduisant leur durée [START_REF] Taylor | Medical robotics in computer-integrated surgery[END_REF]. Cela peut grandement bénéficier aux procédures d'insertion d'aiguille en particulier, pour lesquelles la précision est bien souvent cruciale pour éviter les erreurs de ciblage et la répétition inutile d'insertions. L'intégration d'un système robotique dans les blocs opératoires reste un grand défi en raison des contraintes cliniques et de l'acceptation du dispositif technique par le personnel médical. Parmi les différentes conceptions qui ont été proposées, certains sysii MOTIVATIONS CLINIQUES tèmes présentent plus de chances de succès que d'autres. De tels systèmes doivent offrir soit une assistance au chirurgien sans modifier de manière significative le déroulement de l'opération soit des bénéfices clairs à la fois sur la réussite de l'opération et sur les conditions opératoires du chirurgien. 
C'est le cas par exemple des systèmes d'amélioration des images médicales ou de suppression des tremblements ou encore des systèmes télé-opérés. Pour les procédures d'insertion d'aiguille, cela consisterait principalement à fournir un monitoring en temps réel du déroulement de l'insertion ainsi qu'un système robotique entre le patient et la main du chirurgien servant à assister le processus d'insertion. À cet égard, un système robotique guidé par échographie est un bon choix pour fournir une imagerie intra-opératoire en temps réel et une assistance pendant l'opération. Motivations cliniques Les aiguilles sont largement utilisées dans une grande variété d'actes médicaux pour l'injection de substances ou le prélèvement d'échantillons de tissus ou de fluides directement à l'intérieur du corps. Alors que certaines procédures ne nécessitent pas un placement précis de la pointe de l'aiguille, comme les injections intramusculaires, le résultat des opérations sensibles dépend grandement de la capacité à atteindre une cible précise à l'intérieur du corps. Dans la suite nous présentons quelques applications pour lesquelles un ciblage précis et systématique est crucial pour éviter des conséquences dramatiques et qui pourraient grandement bénéficier d'une assistance robotisée. Biopsies pour le diagnostic de cancer Le cancer est devenu une des causes majeures de mortalité dans le monde avec 8.2 millions de décès dus au cancer estimés à travers le monde en 2015 [TBS + 15]. Parmi les nombreuses variétés de cancer, le cancer de la prostate est l'un des plus diagnostiqués parmi les hommes et le cancer du sein parmi les femmes, le cancer du poumon étant aussi une cause majeure de décès pour les deux. Cependant la détection précoce des cancers peut améliorer la probabilité de succès d'un traitement et diminuer le taux de mortalité. Indépendamment du type de tumeur, la biopsie est la méthode de diagnostic traditionnellement utilisée pour confirmer la malignité de tissus suspects. Elle consiste à utiliser une aiguille pour prélever un petit échantillon de tissu à une position bien définie à des fins d'analyse. Le placement précis de l'aiguille est d'une importance capitale dans ce genre de procédure afin d'éviter une erreur de diagnostic due au prélèvement de tissus sains autour de la région suspectée. Les insertions manuelles peuvent donner des résultats variables qui dépendent du clinicien effectuant l'opération. Le guidage robotique de l'aiguille a donc le potentiel de grandement améliorer les performances des biopsies. Un retour échographique est souvent utilisé, par iii RÉSUMÉ EN FRANÇAIS exemple pour le diagnostic du cancer de la prostate [START_REF] Kaye | Robotic ultrasound and needle guidance for prostate cancer manage-ment: review of the contemporary literature[END_REF]. La tomodensitométrie est également un bon choix pour le cancer du poumon et un système robotique est d'une grande aide afin de compenser les mouvements de respiration [ZTK + 13]. Les systèmes robotiques peuvent également être utilisés afin de maintenir et modifier la position des tissus pour aligner une tumeur potentielle avec l'aiguille, particulièrement dans le cas de biopsies du cancer du sein [START_REF] Mallapragada | Robotassisted real-time tumor manipulation for breast biopsy[END_REF]. Curiethérapie La curiethérapie a prouvé être un moyen efficace pour traiter le cancer de la prostate [GBS + 01]. Elle consiste à placer de petits grains radioactifs dans la tumeur à détruire. 
Cette procédure nécessite le placement précis et uniforme d'une centaine de grains, ce qui peut prendre du temps et requière une grande précision. Les conséquences d'un mauvais placement peuvent être dramatiques par la destruction de structures sensibles alentours, comme la vessie, le rectum, la vésicule séminale ou l'urètre. L'insertion est habituellement effectuée sous échographie trans-rectale, ce qui peut permettre d'utiliser un système robotisé pour accomplir des insertions précises et répétées sous guidage échographique [START_REF] Hungr | A 3d ultrasound robotic prostate brachytherapy system with prostate motion tracking[END_REF] [SSK + 12] [START_REF] Kaye | Robotic ultrasound and needle guidance for prostate cancer manage-ment: review of the contemporary literature[END_REF]. L'IRM est également couramment utilisée et fait l'objet de recherche pour une utilisation avec un système robotique [START_REF] Seifabadi | Toward teleoperated needle steering under continuous mri guidance for prostate percutaneous interventions[END_REF]. Cancer du foie Après le cancer du poumon, le cancer du foie est la cause majeure de décès dus au cancer chez l'homme, avec environ 500000 décès chaque année [TBS + 15]. L'ablation par radiofréquence est la principale modalité thérapeutique utilisée pour effectuer une ablation de tumeur du foie [START_REF] Lencioni | Percutaneous image-guided radiofrequency ablation of liver tumors[END_REF]. Une sonde d'ablation, apparentée à une aiguille, est insérée dans le foie et génère de la chaleur pour détruire localement les tissus. Guider précisément la sonde sous guidage visuel peut éviter la destruction inutile de trop de tissus. Les biopsies du foie peuvent également être effectuées en utilisant des aiguilles de ponction percutanée [START_REF] Grant | Guidelines on the use of liver biopsy in clinical practice[END_REF]. Utiliser un guidage robotisé sous modalité échographique pourrait permettre d'éviter de multiple insertions qui augmentent les saignements hépatiques et peuvent avoir de graves conséquences. Contributions Dans cette thèse nous traitons du contrôle automatique d'un système robotique pour l'insertion d'une aiguille flexible dans des tissus mous sous guidage échographique. Traiter ce sujet nécessite de considérer plusieurs points. Tout d'abord l'interaction entre l'aiguille et les tissus doit être modélisée afin iv CONTRIBUTIONS de pouvoir prédire l'effet du système robotique sur l'état de la procédure d'insertion. Le modèle doit être capable de représenter les différents aspects de l'insertion et être à la fois suffisamment simple pour être utilisé en temps réel. Une méthode de contrôle doit également être conçue pour permettre de diriger la pointe de l'aiguille vers sa cible tout en maintenant la sécurité de l'opération. Le ciblage précis est rendu difficile par le fait que les tissus biologiques peuvent présenter une grande variété de comportements. Guider l'aiguille introduit aussi nécessairement une certaine quantité de dommages aux tissus, de telle sorte qu'un compromis doit être choisi entre le succès du ciblage et la réduction des dommages. Les mouvements physiologiques du patient peuvent également être une source importante de mouvement de la région ciblée et doivent aussi être pris en compte pour éviter d'endommager les tissus ou l'aiguille. Finalement la détection fiable de l'aiguille dans les images échographiques est un pré-requis pour pouvoir guider l'aiguille dans la bonne direction. 
Cependant cette tâche est rendue difficile par la faible qualité de la modalité échographique. Afin de relever ces défis, nous apportons plusieurs contributions dans cette thèse, qui sont : • Deux modèles 3D de l'interaction entre une aiguille flexible à pointe biseautée et des tissus mous. Ces modèles sont conçus pour permettre un calcul en temps réel et fournir une représentation 3D de l'ensemble du corps de l'aiguille pendant son insertion dans des tissus en mouvement. • Une méthode d'estimation des mouvements latéraux des tissus en utilisant uniquement des mesures disponibles sur le corps de l'aiguille. • Une méthode de suivi d'aiguille flexible dans des volumes échographiques 3D qui prend en compte les artefacts inhérents à la modalité échographique. • La conception d'une approche de contrôle pour un système robotique insérant une aiguille flexible dans des tissus mous. Cette approche a été développée de manière à être facilement adaptable à n'importe quels composants matériels, que ce soit le type d'aiguille, le système robotique utilisé pour le contrôle des mouvements de l'aiguille ou la modalité de retour utilisée pour obtenir des informations sur l'aiguille. Elle permet également de considérer des stratégies de contrôle hybrides, comme la manipulation des mouvements latéraux appliqués à la base de l'aiguille ou le guidage de la pointe de l'aiguille exploitant une géométrie asymétrique de cette pointe. • La validation ex-vivo des méthodes proposées en utilisant diverses plateformes expérimentales et différents scénarios afin d'illustrer la flexibilité de notre approche de commande pour différents cas d'insertion d'aiguille. v RÉSUMÉ EN FRANÇAIS Organisation de la thèse Le contenu de chaque chapitre de cette thèse est à présent détaillé dans la suite. Chapitre 1: Nous présentons le contexte clinique et scientifique dans lequel s'inscrit cette thèse. Nous définissons également nos objectifs principaux et présentons les différents défis associés. Le matériel utilisé dans les différentes expériences effectuées est également présenté. Chapitre 2: Nous présentons une vue d'ensemble des modèles d'interaction aiguille/tissus. Un état de l'art des différentes familles de modèles est tout d'abord fourni, avec un classement des modèles selon leur complexité et leur utilisation prévue en phase pre-opératoire ou intra-opératoire. Nous proposons ensuite une première contribution sur la modélisation 3D d'une aiguille à pointe biseautée, qui consiste en deux modèles numériques pouvant être utilisés pour des applications en temps-réel et offrant la possibilité de considérer le cas de tissus en mouvement. Les performances des deux modèles sont évaluées et comparées à partir de données expérimentales. Chapitre 3: Nous traitons le problème du suivi du corps d'une aiguille incurvée dans des volumes échographiques 3D. Les principes généraux de l'acquisition d'images échographiques sont tout d'abord décrits. Ensuite nous présentons une vue d'ensemble des algorithmes récents de détection et de suivi utilisés pour la localisation du corps de l'aiguille ou seulement de sa pointe dans des séquences images échographiques 2D ou 3D. Nous proposons ensuite une nouvelle contribution au suivi 3D d'une aiguille en exploitant les artefacts naturels apparaissant autour de l'aiguille dans des volumes 3D. Finalement nous proposons également une méthode de mise à jour de notre modèle d'aiguille en utilisant les mesures acquises pendant l'insertion pour prendre en compte les mouvements latéraux des tissus. 
Le modèle mis à jour est utilisé pour prédire la nouvelle position de l'aiguille et améliorer le suivi de l'aiguille dans le prochain volume 3D acquis. Chapitre 4: Nous nous concentrons sur le sujet principal de cette thèse qui est le contrôle robotisé d'une aiguille flexible insérée dans des tissus mous sous guidage visuel. Nous dressons tout d'abord un état de l'art sur le guidage d'aiguilles flexibles, depuis le contrôle bas niveau de la trajectoire de l'aiguille jusqu'à la planification de cette trajectoire. Nous présentons ensuite la contribution principale de cette thèse, qui consiste en une approche de contrôle pour le guidage d'aiguille qui a la particularité d'utiliser plusieurs stratégies de guidage et qui est indépendante du type de manipulateur robotique utilisé pour actionner l'aiguille. Les performances de cette approche de vi Chapter 1 Introduction Minimally invasive procedures have greatly expanded over the past century. The traditional way to cure a patient has long been to resort to open surgery, which consists in making a large cut in the body to observe and manipulate its intern parts. The success rate of such an approach is first limited by the heavy modifications made to the body, which take time to heal and can lead to complications after the surgery. There is also a greater risk of subsequent infections due to the large exposure of the inner body to the outside environment. On the contrary, minimally invasive procedures only require a limited number of small incisions to access the organs. Therefore, this improves the overall well-being of the patient thanks to reduced postoperative pain and scarring. The recovery time of the patient is also greatly reduced [EGH + 13] along with the risk of infections [START_REF] Gandaglia | Effect of minimally invasive surgery on the risk for surgical site infections: Results from the national surgical quality improvement program (nsqip) database[END_REF], resulting in higher success rates for the operations and a cost reduction for the hospitals. When open surgery was necessary before the introduction of medical imaging, diagnosis and treatment could be two parts of a same intervention, in order to first see the organs and then plan and perform the required surgery. X-rays were among the first tools discovered to provide an anatomical view of the inside of the body without needing to open it. Several imaging modalities have since been developed and improved for medical purposes, among which computerized tomography (CT) [START_REF] Hounsfield | Computerized transverse axial scanning (tomography): Part 1. description of system[END_REF], magnetic resonance imaging (MRI) [START_REF] Lauterbur | Image formation by induced local interactions: Examples employing nuclear magnetic resonance[END_REF] and ultrasound (US) [START_REF] Wild | Application of echo-ranging techniques to the determination of structure of biological tissues[END_REF] are now widely used in the medical domain. Beyond the improved diagnosis capabilities that it offers, medical imaging has played an important role in the development of the minimally invasive surgery approach. Viewing the inside of the body is necessary to perform successful surgical interventions, in order to see the tissues of interest and the position of the surgical tools. Due to the nature of minimally invasive surgery, a direct view is not possible and it is thus necessary to use other means of visual observation, such as endoscope insertion or anatomical imaging techniques. 
Each technique has its own advantages and drawbacks. Endoscopes use cameras, which offer the same view as a human eye. The 1 CHAPTER 1. INTRODUCTION images are thus easy to interpret, however it is not possible to see through the tissues. On the other hand, anatomical imaging allows a visualization of the inside of the tissues, but a specific training of the physicians is required in order to interpret the images. CT imaging uses X-rays, which are ionizing radiations, therefore limiting the number of images that can be acquired in order not to expose the patient to a too high amount of radiations [SBA + 09]. The medical staff should also remain outside the scanner room during the acquisition. On the other hand MRI makes use of non-invasive radiations and also provides high quality images, with high resolution and large field of view. However they impose severe constraints, such as a long time to acquire an image or an expensive and bulky scanner that limits the access to the patient. In this context, ultrasonography is a modality of choice for intra-operative imaging, due to its ability to provide a real-time visualization of tissues and tools in motion. Additionally, it is non-invasive and requires lightweight scanners and portable probes. Slender tools are often used for minimally invasive procedures in order to be inserted through narrow incisions made at the surface of the patient. In particular, needles have been used since long times to extract or inject substances directly inside the body. They provide an access to inner structures while leaving only a very light wound in the tissues. For this reason they are tools of first choice for minimal invasiveness that can allow reaching small structures in deep regions. Thin needles can however exhibit a certain amount of flexibility, which makes accurate steering of the needle trajectory more complicated. Coupled to the handling of an US probe at the same time as the needle insertion gesture, the insertion procedure can become a challenging task which requires specific training of the clinician. Consequently, robotic needle steering has become a vast subject of research to ease the intervention of the clinician and to improve the overall accuracy of the procedure. Medical robotics in general aims at designing and controlling mechatronics systems to assist the clinicians in their tasks. The main goal is to improve the accuracy, safety and repeatability of the operations and to reduce their duration [START_REF] Taylor | Medical robotics in computer-integrated surgery[END_REF]. It can greatly benefit the needle insertion procedures for which accuracy is often crucial to avoid mistargeting and unnecessary repeated insertions. However, the integration of a robotic system in the operating room remains a great challenge due to clinical constraints and acceptance of the technical device from the medical staff. Among the many designs that have been proposed, some systems have better chances of being accepted. Such systems should either assist the surgeon without requiring a lot of modifications of the clinical workflow or should procure clear benefits for both the success of the operation and the operating conditions of the surgeon. This is for example the case of imaging enhancement and tremor cancellation systems, or of tele-operated systems. For needle insertions procedures, this would mainly consists in providing a real-time monitoring of the 1.1. 
CLINICAL MOTIVATIONS state of the insertion as well as a robotic system between the patient and the hand of the surgeon assisting at the insertion process. In this context, an USguided robotic system is a great choice to provide real-time intra-operative imaging and assistance during the operation. Clinical motivations Needles are widely used in a great variety of medical acts for the injection of substances or the sampling of fluids or tissues directly inside the body. While some procedures do not require an accurate placement of the needle tip, such as intramuscular injections, the results of sensitive operations highly depend on the ability to reach a precise location inside the body. In the following we present some applications for which systematic accurate targeting is crucial to avoid dramatic consequences and which could greatly benefit from a robotic assistance. Biopsy for cancer diagnosis Cancer has become one of the major cause of death in the world with 8.2 million cancer deaths estimated worldwide in 2015 [TBS + 15]. Among the many types of cancers, prostate cancer is the most diagnosed cancer among men and breast cancer among women, with lung cancer being a leading cause of cancer deaths for both. However early detection of cancer can improve the chance of success of cancer treatment and diminish the mortality rates. Whatever the kind of tumor, biopsies are the traditional diagnostic method used to confirm the malignancy of suspected tissues. It consists in using a needle to get a small sample of tissues at a defined location for analysis purposes. The accurate placement of the needle is of paramount importance in this procedure to avoid misdiagnosis due to the sampling of healthy tissues surrounding the suspected lesion. Freehand insertions can give variable results depending on the clinician performing the operation. Therefore, robotic needle guidance under visual feedback has the potential to greatly improve the performances of biopsies. Ultrasound feedback is often used, as for example for the diagnostic of prostate cancer [START_REF] Kaye | Robotic ultrasound and needle guidance for prostate cancer manage-ment: review of the contemporary literature[END_REF]. Computerized tomography (CT) is also a good choice for lung cancer diagnosis and a robotic system is of great help to compensate for breathing motions [ZTK + 13]. Robotic systems can also be used to maintain and modify the position of the tissues to align a suspected tumor with the needle, especially for breast cancer biopsy [START_REF] Mallapragada | Robotassisted real-time tumor manipulation for breast biopsy[END_REF]. Brachytherapy Brachytherapy has proven to be an efficient way to treat prostate cancer [GBS + 01]. It consists in placing small radioactive seeds in the tumors to CHAPTER 1. INTRODUCTION destroy. The procedure requires the accurate uniform placement of about a hundred seeds, which can be time consuming and require great accuracy. The consequence of misplacement can be dramatic due to the destruction of surrounding sensitive tissues like bladder, rectum, seminal vesicles or urethra. 
The insertion is usually performed under transrectal ultrasound, which can allow the use of robotic systems to perform accurate and repetitive insertions under ultrasound (US) guidance [START_REF] Hungr | A 3d ultrasound robotic prostate brachytherapy system with prostate motion tracking[END_REF] [SSK + 12] [START_REF] Kaye | Robotic ultrasound and needle guidance for prostate cancer manage-ment: review of the contemporary literature[END_REF]. Magnetic resonance imaging (MRI) is also commonly used and is the subject of research to explore its use together with a robotic system [START_REF] Seifabadi | Toward teleoperated needle steering under continuous mri guidance for prostate percutaneous interventions[END_REF]. Liver cancer Liver cancer is the major cause of cancer deaths after lung cancer among men with about 500000 deaths each year [TBS + 15]. Radiofrequency ablation is the primary therapetic modality to perform liver tumor ablations [START_REF] Lencioni | Percutaneous image-guided radiofrequency ablation of liver tumors[END_REF]. An electrode needle is inserted in the liver and generates heat to locally destroy the tissues. Accurately guiding the needle under imageguidance can help avoiding unnecessary tissue destruction. Liver biopsies can also be performed using percutaneous punction needles [START_REF] Grant | Guidelines on the use of liver biopsy in clinical practice[END_REF]. Performing robotic ultrasound (US) guidance could avoid multiple insertions that increase hepatic bleeding and can have dramatic consequences. Scientific context Reaching a specific region in the body without performing open surgery is a challenging task that has been a vast subject of research and developments over the past decades. Many robotic designs have been proposed to achieve this goal. In the following we present a non-exhaustive overview of these different technologies as well as various kinds of sensor modalities that have been developed and used to provide feedback on the medical procedure. We then define where we positioned the work presented in this thesis relative to this context. Robotic designs Continuum robots: These systems are snake-like robots consisting of a succession of actively controllable articulations, as can be seen in Fig. 1.1a. They offer a large control over their whole shape and can be used to perform many kinds of operations. Many varieties of designs are possible and the study of such robots is a vast field of research by itself [START_REF] Walker | Robot strings: Long, thin continuum robots[END_REF] [START_REF] Burgner-Kahrs | Continuum robots for medical applications: A survey[END_REF]. However their design and control are often complex and their diameter is usually larger than standard needles, which limit the use of such system in practice. Concentric tubes: This kind of robots, also known as active cannulas, is a special kind of continuum robots which consist of a telescopic set of flexible concentric pre-curved tubes that can slide and rotate with respect to each other [START_REF] Webster | Design and kinematic modeling of constant curvature continuum robots: A review[END_REF]. Each tube is initially maintained inside the larger tubes and the insertion of such device is performed by successively inserting each set of tubes and leaving in place the outermost tubes one after another, as seen in Fig. 1.1b. They offer additional steering capabilities compared to flexible needles due to the pre-curved nature of each element, while maintaining a relatively small diameter. 
Furthermore, once the tubes have been deployed, rotation of the different elements allows for controlled deformations of the system all along its body. Although the design can be limited to only one pre-curved stylet placed in an outer straight tube, as was proposed in [OEC + 05], other designs are possible to enable an additional control of the curvature of each tube [START_REF] Chikhaoui | Kinematics and performance analysis of a novel concentric tube robotic structure with embedded soft micro-actuation[END_REF]. As for continuum robots, the modeling and control of such systems remain quite complex [START_REF] Dupont | Design and control of concentric-tube robots[END_REF] [BLH + 16]. Needle insertion devices: Many robotic systems have been designed for the insertion of traditional needles and particularly for asymmetric-tip needles. Several kinds of asymmetries are possible, as illustrated in Fig. 1.2. These needles tend to naturally deviate from a straight trajectory, such that the rotation around their shaft plays an important role. Many needle insertion systems have been proposed, all being variants of the same design consisting of one linear stage for the insertion and one rotary stage for the rotation of the needle along and around its main axis [START_REF] Webster | Design considerations for robotic needle steering[END_REF], as depicted in Fig. 1.3 (general concept of a needle insertion device, taken from [START_REF] Webster | Design considerations for robotic needle steering[END_REF]). Active needles: Alternatively, many designs have been proposed to replace traditional needles and provide additional control capabilities over their bending. A needle made of multiple segments that can slide along each other was designed such that the shape of the tip can be modified during the insertion [START_REF] Ko | Closedloop planar motion control of a steerable probe with a "programmable bevel"; inspired by nature[END_REF]. A 1 degree of freedom (DOF) actuated needle tip was designed such that it can act as a pre-bent tip needle with a variable angle between the shaft and the pre-bent tip [AGL + 16]. A similar tendon-actuated needle tip with 2 DOF was also used to allow the orientation of the tip without rotation of the needle around its axis [START_REF] Roesthuis | Modeling and steering of a novel actuatedtip needle through a soft-tissue simulant using fiber bragg grating sensors[END_REF]. Additional considerations about tip designs can be found in [START_REF] Van De Berg | Design choices in needle steering-a review[END_REF]. These needle designs allow a high controllability of the tip trajectory; however, in addition to the increased complexity of the needle itself, they require a special system to control the additional DOF from the needle base. Different methods can also be combined, as was done in [SMR + 15], where a succession of a cable-driven continuum robot, concentric tubes and a beveled-tip needle increases the reachable space and the final targeting accuracy. Sensor feedback In order to be used for needle insertion assistance, a robotic system should be able to monitor the state of the insertion. Therefore, feedback modalities have to be used to provide information on the needle and the tissues. The choice of the sensors is an important issue that has to be taken into account from the beginning of the conception of the system.
Indeed, they should either be directly integrated into the system, or they can pose compatibility issues in the case of external modalities. In the following we provide an overview of some feedback modalities currently used or explored for needle insertion procedures. Shape feedback: The shape of the entire needle can be reconstructed using fiber Bragg grating (FBG) sensors. This kind of sensor consists of several optic fibers integrated in the needle. The light propagates differently in these fibers depending on the curvature of the fiber at certain locations, such that the curvature of the needle can be measured and used to retrieve its shape [PED + 10]. This kind of sensor requires a special design of the needle, since the fibers need to follow the same deformations as the needle does. An electromagnetic (EM) tracker can also be used to track the position and orientation of a specific point of the needle, typically the tip. Such trackers provide very accurate measurements, and currently available trackers are small enough to fit directly in standard needles. Real-time imaging modalities: Feedback on the needle position is not sufficient for needle insertions, since the position of the targeted region must also be known. Using an imaging modality can provide a visual feedback on the position of both the needle and the target. Ultrasound (US) is the modality of choice for real-time imaging due to its fast acquisition rate of 2D or 3D images, good resolution and safety. Special 2.5D US transducers are also the subject of current research to enable a direct detection and display of the needle tip in a 2D US image, even when the tip is outside the imaging plane of the probe [XWF + 17]. However, these transducers are currently not commonly available. A limiting factor of US in general is the low quality of the images due to the intrinsic properties of US waves. On the other hand, computerized tomography (CT) or magnetic resonance imaging (MRI) are used for manual insertions thanks to the high quality of their images and the large field of view that they offer. However, as stated previously, these imaging methods cannot be used directly for real-time image-guided robotic needle insertion because of their long acquisition times. They can still be used for non-real-time tele-operated robotic control, by alternating between insertion steps and imaging steps; however, a single needle insertion can then take more than 45 minutes. Tissue motions between two acquisitions are also an issue that requires additional real-time sensors to be compensated for, such as force sensors [MDG + 05] or optical tracking [ZTK + 13]. On the contrary, CT fluoroscopy can be used to acquire real-time images. However, in manual needle insertion this exposes the clinician to a high dose of noxious radiations. This can be avoided by wearing impractical special shielding or by using a remotely controlled insertion system [SPB + 02]. However, the patient is still exposed to the high amount of radiation necessary for real-time performance. Fast MRI acquisition has also recently been explored to perform image-guided needle insertion [PvKL + 15]. By decreasing the image size and quality, a 2D image could be acquired with a sufficient resolution every 750 ms. By comparison, the US modality can provide a full 3D volume with similar resolution within the same acquisition time, and 2D US can acquire several tens of images per second.
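To make the shape feedback principle described at the beginning of this section more concrete, the short sketch below reconstructs a planar needle shape from discrete curvature measurements, which is the kind of information an FBG-instrumented needle provides. It is only an illustrative simplification, not the reconstruction method of [PED + 10]: it assumes purely planar bending, evenly spaced measurement stations along the arc length, and a base clamped at the origin and tangent to the x axis; all numerical values are hypothetical.

```cpp
// Minimal sketch (not the thesis implementation): reconstructing a planar needle
// shape from curvature samples, as done conceptually with FBG sensing.
// Assumptions: the curvature is known at evenly spaced arc-length stations, the
// needle base is at the origin and tangent to the x axis, and bending is planar.
#include <cmath>
#include <cstdio>
#include <vector>

struct Point2D { double x, y; };

// Integrate theta(s) = integral of kappa ds, then (x, y) = integral of the tangent.
std::vector<Point2D> reconstructShape(const std::vector<double>& curvature, double ds)
{
    std::vector<Point2D> shape;
    shape.push_back({0.0, 0.0});
    double theta = 0.0, x = 0.0, y = 0.0;
    for (double kappa : curvature) {
        theta += kappa * ds;       // bending angle accumulated over one segment
        x += std::cos(theta) * ds; // forward Euler integration of the tangent
        y += std::sin(theta) * ds;
        shape.push_back({x, y});
    }
    return shape;
}

int main()
{
    // Hypothetical curvature profile (1/m) measured at 10 stations, 5 mm apart:
    // a constant 2 m^-1 curvature produces a circular arc.
    std::vector<double> kappa(10, 2.0);
    for (const Point2D& p : reconstructShape(kappa, 0.005))
        std::printf("%.4f %.4f\n", p.x, p.y);
    return 0;
}
```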
Force feedback: Force sensors can be used to measure the forces applied to the needle and tissues. Force sensing can be useful to monitor the state of the insertion, for example by detecting the perforation of the different structures that the needle is going through [START_REF] Okamura | Force modeling for needle insertion into soft tissue[END_REF]. It can also be used with tele-operated robotic systems to provide a feedback to the clinician [PBB + 09] or compensate for tissue motion [START_REF] Joinié-Maurin | Force feedback teleoperation with periodical disturbance compensation[END_REF]. Any kind of force sensors can be used with the US modality, however compatibility issues have to be taken into account for the design of sensors compatible with CT [KPM + 14] or MRI [START_REF] Gassert | Sensors for applications in magnetic resonance environments[END_REF]. CHALLENGES Objectives The objective of this thesis is to focus on the robotic steering of traditional flexible needles. These needles are already widely available and used in clinical practice. Moreover they do not require specific hardware, contrary to other special robotic designs, which requires dedicated control hardware and techniques. The idea is then to provide a generic formulation of the different concepts that we introduce, such that our work can be adapted to several kinds of needle tip and rigidity. In this context, the control of the full motion of the needle base should thus be performed, such that it is not limited to flexible beveled-tip needles but can also be used to insert rigid symmetric tip needles. The formulation should also stay as much as possible independent of the actual robotic system used to perform the needle steering. This choice is motivated by the fact that it would ease the clinical acceptance of the method and could be applicable to several robotic systems and medical applications. Another objective is to focus on the insertion under ultrasound (US) guidance, motivated by the fact that it is already used in current medical practice and does not require any modification of the needle to provide a real-time feedback on its whole shape. For the development and validation of our work, we try to keep in mind some clinical constraints related to the set-up and registration time, which should be as small as possible. Several other modalities have also to be explored, such as force feedback and electromagnetic (EM) feedback, which can easily be implemented alongside traditional needles and the US modality. Challenges In order to fulfill our objective of performing the ultrasound-guided control of a robotic system for the insertion of a flexible needle in soft tissues, several challenges needs to be addressed. We describe these different challenges in the following. Interaction modeling: First, the control of the insertion of a flexible needle with a robotic system requires a model of the interaction between the needle and soft tissues. The effect of the inputs of the robotic insertion system on the needle position and effective length have to be modeled as well. The model should be complete to represent the whole body of the needle in 3D as well as the influence of the tip geometry on its trajectory. It should also be generic enough so that it can be easily adaptable to several kinds of needles. Since it is used for intra-operative purposes, it should be able to represent the current state of the insertion, taking into account the effect of potential motions of the tissues on the deformation of the needle. 
While complex and accurate models of the needle and tissues exist, the complexity of the modeling must remain reasonable such that real-time performances can be achieved. Needle control: The control of the trajectory of a flexible needle is a challenging task in itself. The complex interaction of the needle with the tissues at its tip and along its shaft is difficult to completely predict, especially because of the great variety of behaviors exhibited by biological tissues. Accurately reaching a target requires then to take into account and to exploit the flexibility of the needle and the motion of the tissues. The safety of the operation should also be ensured to avoid excessive damage caused by the needle onto the tissues. This is a difficult task since inserting the needle necessarily introduces a certain amount of tissue cutting, and steering the needle can only be achieved through an interaction of the needle with the tissues. Tissue motion: Many needle insertion procedures are not performed under general anesthesia. As a consequence, physiological motions of the patient can not always be controlled. Patient motions can have several effects on the needle insertion. First it introduces a motion of the targeted region, which should be accounted for in order to avoid mistargeting. The trajectory of a flexible needle could also be modified by tissue motions. During manual needle insertion, clinicians can directly see the motion of the skin of the patient and feel the forces applied on the needle and tissues, such that they can easily follow the motions of the patient if needed. A robotic system should also be able to adapt to some motions of the patient while inserting the needle to avoid threatening the safety of the operation. This point represents a great challenge due to the limited perception available for the robotic system. Needle detection: Accurate localization of the needle in ultrasound (US) images is a necessary condition to be able to control the state of the insertion. The low quality of US images is a first obstacle to the accurate localization of the needle tip. It can greatly vary depending on the tissues being observed and the position of the needle relatively to the US probe. Using 3D US has the advantage that the whole shaft of the needle can be contained in the field of view of the US probe, which is not the case with 2D US. However, even in 3D US the needle is not equally visible at all points due to specific artifacts that can come from the surrounding tissues or from the needle itself. Even if the 3D volume acquisition is relatively fast, the position of the needle in the volume can still greatly vary due to the motion of the patient or of the probe between two acquisitions. Overall needle localization using US feedback represents a challenging task that is still an open issue that has to be addressed. CONTRIBUTIONS Contributions In order to address the challenges mentioned previously, we present several contributions in this thesis, which are: • two 3D models of the interaction between a flexible needle with a bevel tip and soft tissues. 
The models are designed to allow real-time processing and to provide a 3D representation of the entire needle body during the insertion in moving tissues; • a method to estimate the lateral motions of the tissues using only the measures available on the needle; • a method for tracking a flexible needle in 3D ultrasound volumes taking into account the artifacts inherent to the ultrasound modality; • the design of a framework for the control of a robotic system holding a flexible needle inserted in soft tissues. The framework is designed to be easily adaptable to any hardware components, whatever the needle type, the robotic system used for the control of the needle motion or the feedback modality used to provide information on the needle location. It can also provide hybrid control strategies like manipulation of the lateral motions of the needle base or tip-based steering of the needle tip; • the ex-vivo validations of the proposed methods using various experimental platforms and scenarios in order to illustrate the flexibility of the framework in performing needle insertions. The contributions on the topic of an hybrid control strategy used to steer a flexible needle under visual feedback were published in an article in the proceedings of the International Conference on Robotics and Automation (ICRA) [START_REF] Chevrie | Needle steering fusing direct base manipulation and tip-based control[END_REF]. The contributions on the topic of needle modeling and tissue motion estimation using visual feedback were published in an article in the proceedings of the International Conference on Intelligent Robots and Systems (IROS) [START_REF] Chevrie | Online prediction of needle shape deformation in moving soft tissues from visual feedback[END_REF]. Experimental context Experiments presented in this thesis were primarily conducted on the robotic platform of the Lagadic team at IRISA/Inria Rennes, France. Others were also conducted at the Surgical Robotics Laboratory attached to the University of Twente, Enschede, the Netherlands. This offered the opportunity to test the genericity of our methods using different experimental setups. We present in this section the list of the different equipments that we used in the different experiments presented all along this thesis. CHAPTER 1. INTRODUCTION The general setup that we used is made up of four parts: a needle attached to a robotic manipulator, several homemade phantoms simulating soft tissues, a set of sensors providing various kinds of feedbacks and a workstation used to process the data and manage the communications between the different components. Robots Two different kinds of needle manipulation systems were used, the first one in France and the second one in the Netherlands. • The Viper s650 and Viper s850 from Omron Adept Technologies, Inc. (Pleasanton, California, United States) are 6 axis industrial manipulators, depicted in Fig. 1.5a. The robots communicate with the workstation through a FireWire (IEEE 1394) connection. They were used to hold and actuate the needle or to hold the 3D ultrasound (US) probe. They were also used to apply motions to the phantom in order to simulate patient motions. • The UR3 and UR5 from Universal Robots A/S (Odense, Denmark) are 6 axis table-top robots, depicted in Fig. 1.5b. Both robots were connected to a secondary workstation and communicated through Ethernet using Robot Operating System (ROS) (Open Source Robotics Foundation, Mountain View, USA). 
UR3 was used to hold and actuate an insertion device described in the following. UR5 is a larger version of UR3 and was used to apply a motion to the phantom. We also used a 2 degrees of freedom needle insertion device (NID), visible in Fig. 1.5b, designed at the Surgical Robotics Laboratory [START_REF] Shahriari | Design and evaluation of a computed tomography (CT)compatible needle insertion device using an electromagnetic tracking system and CT images[END_REF], which controls the insertion and rotation of the needle along and around its axis. A Raspberry Pi 2 B (Raspberry Pi foundation, Caldecote, United Kingdom) along with a Gertbot motor controller board (Fen logic limited, Cambridge, United Kingdom) were used to control the robot through pulse-width-modulation (PWM). Motor encoders were used to measure the position and rotation of the needle, allowing to know its effective length that can bend outside the NID. The NID was connected to the end effector of the UR3 through a plastic link, as can be seen in Fig. 1.5b, allowing the control of the 3D pose of the NID with the UR3. Visual feedback systems We used two different modalities to provide a visual feedback on the needle and phantom position. Cameras were used for the evaluation of the performances of the control framework and ultrasound (US) probes were used to validate the framework using a clinical modality. We used in France two Point Grey FL2-03S2C cameras from FLIR Integrated Imaging Solutions Inc. (formerly Point Grey Research, Richmond, BC, Canada), which are color cameras providing 648 x 488 images with a frame rate up to 80 images per second. Each camera was coupled with a DF6HA-1B lens from Fujifilm (Tokyo, Japon), which has a 6 mm focal length with manual focus. The cameras send the acquired images to the workstation through a FireWire (IEEE 1394) connection. This system was used only with translucent gelatin phantoms to enable the observation of the needle for validation purposes. Both cameras and a gelatin phantom can be seen in Fig. 1.6. A white screen monitor or a piece of paper were used to offer a uniform background behind the phantom that facilitates the segmentation of the needle in the images. Two different US systems were used for the experiments. For the experiments performed in France, we used a 4DC7-3/40 convex 4D US probe (see Fig. 1.7a) from BK Ultrasound (previously Ultrasonix Medical Corporation, Canada), which is a wobbling probe with frequency range from 3 MHz to 7 MHz, transducer radius of 39.8 mm and motor radius of 27.25 mm. This probe was used with a SonixTOUCH research US scanner from BK Ultrasound (see Fig. 1.7b). The station allows an access to raw data via an Ethernet connection, such as radio frequency data or pre-scan B-mode data. For the experiments performed in the Netherlands, we used a 7CF2 Convex Volume 4D/3D probe (see Fig. 1.7c) from Siemens AG (Erlangen, Germany), which is a wobbling probe with frequency range from 2 MHz to 7 Mhz, transducer radius of 44.86 mm and motor radius of 14.84 mm. This probe was used with an Acuson S2000 US scanner from Siemens (see Fig. 1.7d). This station does not give access to raw data nor online access to transformed data. Pre-scan 3D US volumes can be retrieved offline using the digital imaging and communications in medicine (DICOM) format. Nevertheless, 2D images were acquired online using a USB frame grabber device from Epiphan Video (Ottawa, Ontario, Canada) connected to the video output of the station. 
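As an illustration of what the pre-scan B-mode data mentioned above represents, the following sketch performs a basic 2D scan conversion for a convex probe, mapping samples acquired along fan-shaped scanlines to a Cartesian image. It is a simplified, assumption-laden example rather than the ViSP/CUDA conversion actually used in this work: the angular pitch, axial sample spacing and output resolution are illustrative values, only the transducer radius echoes the 4DC7-3/40 specification, and the 3D motor wobbling is ignored.

```cpp
// Simplified sketch of 2D pre-scan to post-scan conversion for a convex probe.
// Scanlines are assumed to fan out from the transducer's center of curvature;
// the geometric parameters below are illustrative, not exact probe settings.
#include <cmath>
#include <vector>

struct PrescanImage {
    int nSamples;            // samples per scanline (depth direction)
    int nScanlines;          // number of scanlines
    double sampleSpacing;    // meters between two axial samples
    double anglePitch;       // radians between two adjacent scanlines
    double transducerRadius; // meters (e.g. 0.0398 for the 4DC7-3/40)
    std::vector<unsigned char> data; // row-major: sample index x scanline index
    unsigned char at(int s, int l) const { return data[s * nScanlines + l]; }
};

// Nearest-neighbor gather: for each Cartesian pixel, find the pre-scan sample.
std::vector<unsigned char> scanConvert(const PrescanImage& pre,
                                       int width, int height, double pixelSize)
{
    std::vector<unsigned char> post(width * height, 0);
    const double R = pre.transducerRadius;
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            // Cartesian coordinates with origin at the probe apex, y into the tissue.
            double x = (u - width / 2.0) * pixelSize;
            double y = v * pixelSize;
            double r = std::sqrt(x * x + (y + R) * (y + R)); // distance to center of curvature
            double theta = std::atan2(x, y + R);             // scanline angle
            int s = static_cast<int>(std::round((r - R) / pre.sampleSpacing));
            int l = static_cast<int>(std::round(theta / pre.anglePitch + (pre.nScanlines - 1) / 2.0));
            if (s >= 0 && s < pre.nSamples && l >= 0 && l < pre.nScanlines)
                post[v * width + u] = pre.at(s, l);
        }
    }
    return post;
}

int main()
{
    PrescanImage pre{512, 128, 0.0001, 0.008, 0.0398,
                     std::vector<unsigned char>(512 * 128, 0)};
    std::vector<unsigned char> post = scanConvert(pre, 400, 300, 0.0002);
    return post.empty() ? 1 : 0;
}
```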
Phantoms Different phantoms were used for the experiments. Porcine gelatin was used in all phantoms, either alone or embedding ex-vivo biological tissues. We used either porcine or bovine liver as biological tissues. The gelatin and tissues were embedded in transparent plastic containers of different sizes. Various artificial targets were also embedded in some phantoms, in the form of raisins or play-dough spheres of different sizes, ranging from 4 mm to 8 mm. Workstations All software developments were made using the C++ language. We used the ViSP library [START_REF] Marchand | Visp for visual servoing: a generic software platform with a wide class of robot control skills[END_REF] as a basis for the majority of the control framework, image processing, graphical user interface and communications. The CUDA library was used to optimize the post-scan conversion of 3D ultrasound volumes on an Nvidia GPU. The Eigen library was used for fast matrix inversion in the needle modeling. For the experiments performed in France we used a workstation running Ubuntu 14.04 LTS 64-bit and equipped with 32 GB memory, an Intel® Xeon® E5-2620 v2 @ 2.10 GHz × 6 CPU and an NVIDIA® Quadro® K2000 GPU. For the experiments performed in the Netherlands we used a personal computer running Fedora 24 64-bit and equipped with 16 GB memory and an Intel® Core™ i7-4600U @ 2.10 GHz × 4 CPU. Needles We summarize the characteristics of the different needles used in the experiments in Table 1.1. A picture of the needle used in France and a zoom on the beveled tip can be seen in Fig. 1.8. Force sensor For the experiments performed in the Netherlands, we used a Nano 43 force torque sensor from ATI Industrial Automation (Apex, USA), which is a six-axis sensor measuring forces and torques in all 3 Cartesian directions with a resolution of 1.95 mN for forces and 25 µN.m for torques. The sensor was placed between the UR3 robot and the needle insertion device to measure the interaction efforts exerted at the base of the needle, as depicted in Fig. 1.9a. Electromagnetic tracker For the experiments performed in the Netherlands, we used an Aurora v3 electromagnetic (EM) tracking system from Northern Digital Inc. (Waterloo, Canada), which consists of a 5-degrees-of-freedom EM sensor (see Fig. 1.9b) placed in the tip of the needle and an EM field generator (see Fig. 1.9c). The system is used to measure the 3D position and axis alignment of the needle tip, with a position accuracy of 0.7 mm and an orientation accuracy of 0.20°, at a maximum rate of 65 measurements per second. Thesis outline In this chapter we presented the clinical and scientific context of this thesis. We defined our general objective as being the robotic insertion of a flexible needle in soft tissues under ultrasound (US) guidance and we described the associated challenges. A list of the equipment used in the various experiments presented in this thesis was also provided. The remainder of this manuscript is organized as follows. Chapter 2: We present an overview of needle-tissue interaction models. A review of different families of models is first provided, with a classification of the models depending on their complexity and intended use for pre-operative or intra-operative purposes. We then propose a first contribution on the 3D modeling of a beveled-tip needle interacting with soft tissues consisting of two numerical models that can be used for real-time applications and offering the possibility to consider the case of moving tissues.
The performances of both models are evaluated and compared through experiments. Chapter 3: We address the issue of tracking the body of a curved needle in 3D US volumes. The general principles of the acquisition process of US images and volumes are first described. Then we present an overview of recent detection and tracking algorithms used to localize the whole needle body or only the needle tip in 2D or 3D US sequences. We then propose a new contribution to 3D needle tracking that exploits the natural artifacts appearing around the needle in US volumes. Finally we also propose a method to update our needle model using the measures acquired during the insertion to take into account lateral tissue motions. The updated model is used to predict the new position of the needle and to improve needle tracking in the next acquired US volume. Chapter 4: We focus on the core topic of this thesis which is the robotic steering of a flexible needle in soft tissues under visual guidance. We first provide a review of current work on flexible needle steering, from the low level control of the needle trajectory to the planning of this trajectory. We then present the main contribution of this thesis, which consists in a needle steering framework that has the particularity to include several steering strategies and which is independent of the robotic manipulator used to steer the needle. The performances of the framework are illustrated through several ex-vivo experimental scenarios using cameras and 3D US probes as visual feedback. Chapter 5: We consider the issue of patient motions during the needle insertion procedure. An overview of motion compensation techniques during needle insertion is first presented. We further extend our steering framework proposed in chapter 4 and we exploit the model update method proposed in chapter 3 in order to handle needle steering under lateral motions of the tissues. We provide experimental results obtained by using the proposed framework to perform needle insertion in a moving soft tissue phantom. These experiments were performed using several information feedback modalities, such as a force sensor, an electromagnetic tracker as well as 2D US. Conclusion: Finally we provide the conclusion of this dissertation and present perspectives for possible extensions and applications. Chapter 2 Needle insertion modeling This chapter provides an overview of needle-tissue interaction models. The modeling of the behavior of a needle interacting with soft tissues is useful for many aspects of needle insertion procedures. First it can be used to predict the trajectory of the needle tip, before inserting the real needle. This can be of great help to the clinicians in order to find an adequate insertion entry point that optimizes the chances of reaching a targeted region inside the body, while reducing the risks of damaging other sensitive regions. Secondly, using thinner needles allows decreasing the patient pain and the risk of bleeding [START_REF] Gill | Does needle size matter?[END_REF]. However, the stiffness of a thin needle is greatly reduced and causes its shaft to bend during the insertion. This makes the interaction between the needle and tissues more complex to comprehend by the clinicians, since the position of the needle tip is not directly known from the position and orientation of the base, contrary to rigid needles. 
The introduction of a robotic manipulator holding the needle and controlling its trajectory can be of great help to unburden the operator of the needle manipulation task. This removes a potential source of human error and leaves the clinicians free to focus on other aspects of the procedure [START_REF] Abolhassani | Needle insertion into soft tissue: A survey[END_REF]. Needle-tissue interaction models are a necessity for the usage of such robotic systems, in order to know how they should be controlled to modify the needle trajectory in the desired way. In the following, we first provide a review of needle-tissue interaction models. We address the case of kinematic models (section 2.1), which only consider the trajectory of the tip of the needle, and the case of finite element modeling (section 2.2) that can completely model the behavior of the needle and the surrounding tissues. Then we present mechanics-based models (section 2.3) used to represent the body of the needle without modeling all the surrounding tissues. We further extend on this topic and propose two new 3D models of a needle locally interacting with soft tissues (section 2.4). Finally, in section 2.5 we compare the trajectories of the needle tip obtained with both models to the trajectories obtained during the insertion of a real CHAPTER 2. NEEDLE INSERTION MODELING needle. The work done using both models was published in two articles presented in international conferences [CKB16a] [START_REF] Chevrie | Online prediction of needle shape deformation in moving soft tissues from visual feedback[END_REF]. Kinematic modeling During the insertion of a needle, a force is applied to the tissues by the needle tip to cut a path in the direction of the insertion. In return the tissues apply reaction forces to the needle tip and the direction of this forces depends on the geometry of the tip, as illustrated in Fig. 2.1. In the case of a symmetric needle tip, the lateral forces tends to negate each other, leaving only a force aligned with the needle. The needle tip trajectory then follows a straight line when the needle in inserted. However when the needle tip has an asymmetric shape, as for example in the case of a beveled or pre-curved tip, inserting the needle results in a lateral reaction force. The needle trajectory bends in the direction of the reaction force. The exact shape of the trajectory depends on the properties of the needle and tissues. The stiffness of the needle introduces internal forces that naturally act against the bending of the shaft. The deformations of the tissues also creates forces all along the needle body, which modify its whole shape. Kinematic modeling is used under the assumption that the tissues are stationary and no lateral motion is applied to the needle base, such that the different forces are directly related to the amount of deflection observed at the tip. The value of all these forces are ignored in this case and only the trajectory of the tip is represented from a geometric point of view. The whole needle shaft is ignored as well and the insertion and rotation along and around the needle axis are assumed to be directly transmitted to the tip. This way the modeling is limited to the motion of the tip during the insertion or rotation of the needle. Note that this kind of representation is limited to asymmetric geometries of the tip, since a symmetric tip would only produce a straight trajectory that does not require a particular modeling. 
Kinematic modeling of the behavior of a needle during its insertion was first developed by analogy with nonholonomic wheeled vehicles, leading to the unicycle and bicycle models described below.

Unicycle model: The 2D unicycle model consists in modeling the tip as the center of a single wheel that can translate in one direction and rotate around the normal direction, as illustrated in Fig. 2.2a. During the needle insertion, the needle tip is assumed to follow a circular trajectory. The ratio between translation and rotation is fixed by the natural curvature K_nat of this circular trajectory and depends on the needle and tissue properties, such that
\[
\begin{cases}
\dot{x} = v_{ins}\cos(\theta)\\
\dot{y} = v_{ins}\sin(\theta)\\
\dot{\theta} = K_{nat}\, v_{ins},
\end{cases}
\tag{2.1}
\]
where x and y are the coordinates of the wheel center, i.e. the needle tip, θ is the orientation of the wheel and v_ins is the insertion velocity.

Bicycle model: The 2D bicycle model uses two rigidly fixed wheels at a distance L_w from each other, such that the front wheel lies on the axis of the rear wheel and is misaligned by a fixed angle φ, as illustrated in Fig. 2.2b. The point representing the needle tip lies somewhere between the two wheels, at a distance L_t from the rear wheel. In addition to the rotation and the velocity in the insertion direction observed with the unicycle model, the tip is also subject to a lateral translation velocity, directly linked to the distance L_t. The trajectory of the tip is then described according to
\[
\begin{cases}
\dot{x} = v_{ins}\left(\cos(\theta) - \dfrac{L_t}{L_w}\tan(\phi)\sin(\theta)\right)\\[2pt]
\dot{y} = v_{ins}\left(\sin(\theta) + \dfrac{L_t}{L_w}\tan(\phi)\cos(\theta)\right)\\[2pt]
\dot{\theta} = \dfrac{\tan(\phi)}{L_w}\, v_{ins},
\end{cases}
\tag{2.2}
\]
where x and y are the coordinates of the needle tip, θ is the orientation of the rear wheel and v_ins is the insertion velocity. This model is equivalent to the unicycle model when the tip is at the center of the rear wheel, i.e. L_t = 0.

Rotation around the needle axis: The rotation around the needle axis is also taken into account in kinematic models. They were first mainly used in the 2D case, such that the needle tip stays in a plane [RMK + 11]. The tip can then only describe a curvature toward the right or the left, so that a change of direction corresponds to a 180° rotation of a real 3D needle. The tip trajectory is thus a continuous curve made up of a succession of arcs. However, a better modeling of the needle insertion is achieved by considering the 3D case, where the rotation around the needle axis is continuous. In this case the orientation of the asymmetry fixes the direction in which the tip trajectory curves during the needle insertion. This can lead to a greater variety of motions, such as helical trajectories [HAC + 09].

Discussion: Kinematic modeling is easy to implement since it needs few parameters and is not computationally expensive. However, the relationship between the model parameters and the real needle behavior is difficult to establish since these parameters depend on the needle geometry and the tissue properties. In practice they are often identified after performing some preliminary insertions in the tissues. Since this is not feasible for real surgical procedures, online estimation of the natural curvature of the tip trajectory can be performed, for example by using a method based on a Kalman filter as proposed by Moreira et al. [START_REF] Moreira | Needle steering in biological tissue using ultrasound-based online curvature estimation[END_REF]. It can also be observed that the trajectories obtained with both unicycle and bicycle models are limited. For example, they are continuous when a rotation without insertion is performed between two insertion steps (a minimal numerical integration of both models is sketched below).
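To make these kinematic models concrete, the following minimal sketch, which is only illustrative and not part of the steering framework presented later, integrates equations (2.1) and (2.2) with a forward Euler scheme. The insertion velocity, natural curvature, wheelbase L_w, tip offset L_t and steering angle φ are placeholder values that would in practice be identified from preliminary insertions.

```cpp
#include <cmath>
#include <cstdio>

// Planar state of the needle tip: position (x, y) and heading theta.
struct TipState { double x = 0.0, y = 0.0, theta = 0.0; };

// Unicycle model, equation (2.1): the tip follows an arc of natural curvature K_nat.
void stepUnicycle(TipState &s, double v_ins, double K_nat, double dt) {
    s.x     += v_ins * std::cos(s.theta) * dt;
    s.y     += v_ins * std::sin(s.theta) * dt;
    s.theta += K_nat * v_ins * dt;
}

// Bicycle model, equation (2.2): rear and front wheels separated by L_w, fixed
// steering angle phi, tip located at a distance L_t from the rear wheel.
void stepBicycle(TipState &s, double v_ins, double L_w, double L_t, double phi, double dt) {
    const double c  = std::cos(s.theta), si = std::sin(s.theta);
    const double k  = std::tan(phi) / L_w;          // curvature of the rear-wheel path
    s.x     += v_ins * (c  - L_t * k * si) * dt;
    s.y     += v_ins * (si + L_t * k * c ) * dt;
    s.theta += v_ins * k * dt;
}

int main() {
    // Placeholder parameters (illustration only): 2 mm/s insertion, K_nat = 2 m^-1.
    const double v_ins = 0.002, dt = 0.01;
    TipState uni, bi;
    for (double t = 0.0; t < 30.0; t += dt) {        // simulate a 60 mm insertion
        stepUnicycle(uni, v_ins, 2.0, dt);
        stepBicycle(bi, v_ins, 0.03, 0.01, 0.06, dt); // L_w = 3 cm, L_t = 1 cm, phi ~ 3.4 deg
    }
    std::printf("unicycle tip: (%.4f, %.4f) m\n", uni.x, uni.y);
    std::printf("bicycle  tip: (%.4f, %.4f) m\n",  bi.x,  bi.y);
    return 0;
}
```

With such an integration, one can directly verify the limitation mentioned above when a pure rotation is applied between two insertion steps.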
The two successive parts of the trajectory are tangent if the unicycle model is used and are not tangent if the bicycle model is used. However, both models fail to describe the trajectory of a pre-bent needle, for which a translational offset is also added when the needle is rotated. Hence modifications have to be made to account for the fact that the tip is not aligned with the axis of the rotation [RKA + 08]. Another point is that kinematic models do not take into account the interaction between the body of the needle and the tissues. The shaft is 2.2. FINITE ELEMENT MODELING assumed to exactly follow the trajectory of the needle tip and has no influence on this trajectory. This assumption can only hold if the needle is very flexible and the tissues are stiff, such that the forces due to the bending of the needle are small enough to cause very little motion of the tissues. The tissues must also be static, such that they do not modify the position of the needle during the insertion. These assumptions are easy to maintain during experimental research work, but harder to maintain in clinical practice due to patient physiological motions and variable tissue stiffness. An extension of the bicycle model that takes into account additional lateral translations of the needle tip is possible [FKR + 15]. This allows a better modeling of the tip motion, but it requires additional parameters that need to be estimated and can vary depending on the properties of the tissues, which limits its practical use. Finite element modeling Finite element modeling (FEM) is used to model the whole tissue and needle. In addition to the effect of the needle-tissue interaction on the needle shape, the resulting deformations of the tissues are also computed. The method consists in using a finite set of elements interacting with each other, each element representing a small region of interest of the objects being modeled, as can be seen in Fig. 2.3a. This allows the modeling of the needle deformations as well as the motion of a targeted region in the tissues, due to the needle interaction or due to external manipulation of the tissues [THA + 09][PVdBA11]. This requires a description of the geometry of the tissues and the needle as well as a certain amount of physical parameters for all of them that depends on the chosen complexity of the mechanical model. In general the computational complexity of such models is high when compared to other modeling methods. The time required for the computations increases with the level of details of the model, i.e. the number CHAPTER 2. NEEDLE INSERTION MODELING of elements used to represent each object, and the number and complexity of the phenomena taken into consideration. Modeling the exact boundary conditions and properties of real in vivo objects, such as different organs in the body, is also a challenging task. This makes FEM hard to use for real-time processing without dedicated hardware optimization and limits its use to pre-planning of needle and target trajectories [AGPH09] [START_REF] Hamzé | Preoperative trajectory planning for percutaneous procedures in deformable environments[END_REF]. However, it offers a great flexibility on the level of complexity, which can be chosen independently for the different components in the model. We provide in the following a short overview of different models that can be used for the needle and the tissues. Needle: Various complexity can be chosen for the needle model. A 1D beam model is often used under various forms. 
It can for example be a rigid beam [START_REF] Dimaio | Needle insertion modeling and simulation[END_REF], a flexible beam [START_REF] Dimaio | Interactive simulation of needle insertion models[END_REF] or a succession of rigid beams linked by angular springs [GDS09] [START_REF] Haddadi | Development of a dynamic model for bevel-tip flexible needle insertion into soft tissues[END_REF]. The needle geometry can also be modeled entirely in 3D to accurately represent its deformations and the effect of the tip geometry [MRD + 08][YTS + 14]. Tissues: Tissues can also be modeled with different levels of complexity, ranging from a 2D rectangular mesh with elastostatic behavior [START_REF] Dimaio | Interactive simulation of needle insertion models[END_REF] to 3D mesh with real organ shape [CAR + 09] (see Fig. 2.3b) and dynamic nonlinear behavior [START_REF] Tang | Constraint-based soft tissue simulation for virtual surgical training[END_REF]. The complexity of the interactions between the needle and the tissues can also vary. In addition to interaction forces due to the lateral displacements of the needle, tangential forces are often added as an alternation between friction and stiction along the needle shaft [DGM + 09], introducing a highly non-linear behavior. The complexity of the tissue cutting at the needle tip and along the needle shaft can also greatly vary. It usually involves a change in the topology of the model [CAK + 14], which can be simple to handle if the needle is modeled as a 1D beam [START_REF] Goksel | Haptic simulator for prostate brachytherapy with simulated needle and probe interaction[END_REF] or more complex when using a 3D modeling of the needle and non-linear fracture phenomenon in the tissues [START_REF] Oldfield | Detailed finite element modelling of deep needle insertions into a soft tissue phantom using a cohesive approach[END_REF][YTS + 14]. Mechanics-based modeling Mechanics-based models are used to model the entire needle shaft of the needle and its interactions with the surrounding tissues. The needle is thus often modeled as a 1D beam with a given flexibility that depends on the mechanical properties of the real needle. On the other hand, the tissues are not entirely modeled as is done with finite element modeling (FEM) but only the local interaction with the needle is taken into account. MECHANICS-BASED MODELING Bernoulli beam equations: A first way to model the interaction between the needle shaft and the tissues is to use a set of discrete virtual springs placed along the shaft of the needle, as was done in 2D by Glozman et al. [START_REF] Glozman | Image-guided robotic flexible needle steering[END_REF]. The needle is cut into multiple flexible beams and virtual springs are placed normal to the needle at the intersection of the beam extremities, as depicted in Fig. 2.4a. Knowing the position and orientation of the needle base and the position of the springs, the shape of the needle can be computed using the Bernoulli beam equations for small deflections. Concerning the interaction of an asymmetric needle tip with the tissues, a combination of axial and normal virtual springs can also be used to locally model the deflection of the tip [START_REF] Dorileo | Needle deflection prediction using adaptive slope model[END_REF]. Instead of using discrete springs, the Bernoulli equations can also be applied when the needle-tissue interaction is modeled using a distributed load applied along the needle shaft [KFR + 15], as illustrated in Fig. 2.4b. 
This allows a continuous modeling of the interaction along the needle shaft, resulting in a smoother behavior compared to the successive addition of discrete springs. CHAPTER 2. NEEDLE INSERTION MODELING Energy-based method: An energy-based variational method can also be used, instead of directly using the Bernoulli equations to solve the needle shape. This method, known as the Rayleigh-Ritz method and used by Misra et al. [MRS + 10], consists in computing the shape of the needle that minimizes the total energy stored in the system. It has been shown that this energy is mainly the sum of the bending energy stored in the needle, the deformation energy stored in the tissues and the worksthat are due to tissue cutting at the tip and the insertion force at the base. This method can be combined with different models of the interaction of the needle with the tissues, as long as a deformation energy can be computed. For example, a combination of virtual springs along the needle shaft and continuous load at the end of the needle can be used [START_REF] Roesthuis | Mechanicsbased model for predicting in-plane needle deflection with multiple bends[END_REF]. Different methods are also available to define these continuous loads. They can be computed depending on the distance between the needle and the tissues, as a continuous version of the virtual springs. In this case the position of the tissues can be taken depending on a previous position of the needle shaft [MRS + 10] or tip [KFR + 15]. The continuous load can also directly be estimated online [START_REF] Wang | Mechanics-based modeling of needle insertion into soft tissue[END_REF]. The two methods stated above can also be used along with pseudocontinuous models of the needle instead of the continuous one. In this case the needle is modeled using a succession of rigid rods linked by angular springs which are used to model the compliance of the needle [START_REF] Goksel | Modeling and simulation of flexible needles[END_REF]. Such a model is more simple since the parameters to describe its shape only consist in the angles observed between successive rods, without requiring additional parameters for the shape of these rods. Dynamic behavior: The different models presented in this section mainly allow modeling the quasi-static behavior of a needle inserted in the tissues. The dynamics of the insertion can also be modeled by adding a mass to the needle beams, some visco-elastic properties to the elements modeling the tissues and a model of friction along the shaft [YPY + 09][KRU + 16]. The friction mainly occurs during the insertion of the needle, nevertheless it has been shown that a rotation lag between the needle base and the needle tip could also appear [START_REF] Reed | Modeling and control of needles with torsional friction[END_REF]. Hence a model of torsionnal friction and needle torsion can be added when the needle rotates around its axis [START_REF] Swensen | Torsional dynamics of steerable needles: Modeling and fluoroscopic guidance[END_REF]. However, as stated in previous section 2.1 for kinematic models, each additional layer of modeling requires the knowledge or estimation of new parameters, in addition to the increased computational complexity. Hence, the number of modeled phenomena that can be included depends on the intended use of the model: a high number for offline computations, hence approaching FEM models, or a reduced number to keep real-time capabilities, like kinematic models. 
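Before moving to the models proposed in this thesis, the following self-contained sketch illustrates the common principle behind these mechanics-based approaches on a simplified planar case; it is only an illustration added here, not the implementation used in this work. The needle is discretized into nodes, the quadratic energy (beam bending, tissue springs along the embedded part, and the work of a lateral tip force) is assembled, and its minimizer is obtained by solving a linear system with the Eigen library. All numerical values (bending stiffness, tissue stiffness, tip force) are placeholders chosen for illustration only.

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    const int    N    = 40;       // number of elements along the needle
    const double L    = 0.10;     // needle length [m]
    const double h    = L / N;    // node spacing
    const double EI   = 5e-4;     // bending stiffness E*I [N.m^2] (placeholder)
    const double KT   = 5e4;      // tissue stiffness per unit length [N/m^2] (placeholder)
    const double Ftip = 0.05;     // lateral force applied at the tip [N] (placeholder)
    const int    nIns = N / 2;    // index of the first node inside the tissues

    const int n = N + 1;          // total number of nodes (base nodes clamped below)
    Eigen::MatrixXd A = Eigen::MatrixXd::Zero(n, n);
    Eigen::VectorXd b = Eigen::VectorXd::Zero(n);

    // Bending energy: (EI/2) * sum_i ((y_{i-1} - 2 y_i + y_{i+1}) / h^2)^2 * h
    for (int i = 1; i < N; ++i) {
        const int    idx[3] = {i - 1, i, i + 1};
        const double w[3]   = {1.0, -2.0, 1.0};
        for (int ii = 0; ii < 3; ++ii)
            for (int jj = 0; jj < 3; ++jj)
                A(idx[ii], idx[jj]) += EI * w[ii] * w[jj] / (h * h * h);
    }
    // Tissue springs: (KT/2) * sum over embedded nodes of y_i^2 * h (rest position 0)
    for (int i = nIns; i <= N; ++i) A(i, i) += KT * h;
    // Work of the lateral tip force: -Ftip * y_N  ->  contribution to the right-hand side
    b(N) += Ftip;

    // Clamped base (y_0 = y_1 = 0): keep only the free unknowns y_2 ... y_N.
    const int nf = n - 2;
    Eigen::MatrixXd Af = A.bottomRightCorner(nf, nf);
    Eigen::VectorXd bf = b.tail(nf);
    Eigen::VectorXd yf = Af.ldlt().solve(bf);

    std::cout << "tip deflection: " << yf(nf - 1) * 1e3 << " mm" << std::endl;
    return 0;
}
```

Equality constraints, such as a prescribed base pose, can be appended to this kind of quadratic problem through Lagrange multipliers, which is essentially how the two-body model presented in the next section is solved.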
Generic model of flexible needle In this section we describe and compare two models that we propose for the 3D modeling of a flexible needle with an asymmetric tip interacting with moving soft tissues. These models were designed to provide a quasi-static representation of the whole body of the needle that can be used in a real-time needle steering control scheme. They both use a 1D beam representation for the needle and a local representation for the tissues to keep the computational cost low enough. The first model is inspired from the virtual springs approach presented in section 2.3. This approach is extended to 3D and is used with the addition of a reaction force at the needle tip to take into account an asymmetric geometry of the tip. The second model is a two-body model where the needle interacts with a second 1D beam representing the cut path generated by the needle tip in the tissues. Note that we use 3D models to account for all the phenomena occurring in practice. It would be possible to maintain the trajectory of the needle base in a 2D plane using a robotic manipulator, however, the motions of the tissues occur in all directions and can not be controlled. Therefore the body of a flexible needle can also move in any direction, such that 3D modeling is necessary. Needle tissue interaction model with springs We describe here the first model that we propose and which is inspired from the 2D virtual springs model used in [START_REF] Glozman | Image-guided robotic flexible needle steering[END_REF]. Interaction along the needle shaft: The interaction between the needle and the tissues is modeled locally using n 3D virtual springs placed all along the needle shaft. We define each 3D spring, with index i ∈ [[1, n]], using 3 parameters: a scalar stiffness K i , a rest position p 0,i ∈ R 3 defined in the world frame {F w } (see Fig. 2.5) and a plane P i that contains p 0,i (see Fig. 2.5). The rest position p 0,i of the spring with index i corresponds to the initial location of one point of the tissues when no needle is pushing on it. The plane P i is used to define the point of the needle p N,i ∈ R 3 on which the spring with index i is acting. Each time the model needs to be recomputed, the plane P i is reoriented such that it is normal to the needle and still passes through the rest position p 0,i . This way the springs are only used to model the normal forces F s,i ∈ R 3 applied on the needle shaft, without tangential component. We use an elastic behavior to model the interaction between the needle and the tissues, such that the force exerted by the spring on the point p N,i can be expressed according to F s,i = -K i (p N,i -p 0,i ). (2.3) CHAPTER 2. NEEDLE INSERTION MODELING Figure 2.5: Illustration of the mechanical 3D model of needle-tissue interaction using virtual springs and a reaction force at the tip. The stiffness K i of each spring is computed such that it approximates a given stiffness per unit length K T such that K i = K T l i , (2.4) where l i is the length of the needle that is supported by the spring with index i. This length l i can vary depending on the actual distance between the points p N,i-1 , p N,i and p N,i+1 . For simplicity we consider here that the tissue stiffness per unit length K T is constant all along the needle. However it would also be possible to change the value of K T depending on the depth of the spring in the tissues and therefore consider the case of inhomogeneous tissues or variable tissue geometry. 
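As a small illustration of how each virtual spring contributes to the model, the force of equation (2.3), with the stiffness of equation (2.4), can be evaluated and restricted to its component normal to the local needle direction as follows. This is only a sketch with placeholder values; the spring rest position, the needle point lying in the spring plane and the local needle direction are assumed to be provided by the needle model.

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    const double K_T = 5.0e4;                 // tissue stiffness per unit length [N/m^2]
    const double l_i = 2.0e-3;                // needle length supported by this spring [m]
    const double K_i = K_T * l_i;             // scalar spring stiffness, equation (2.4)

    Eigen::Vector3d p0(0.010, 0.002, 0.050);  // spring rest position p_{0,i} in {F_w}
    Eigen::Vector3d pN(0.011, 0.003, 0.050);  // needle point p_{N,i} in the plane P_i
    Eigen::Vector3d dir(0.05, 0.10, 1.0);     // local needle direction (normal of P_i)
    dir.normalize();

    // Elastic force of equation (2.3), then removal of its tangential component so
    // that the spring only applies a force normal to the needle axis.
    Eigen::Vector3d F  = -K_i * (pN - p0);
    Eigen::Vector3d Fn = F - F.dot(dir) * dir;

    std::cout << "normal spring force [N]: " << Fn.transpose() << std::endl;
    return 0;
}
```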
In practice this parameter should be estimated beforehand or by using an online estimation method to adapt to unknown stiffness changes. However online estimation of K T will not be considered in this work. The needle is then modeled by a succession of n + 1 segments such that the extremities of the segments lay on the planes P i of the virtual springs, except for the needle base that is fixed to a needle holder and the needle tip that is free. Each segment is approximated in 3D using a polynomial curve c j (l) of order r so that c j (l) = M j [ 1 l . . . l r ] T , (2.5) where j ∈ [[1, n + 1]] is the segment index and c j (l) ∈ R 3 is the position of a point of the segment at the curvilinear coordinate l ∈ [0, L j ], with L j the total length of the segment. The matrix M j ∈ R 3×(r+1) contains the coefficients of the polynomial curve. Interaction at the tip: The model defined so far is sufficient to take into account the interaction with the tissues along the needle shaft. However the specific interaction at the tip of the needle still needs to be added. We represent all the normal efforts exerted at the tip of the needle with an equivalent normal force F tip and an equivalent normal torque T tip exerted at the extremity of the needle, just before the beginning of the bevel. In order to model a beveled tip, these force and moment are computed using a model of the bevel with triangular loads distributed on each side of the tip, as proposed by Misra et al. [MRS + 10]. Let us define α as the bevel angle, b as the length of the face of the bevel, a as the length of the bottom edge of the needle tip and β as a cut angle that indicates the local direction in which the needle tip is currently cutting in the tissues as depicted in Fig. 2.6. Note that the point O in Fig. 2.6 corresponds to the last extremity of the 3D curve used to represent the needle. The equivalent normal force F tip and torque T tip exerted at the point O can be expressed as F tip = K T b 2 2 tan(α -β) cos α - K T a 2 2 tan β y, (2.6) T tip = K T a 3 6 tan β - K T b 3 6 tan(α -β) 1 - 3 2 sin(α) 2 x, (2.7) where x and y are the axis of the tip frame {F t } as defined in Fig. 2.6. Tip orientation around the shaft: We assume that the orientation of the base frame {F b } (see Fig. 2.5) is known and that the torsional bending of the needle can be neglected. The first assumption usually holds in the case of robotic needle manipulation, where the needle holder can provide a feedback on its pose. The second assumption however can be debated since it has been shown that stiction along the needle can introduce a lag and an hysteresis between the base and tip rotation [START_REF] Abayazid | 3d flexible needle steering in soft-tissue phantoms using fiber bragg grating sensors[END_REF]. However inserting the needle is usually sufficient to break the stiction and reset this lag [START_REF] Reed | Modeling and control of needles with torsional friction[END_REF]. Hence we assume that the orientation of the tip frame {F t } around the tip axis can directly be computed from the base orientation and the needle shape. Computation of the needle shape: In order to maintain adequate continuity properties of the needle, second order continuity constraints are added, namely defined as c j (L j ) = c j+1 (0), (2.8) dc j dl l=L j = dc j+1 dl l=0 , (2.9) d 2 c j dl 2 l=L j = d 2 c j+1 dl 2 l=0 . 
(2.10) The total normal force F j at the extremity of the segment j can be calculated from the sum of the forces exerted by the springs located from this extremity to the needle tip, so that F j = Π j   F tip + n k=j F s,k   , (2.11) where Π j stands for the projection onto the plane P j . The projection is used to remove the tangential part of the force and to keep only the normal component. This normal force introduces a constant shear force all along the segment and using Bernoulli beam equation we have EI d 3 c j dl 3 (l) = -F j , (2.12) GENERIC MODEL OF FLEXIBLE NEEDLE with E the needle Young's modulus and I its second moment of area. Note that in the case of a radially symmetric needle section the second moment of area is defined as I = Ω x 2 dx dy, (2.13) where the integral is performed over the entire section Ω of the needle. For a hollow circular needle, I can be calculated from the outer and inner diameters, d out and d in respectively, according to I = π 64 (d 4 out -d 4 in ). (2.14) Finally the moment due to the bevel force gives the following boundary condition: EI d 2 c n+1 dl 2 l=L n+1 = T tip × z, (2.15) where T tip is the torque exerted at the tip defined by (2.7) and z is the axis of the needle tip frame {F t } as defined in Fig. 2.6. In practice we expect real-time performances for the model, so the complexity should be as low as possible. We use here third order polynomials (r = 3) to represent the needle such that each polynomial curve is represented by 12 coefficients. It is the lowest sufficient order for which the mechanical equations can directly be solved. From a given needle base pose and a given set of virtual springs, the shape of the needle can then be computed. The needle model is defined by 12 × (n + 1) parameters, corresponding to the polynomial segments coefficients. The continuity conditions provide 9 × n equations. The fact that the segments extremities have to stay in the planes defined by the springs adds n equations and the springs forces in the planes define 2 × n equations. The base position and orientation give 6 additional boundary equations. The tip conditions also give 6 equations due to the tip force and tip moment. So the final shape of the needle is solved as a linear problem of 12 × (n + 1) unknowns and 12 × (n + 1) equations. In practice we used the Eigen C++ library for sparse linear problem inversion. Insertion of the needle: During the insertion, springs are added regularly at the tip to account for the new amount of tissues supporting the end of the needle. Once the last segment of the needle reaches a threshold length L thres , a new spring is added at the tip. The rest position of the spring is taken as the initial position of the tissue before the tip has cut through it, corresponding to point A in Fig. 2.6. The next section presents a second model that we propose, where the successive springs are replaced by a continuous line. A different method is used to solve the needle parameters, allowing a decoupling between the number of elements used to represent the needle and the tissues. CHAPTER 2. NEEDLE INSERTION MODELING Needle tissue interaction model using two bodies In this section we model the interaction between the needle and the tissues as an elastic interaction between two one-dimensional bodies. Needle and tissues modeling: One of the bodies represents the needle shaft and the other one represents the rest position of the path that was cut in the tissues by the needle tip during the insertion (see Fig. 2.7). 
Note that the needle body (depicted in red in Fig. 2.7) actually represents the current shape of the path cut in the tissues, while the tissue body (depicted in green in Fig. 2.7) represents this same cut path without taking into account the interaction with the needle, i.e. the resulting shape of the cut after the needle is removed from the tissues. Both bodies are modeled using polynomial spline curves c, such that
\[
c(l) = \sum_{i=1}^{n} c_i(l), \quad l \in [0, L], \tag{2.16}
\]
\[
c_i(l) = \chi_i(l)\, M_i \begin{bmatrix} 1 \\ l \\ \vdots \\ l^r \end{bmatrix}, \tag{2.17}
\]
where c(l) ∈ R³ is the position of a point at the curvilinear coordinate l, L is the total length of the curve, M_i ∈ R^{3×(r+1)} is a matrix containing the coefficients of the polynomial curve c_i and χ_i is the characteristic function of the curve, which takes the value 1 on the definition domain of the curve and 0 elsewhere. Parameters n and r represent respectively the number of curves of the spline and the polynomial order of the curves. Both can be tuned to find a trade-off between model accuracy and computation time. In the following we add the subscripts or superscripts N and T on the different parameters to indicate that they respectively correspond to the needle and to the tissues.

Computation of the needle shape: For simplicity we assume that the tissues have a quasi-static elastic behavior, i.e. the force exerted on each point of the needle is independent of time and proportional to the distance between this point and the rest cut path. This should be a good approximation as long as the needle remains near the rest cut path, which should be ensured in practice to avoid tissue damage. We note K_T the interaction stiffness per unit length corresponding to this interaction. Given a segment of the needle between curvilinear coordinates l_1 and l_2, the force exerted on it by the tissues can thus be expressed as
\[
F(l_1, l_2) = -K_T \int_{l_1}^{l_2} \big(c_N(l) - c_T(l)\big)\, dl. \tag{2.18}
\]

Figure 2.7: Illustration of the whole needle insertion model (left) and zoom on the tip for different tip geometries (right). Needle segments are in red and the rest position of the path cut in the tissues is in green. New segments are added to the cut path according to the location of the cutting edge of the tip.

It has been shown in previous work [MRS + 09] that almost all of the energy stored in the needle-tissue system consists of the bending energy of the needle E_N and the deformation energy of the tissues E_T. We use the Rayleigh-Ritz method to compute the shape of the needle which minimizes the sum of these two terms. According to the Euler-Bernoulli beam model, the bending energy E_N of the needle can be expressed as
\[
E_N = \frac{EI}{2} \int_0^{L_N} \left\| \frac{d^2 c_N(l)}{dl^2} \right\|^2 dl, \tag{2.19}
\]
where E is the Young's modulus of the needle, I is its second moment of area (see the definition in (2.13)) and L_N is its length. By tuning the parameters E and I according to the real needle, both rigid and flexible needles can be represented by this model. The energy stored in the tissues due to the needle displacement can be expressed as
\[
E_T = \frac{K_T}{2} \int_0^{L_T} \left\| c_N(L_{free} + l) - c_T(l) \right\|^2 dl, \tag{2.20}
\]
where L_free is the length of the free part of the needle, i.e. from the needle base to the tissue surface, and L_T is the length of the path cut in the tissues. We add the constraints imposed by the needle holder, which fix the needle base position p_b and direction d_b, so that
\[
c_N(0) = p_b, \tag{2.21}
\]
\[
\frac{dc_N}{dl}(0) = d_b.
\]
(2.22) Continuity constraints up to order two are also added on the spline coefficients c N i (l i ) = c N i+1 (l i ), (2.23) dc N i dl l=l i = dc N i+1 dl l=l i , (2.24) d 2 c N i dl 2 l=l i = d 2 c N i+1 dl 2 l=l i , (2.25) where l i is the curvilinear coordinate along the needle spline corresponding to the end of segment c N i and the beginning of segment c N i+1 . In order to take into account the length of the tip, which can be long for example for pre-bent tips or beveled tips with small bevel angle, the tip is modeled as an additional polynomial segment added to the needle spline, as can be seen in Fig. 2.7. The corresponding terms are added to the bending energy (2.19) and tissue energy (2.20), similarly to the other segments. The system is then solved as a minimization problem under constraints, expressed as min m E N + E T Am = b, (2.26) where m is a vector stacking all the coefficients of the matrices M i and with matrix A and vector b representing the constraints (2.21) to (2.25). In practice this minimization problem reduces to the inversion of a linear system, so that we also used the Eigen C++ library for sparse linear problem inversion. Tip orientation around the shaft: Similarly to the previous model (see section 2.4.1), we assume that there is no lag between the tip rotation and the base rotation along the needle shaft. This way the orientation of the tip can be computed from the orientation of the base and the shape of the needle. A more complex modeling of the torsional compliance of the needle could however be necessary in the case of a pre-bent tip needle for which the shape of the tip could cause a higher torsional resistance. Insertion of the needle: As the needle progresses in the tissues and the length of the cut path increases, we update the modeled rest cut path by adding new segments to the spline curve. Each time the model is updated, if the needle was inserted more than a defined threshold L thres , a new segment is added such that its extremity corresponds to the location of the very tip of the needle, i.e. where the cut occurs in the tissues. This way the model can take into account the specific geometry of the needle tip. In the case of a symmetric tip, the cut path will stay aligned with the needle axis. On the other hand it will be shifted with respect to the center line of the needle shaft when considering an asymmetric tip, as is depicted in Fig. 2.7, leading to the creation of a force that will pull the needle toward the direction of the cut. It can be noted that external tissue deformations can be taken into account with this kind of modeling. Indeed, deformations of the tissues created by external sources, like tissue manipulation or natural physiological motions (heartbeat, breathing, . . . ), induce modifications of the shape and position of the rest cut path. This, in turn, changes the shape of the needle via the interaction model. External tissue motions will be further studied in the section 3.5 of the next chapter. Another advantage of this model is that the number of polynomial curves of the needle spline is fixed and is independent of the number of curves of the tissue spline. This leads to a better control over the computational complexity of the model compared to the virtual springs approach presented in section 2.4.1. Using the virtual springs model, the number of parameters to compute increases as the needle is inserted deeper into the tissues, due to the progressive addition of springs and needle segments. 
This is an important point for the use of the model in a real-time control framework. In the next section we compare the performances of both models in terms of accuracy of the obtained tip trajectories and computation time. Validation of the proposed models In this section we compare the performances of the models defined previously in terms of accuracy of the representation of the needle behavior. We compare the simulated trajectories of the needle tip obtained with both models to the real trajectories of a needle inserted in soft tissues under various motions of the needle base. We first describe the experiments performed to acquire the trajectories of the base and tip of the needle and then we provide the comparison of these trajectories to the ones generated using both models. Experimental conditions (setup in the Netherlands): We use the needle insertion device (NID) attached to the end effector of the UR3 robot to insert the Aurora biopsy needle in a gelatin phantom, as depicted in Fig. 2.8. The needle is 8 cm outside of the NID and its length does not vary during the insertion. The position of the needle tip is tracked and recorded using the Aurora electromagnetic (EM) tracker embedded in the tip and the field generator. The pose of the needle base, at the tip of the NID (center of frame {F b } in Fig. 2.8), is recorded using the odometry of the UR3 robot. The phantom has a Young modulus of 35 kPa and is maintained fixed during the experiments. Experimental scenarios: Different trajectories of the needle base are performed to test the models in any possible direction of motion. The different insertions are performed at the center of the phantom, such that they do not cross each other. We use 12 different insertion scenarios and repeat each scenario 3 times, leading to a total of 36 insertions. Each scenario is decomposed as follows. The needle is first placed perpendicular to the surface of the phantom such that the tip barely touches the surface. Then the needle is inserted 1 cm in the phantom, by translating the robot along the needle axis. Then a motion of the needle base is applied before restarting the insertion for 5 cm. The applied motion is expressed in the frame of the needle base An example of the measured tip trajectories for each type of base motions can be seen in solid lines in Fig. 2.9 to 2.13. The tip position is expressed in the initial frame of the tip, at the surface of the phantom. Generation of model trajectories: In order to generate the different trajectories of the needle tip using both models, we first set their parameters according to the physical properties of the needle. The needle length is set to 8 cm and the other parameters are set according to the properties of the Aurora needle given in Table 1.1. The polynomial order of the curves is set to r = 3 for both models and the length threshold defining the addition of a virtual spring or tissue spline segment is set to L thres = 1 mm. The length of the needle segments for the two-body model is set to 1 cm, resulting in a total of n = 8 segments. We recall that the number of segments for the virtual springs model varies with the number of springs added during the insertion. One tip trajectory is then generated for both models and each experiment by applying the motion of the base that is recorded during the experiment to the base of the model. 
The value of the model stiffness per unit length K_T is optimized separately for each model such that the final error between the simulated tip positions and the measured tip positions is minimized. Since the insertions are performed in the same phantom and at similar locations, the same value of K_T is used for all experiments. The best fit is obtained with K_T = 49108 N.m⁻² for the two-body model and K_T = 56868 N.m⁻² for the virtual springs model. As mentioned in the previous section, in clinical practice this parameter can be difficult to estimate beforehand and would certainly need to be estimated online. It can be observed in Fig. 2.9 to 2.13 that the tip trajectories measured for similar base motions in symmetric directions are not symmetric. This is due to a misalignment between the axis of the NID, in which the motions are performed, and the real axis of the needle. This misalignment corresponds to a rotation of 1.0° around the axis [0.5 0.86 0]^T expressed in the base frame {F_b}. Similarly, an orientation error of 4.1° is observed between the orientation of the NID around the needle axis and the orientation of the bevel. A correction is thus applied to the needle base pose measured from the robot odometry to obtain the pose that is applied to the modeled needle base. An example of the simulated tip trajectories for each type of base motion can be seen in Fig. 2.9 to 2.13, with long-dashed lines for the virtual springs model and short-dashed lines for the two-body model.

Figure 2.12: Tip position obtained when a rotation is applied around the y axis of the base frame between two insertion steps along the z axis (rotations of 0°, +3° and −3°). Measures are shown with solid lines, virtual springs model with long-dashed lines and two-body model with short-dashed lines.

Figure 2.13: Tip position obtained when a rotation is applied around the z axis of the base frame between two insertion steps along the z axis (rotations of 0°, +90°, −90° and 180°). Measures are shown with solid lines, virtual springs model with long-dashed lines and two-body model with short-dashed lines.

Results: We can observe that both models follow the global behavior of the needle during the insertion. Let us first focus on the effect of the asymmetry of the beveled tip. We can see that during an insertion without lateral base motions, the deviation due to the bevel is well taken into account, as for example in Fig. 2.9b. We observe the same kind of constant curvature trajectories that are usually obtained with kinematic models. However, it can also be seen that during the first few millimeters of the insertion, a lateral translation of the tip occurs, which does not fit the constant curvature trajectory appearing later. This effect is due to the fact that the bevel is cutting laterally while the needle body is not yet embedded in the gelatin. The reaction force generated at the bevel is thus mostly compensated by the stiffness of the needle, which is low due to the length of the needle. This effect can usually be reduced in practice by using a sheath around the body of the flexible needle, such that it can not bend outside the tissues [START_REF] Webster | Design considerations for robotic needle steering[END_REF]. However, in the general case this kind of effect is not taken into account by kinematic models, while it can be represented using our mechanics-based models. Let us now consider the influence of lateral base motions on the behavior of the needle.
We can see that the tip trajectory is modified in the same manner for both models and follows the general trajectory of the real needle tip. Therefore both models can be used to provide a good representation of the whole 3D behavior of a flexible needle inserted in soft tissues. This is a great advantage over kinematic models, which do not consider lateral base motions at all. Concerning the accuracy of the modeling, some limitations seem to appear when the base motion tends to push the surface of the bevel against the tissues. This is for example the case for a positive translation along the y axis of the base frame (green curves in Fig. 2.10b) or a negative rotation around the x axis (blue curves in Fig. 2.11b). In these cases both models seem to amplify the effect of the base motions on the subsequent trajectory of the tip. However, this could also be due to the experimental conditions: a small play between the needle and the NID could indeed cause an attenuation of the motion transmitted to the needle base.

[Figure 2.14: absolute final tip position error (axis graduated from 1 to 7 mm) for each base-motion scenario — None, Tx = +2 mm, Tx = −2 mm, Ty = +2 mm, Ty = −2 mm, Rx = +3°, Rx = −3°, Ry = +3°, Ry = −3°, Rz = +90°, Rz = −90°, Rz = +180° — and the mean over all scenarios.]

Accuracy comparison: Let us now compare the performances of the two models in terms of accuracy. The absolute final error between the tip positions simulated by the models and the tip positions measured during the experiments is summarized in Fig. 2.14. Mean values are provided across the 3 insertions performed for each scenario. The average position error over the insertion process is provided as well in Fig. 2.15. The average is taken over the whole insertion and across the 3 insertions performed for each scenario. We can see that the two-body model provides in each scenario a better modeling accuracy on the trajectory of the needle tip. While both models tend to give similar results in the general case, the virtual springs model seems to particularly deviate from the measures when rotations around the needle axis are involved. This is also clearly visible in Fig. 2.13a. Several reasons may be invoked to explain this result. First, it is possible that the discrete nature of the springs has a negative effect on the modeling accuracy compared to a continuous modeling of the load applied on the needle shaft. However, we believe that this effect is not predominant here since the thresholds chosen for both models (distance between successive springs in one case and length of the segment added to the cut path spline in the other case) were the same and had small values compared to the curvature of the needle. The second possible reason is that the model used to compute the force and torque at the needle tip is not the best way to represent the interaction with the tissues. Indeed, the computation of the continuous loads applied on the sides of the tip does not take into account the real 3D shape of the tip, which has a circular section. The force magnitude is also independent of the orientation of the bevel, which might not be true during the rotation of the needle around its axis, leading to a wrong orientation of the tip when the insertion restarts. Concerning the use of the model in clinical practice, we can see that the final tip positioning error obtained with the two-body model is around 2 mm in each case. This can be sufficient to reach tumors of standard size in open-loop control (a minimal sketch of the offline identification of K_T used here is given below).
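The offline identification of K_T mentioned above reduces to a one-dimensional minimization of the tip error with respect to a single scalar. The following sketch shows how this can be done with a golden-section search; it is only an illustration, with a dummy quadratic error function standing in for a full simulation of the recorded insertions, and the search bounds are placeholder values.

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

// One-dimensional golden-section search used to identify the tissue stiffness
// K_T that minimizes the final tip position error between simulated and
// measured trajectories.
double goldenSection(const std::function<double(double)> &f,
                     double a, double b, double tol = 1.0) {
    const double phi = 0.5 * (std::sqrt(5.0) - 1.0);   // ~0.618
    double c = b - phi * (b - a), d = a + phi * (b - a);
    while (std::fabs(b - a) > tol) {
        if (f(c) < f(d)) { b = d; } else { a = c; }
        c = b - phi * (b - a);
        d = a + phi * (b - a);
    }
    return 0.5 * (a + b);
}

int main() {
    // Dummy error curve with a minimum near 5e4 N/m^2 (stands in for running the
    // needle-tissue model over all recorded insertions with a candidate K_T).
    auto simulateFinalError = [](double K_T) {
        return 2.0e-3 + 1.0e-12 * (K_T - 5.0e4) * (K_T - 5.0e4);
    };
    const double best = goldenSection(simulateFinalError, 1.0e4, 2.0e5);
    std::printf("identified K_T = %.0f N/m^2\n", best);
    return 0;
}
```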
Such open-loop accuracy, however, assumes that the stiffness parameter K_T was well estimated beforehand, which might be difficult in practice and could require the use of an online estimation method.

[Figure 2.15: average tip position error over the insertion (axis graduated from 0 to 3 mm) for each base-motion scenario — None, Tx = +2 mm, Tx = −2 mm, Ty = +2 mm, Ty = −2 mm, Rx = +3°, Rx = −3°, Ry = +3°, Ry = −3°, Rz = +90°, Rz = −90°, Rz = +180° — and the mean over all scenarios.]

Computation time comparison: Let us finally compare the time required to compute both models depending on the number of elements used to represent the tissues, i.e. the number of springs for the virtual springs model and the number of tissue spline segments for the two-body model. The computation times are acquired during a simulation of the needle insertion using the same parameters as in the previous experiments. The results are depicted in Fig. 2.16. It is clearly visible that the number of virtual springs increases the computation time, as could be expected from the fact that the number of needle parameters to compute directly depends on the number of springs. On the contrary, the number of tissue spline segments of the two-body model does not have a significant influence on the computation time, since the number of parameters of the needle spline is fixed and chosen independently, hence the size of the linear problem to solve is also fixed. This is a clear advantage of the two-body model since the computation time can then be independent of the state of the insertion and can also be tuned beforehand to obtain the desired real-time performances. Note that we obtained a computation time of around 2 ms, which is sufficient for real-time computation in our case; however, this could greatly vary and be tuned depending on the available hardware and the number of segments chosen to represent the needle. In conclusion, we will use the two-body model in the following experiments, because it can provide an accurate estimation of the 3D behavior of the needle and can be used for real-time processing due to its deterministic computation time. It is also easier to adapt to different kinds of tip geometry, and the motion of the tissue spline can be used to model external displacements of the tissues.

Conclusion

We presented a review of needle-tissue interaction models separated into three categories corresponding to different cases of use. Typically, kinematic models ignore the interaction of the needle body with the surrounding tissues and only consider the trajectory of the needle tip. Hence they are computationally inexpensive and are well adapted for real-time control of the insertion of needles with asymmetric tips. On the other side, complete models of the needle and tissues based on finite element modeling offer an accurate but complex modeling of insertion procedures. They usually require far more resources, which limits their use to applications where real-time performances are not a priority, such as needle insertion pre-planning or surgical training simulations. In-between these two categories are mechanics-based models, which use local approaches to model the full behavior of a needle being inserted in soft tissues while keeping aside the full modeling of the tissues. They provide a more complete modeling than kinematic models while maintaining good performances for real-time use. In section 2.4 we have proposed two 3D mechanics-based models that give a good representation of the 3D behavior of a needle during its insertion in soft tissues.
In particular, the two-body model that we designed offers a good accuracy for all kinds of motions applied to the base of the needle. Its complexity can also be chosen and is constant during the insertion, which allows tuning the required computation time to achieve desired real-time performances. Therefore we will use this model as a basis for a real-time needle steering framework in chapter 4. The accuracy of the modeling of the needle tip trajectory is however dependent on the estimation of the interaction stiffness between the needle and the tissues. Real tissues are usually inhomogeneous and pre-operative estimation of the stiffness can be difficult, such that online estimation techniques would have to be studied for a use in real clinical practice. In this chapter we only considered the case of stationary tissues, while our model can also handle moving tissues by modifying the position of the curve representing the tissues (rest cut path). Contrary to the motions of the needle base, which can be controlled by the robotic manipulator, the motions of the patient can not be controlled. Therefore an information feedback is necessary to estimate these motions and update the state of the model 2.6. CONCLUSION accordingly. Visual feedback is usually used in current practice to monitor the whole needle insertion procedure and provide a way to see both the needle and a targeted region. Hence we will focus on this kind of feedback in order to design an update method for our model. In particular, 3D ultrasound (US) can provide a real-time feedback on the whole shape of the needle, which can be used to ensure that our model stays consistent with the real state of the insertion procedure. A first step is to design an algorithm to extract the localization of the needle body in the US volumes, which is a great challenge in itself, due to the low quality of the data provided by the US modality. This will be a first point of focus of the next chapter 3. The performances of needle tracking algorithms in terms of accuracy and computation time can be greatly improved by using a prediction of the needle location. Therefore we will also use our model for this purpose, since we have shown that it could predict the trajectory of the needle with a good accuracy. Chapter 3 Needle localization using ultrasound In this chapter we focus on the robust detection and tracking of a flexible needle in 3D ultrasound (US) volumes. In order to perform an accurate realtime control of a flexible needle steering robotic system, a feedback on the localization of the needle and the target is necessary. The 3D US modality is well adapted for this purpose thanks to its fast acquisition compared to other medical imaging modalities and the fact that it can provide a visualization of the entire body of the needle. However the robust tracking of a needle in US volume is a challenging task due to the low quality of the image and the artifacts that appear around the needle. Additionally, even though intraoperative volumes can be acquired, the needle can still move between two volume acquisitions due to its manipulation by the robotic system or the motions of the tissues. A prediction of the needle motion can thus be of a great help to improve the performances of needle tracking in successive volumes. We first provide in section 3.1 several points of comparison between the imaging modalities that are used in clinical practice to perform needle insertions and we motivate our choice of the 3D US modality. 
We then describe the principles of US imaging and the techniques used to reconstruct the final 2D images or 3D volumes in section 3.2. We present in section 3.3 a review of the current methods used to detect and track a needle in 2D or 3D US. We then propose a new needle tracking algorithm in 3D US volumes that takes into account the natural artifacts observed around the needle. We focus on the estimation of the motions of the tissues in section 3.5 and propose a method to update the needle model that we designed in the previous chapter using different measures available on the needle. Tests and validation of the method are then provided in section 3.6. The updated model is then used to improve the performances of the needle tracking across a sequence of US volumes. The work presented in this chapter on the model update from visual feedback was published in an article presented in international conference [START_REF] Chevrie | Online prediction of needle shape deformation in moving soft tissues from visual feedback[END_REF]. Introduction Needle insertion procedures are usually performed under imaging feedback to ensure the accuracy of the targeting. Many modalities are available, among which the most used ones are magnetic resonance imaging (MRI), computerized tomography (CT) and ultrasound (US). In the following we present a comparison of these modalities on several aspects that has led us to consider the use of the US modality instead of the others. Image quality: The main advantage of MRI and CT is that they provide high contrast images of soft tissues in which the targeted lesion can clearly be distinguished from the surrounding tissues. On the other side, the quality of US images is rather poor due to the high level of noise and interference phenomena. However MRI and CT are sensitive to most metallic components, that creates distortions in the images. The presence of a metallic needle in their field of view is thus a source of artifacts that are disturbing for MRI [SCI + 12] or CT [SGS + 16] when compared to US artifacts [START_REF] Reusz | Needlerelated ultrasound artifacts and their importance in anaesthetic practice[END_REF]. This alleviates the main drawback of the US modality. Robotic design constraints: MRI and CT have additional practical limitations compared to US imaging, since their scanners are bulky and require a dedicated room. These scanners are composed of a ring inside which the patient is placed for the acquisition, which reduces the workspace and accessibility to the patient for the surgical intervention. This adds heavy constraints on the design of robotic systems that can be used in these scanners [MGB + 04] [ZBF + 08], in addition to the previously mentioned incompatibility with metallic components, that can even cause security issues in the case of MRI. On the other side, US probes are small, can easily be moved by hand and the associated US stations are easily transportable to adapt to the workspace. Additionally they do not pose particular compatibility issues and can thus be used with a great variety of robotic systems. Acquisition time: The acquisition time of MRI and CT is typically long compared to US and makes them unsuitable for real-time purposes. Using non real-time imaging requires to perform the insertion in many successive steps, alternating between image acquisitions and small insertions of the needle, as classically done with MRI [MvdSvdH + 14] or CT [SHvK + 17]. 
In addition to the increased duration of the intervention, patients are often asked to hold their breath during the image acquisition to avoid motion blur in the image. Therefore, discrepancies arise between the real position of the needle and the target and their position in the image because of the motions of the tissues as soon as the patient restarts breathing. For these reasons, real-time imaging is preferred and can be achieved using the US modality. High acquisition rates are usually obtained with 2D US probes, as tens of images can typically be acquired per second. A 3D image, corresponding to an entire volume of data, can also be acquired at a fast rate using matrix array transducers or at a lower frame rate using more conventional motorized 3D US probes. In conclusion, US remains the modality of choice for real-time needle insertion procedures [START_REF] Chapman | Visualisation of needle position using ultrasonography[END_REF]. Hence in the following of this chapter we will focus on the detection and tracking of a needle using the US feedback acquired by 3D probes. In the next section we present the general principles of US imaging. Ultrasound imaging Physics of ultrasound Ultrasound (US) is a periodic mechanical wave with frequency higher than 20 kHz that propagates by producing local changes of the pressure and position in a medium. The principle of US imaging is to study the echos reflected back by the medium after that an initial US pulse has been sent. Wave propagation: Most imaging devices assume that soft tissues behave like water, due to the high proportion of water they contain. The speed c of US waves in liquids can be calculated according to the Newton-Laplace equation c = K ρ , (3.1) where K is the bulk modulus of the medium and ρ its density. Although variations of the local density of the tissues introduce variations of the speed of ultrasound, it is most of the time approximated by a constant c =1540 m.s -1 . When the wave encounters an interface between two mediums with different densities, a part of the wave is reflected back while the rest continues propagating through the second medium. The amplitudes of the reflected and transmitted waves depend on the difference of densities. This is the main phenomenon used in US imaging to visualize the variations of the tissue density. Image formation: An US transducer consists in an array of small piezoelectric elements. Each element can vibrate according to an electric signal sent to them, creating an US wave that propagates through the medium in the form of a localized beam. Each beam defines a scan line on which the variation of the tissue density can be observed. The elements also act as receptors, creating an electric signal corresponding to the mechanical deformations applied to them by the returning echos. These electric signals are recorded for a certain period of time after a short sinusoidal pulse was applied to an element, giving the so-called radio frequency signal. Considering an interface that is at a distance d from the wave emitter, an echo will be observed after a time T = 2d c , (3.2) corresponding to the time needed by the pulse to propagate to the interface and then come back to the transducer. The position corresponding to an interface can thus directly be calculated from the delay between the moment the US pulse was sent and the moment the echo was received by the transducer, as illustrated in Fig. 3.1. 
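As a quick numerical illustration of equations (3.1) and (3.2), the following snippet converts an echo delay into an interface depth and gives the round-trip time for an interface located 10 cm away, assuming the usual constant speed of sound of 1540 m/s; the chosen delay value is arbitrary.

```cpp
#include <cstdio>

// Numerical illustration of equation (3.2): depth of an interface from the echo
// delay, assuming a constant speed of sound in soft tissues.
int main() {
    const double c = 1540.0;                 // assumed speed of sound [m/s]
    const double depth = 0.10;               // interface depth [m]
    const double t_echo = 2.0 * depth / c;   // round-trip time, equation (3.2)
    std::printf("round-trip time for a %.0f cm deep interface: %.1f us\n",
                depth * 100.0, t_echo * 1e6);

    const double delay = 65e-6;              // example measured delay [s]
    std::printf("interface depth for a %.0f us delay: %.1f mm\n",
                delay * 1e6, 0.5 * c * delay * 1e3);
    return 0;
}
```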
The radio frequency signal can then be transformed into a suitable form for an easy visualization of the tissue density variations along each scan line. Acquisition frequency: The acquisition frequency depends on the number of piezoelectric elements n p and the desired depth of observation d o , i.e. the length of the scan lines. The radio frequency signals corresponding to 3.2. ULTRASOUND IMAGING each scan line must be recorded one after another in order to avoid mixing the echos corresponding to adjacent scan lines. The time T line required to acquire the signal along one scan line is given by T line = 2d o c . (3.3) The total time T acq of the 2D image acquisition is then T acq = n p T line . (3.4) A typical acquisition using a transducer with 128 elements and an acquisition depth of 10 cm would take 16.6 ms, corresponding to a frame rate of 60 images per second. This is a fast acquisition rate that can be considered real-time for most medical applications. Image resolution: The axial resolution of the US imaging system is the minimum distance between two objects in the wave propagation direction that allows viewing them as two separate objects in the image. This directly corresponds to the wavelength of the wave propagating in the medium. Wavelength λ and frequency f are directly related to each other by the speed of the wave according to λ = c f . (3.5) Therefore a higher axial resolution can be achieved through a higher frequency. Standard US systems usually use a frequency between 1 MHz and 20 MHz, corresponding to an axial resolution between 1.54 mm and 77 µm. The lateral resolution is the minimum distance between two objects perpendicular to the wave propagation direction that allows viewing them as two separate objects in the image. A threshold value for this resolution is first set by the distance between the different scan lines, which depends on the geometry of the transducer. For linear transducers, the piezoelectric elements are placed along a line, such that all scan lines are parallel and the threshold directly corresponds to the distance between the elements. For convex transducers, the elements are placed along a circular surface, such that the scan lines are in a fan-shape configuration and diverge from the transducer, as illustrated in Fig. 3.2. The threshold thus corresponds to the distance between the elements at the surface of the transducer but then grows with the depth due to the increasing distance between the scan lines. Hence, this configuration allows a larger imaging region far from the transducer, but at the expense of a resolution decreasing with the depth. However another factor that determines the real lateral resolution is the width of the US beam. This varies with depth and depends on the size of the piezoelectric elements and the wave frequency. The wave focuses into a narrow beam only for a short distance from the emitter, called the near zone or Fresnel's zone. Then the wave tends to diverge, leading to a wide beam in the far zone or Fraunhofer's zone, as illustrated in Fig. 3.3. The width of the beam in the near zone is proportional to the size of the piezoelectric element, while the length of the zone decreases with this size. The length of the near zone can be increased by using a higher frequency. The lateral resolution is also often modified by using several adjacent elements with small delays instead of only one at a time. 
This generates a beam focused at a specified depth, but also causes the far zone width to increase faster after the focus depth. Similarly, an out of plane resolution can be defined, which determines the actual thickness of the tissue slice that is visible in the image. This resolution also varies with depth depending on the size of the piezoelectric elements and the frequency, as illustrated in Fig. 3.3. An acoustic lens is often added to the surface of the probe to focus the beams at a given depth. ULTRASOUND IMAGING Observation limitations: Other factors can modify the quality of the received US wave. Attenuation: The viscosity of soft tissues is responsible for a loss of energy during the US wave propagation [START_REF] Wells | Absorption and dispersion of ultrasound in biological tissue[END_REF]. This loss greatly increases with the wave frequency, such that a trade-off has to be made between the spatial resolution and the attenuation of the signal. Speckle noise: Due to the wave characteristics of US, diffraction and scattering also occur when the wave encounters density variations in the tissues that have a small size compared to its wavelength. The US beam is then reflected in many directions instead of one defined direction. This results in textured intensity variations in the US images known as speckle noise, as can be seen in Fig. 3.1. While it has sometimes been used for tissue tracking applications as in [START_REF] Krupa | Real-Time Tissue Tracking with B-Mode Ultrasound Using Speckle and Visual Servoing[END_REF][KFH09], speckle noise is generally detrimental to a good differentiation between the different structures in the tissues, such that filtering is often performed to reduce its intensity. Shadowing: At the level of an interface between two media with very different densities it can be observed that the US wave is almost entirely reflected back to the transducer. The intensity of the transmitted signal is then very low, such that the intensity of the echos produced by the structures that are behind the interface are greatly reduced. This causes the appearance of a shadow in the US image that greatly limits the visibility of the structures behind a strong reflector. This can be seen on the bottom of the US image in Fig. 3.1, which is mostly dark due to the presence of reflective interfaces higher in the image. Particular artifacts can also appear around a needle due to its interaction with the US wave. These artifacts can greatly affect the appearance of the needle in 2D or 3D US images, such that they should be taken into account to accurately localize the needle. This kind of artifacts will be the focus of section 3.3.1. Now that we have seen the principles of US signals acquisition, we describe in the following how the acquired data are exploited to reconstruct the final image or volume. Reconstruction in Cartesian space The radio frequency signal acquired by the piezoelectric elements must be converted into a form that is suitable for the visualization of the real shape of the observed structures in Cartesian space. This conversion should take into account the natural attenuation of the ultrasound (US) signal during its travel in the tissues as well as the geometric arrangement of the different scan lines. We describe in the followings the process that is used to transform the radio frequency signal into a 2D or 3D image. Reconstruction of 2D images We first consider the case of a reconstructed 2D image, called B-mode US image. 
The signal is first multiplied by a depth-dependent gain to compensate for the attenuation of the US signal during its travel through the tissues. The amplitude of the signal is then extracted using envelope detection. This removes the sinusoidal carrier of the signal and only keeps the component that depends on the density variations of the tissues. The signal is then sampled to enable further digital processing. The sampling frequency is chosen to respect the Nyquist sampling criterion, i.e. it should be at least twice the frequency of the US wave. This frequency is typically 20 MHz or 40 MHz for current 2D US probes. A logarithmic compression of the intensity levels is then usually applied to the samples to facilitate the visualization of high and low density variations in the same image. The samples can be stored in a table with each line corresponding to the samples of a same scan line. The resulting image is called the pre-scan image. The samples then need to be mapped to their corresponding position in space to reconstruct the real shape of the 2D slice of tissue being observed, which constitutes the post-scan image. Let N_s be the number of samples along a scan line and N_l the number of scan lines. Each sample can be attributed two indexes i ∈ [[0, N_s − 1]] and j ∈ [[0, N_l − 1]] corresponding to its placement in the pre-scan image, with j the scan line index and i the sample index on the scan line. The shape of the reconstructed image depends on the geometry of the arrangement of the piezoelectric elements. For linear probes, the piezoelectric elements are placed on a straight surface, such that the scan lines are parallel to each other. In this case the coordinates x and y of a sample in the post-scan image (see Fig. 3.4) can directly be calculated with a scaling and offset such that

x = L_s i, (3.6)
y = L_p (j − (N_l − 1)/2), (3.7)

where L_s is the physical distance between two samples on a scan line and L_p is the distance between two piezoelectric elements of the transducer. The physical distance L_s between two samples on a scan line depends on the sampling frequency f_s such that

L_s = c / f_s. (3.8)

For a convex probe with radius R, the physical position of a sample in the probe frame can be expressed in polar coordinates (r, θ) according to

r = R + L_s i, (3.9)
θ = (L_p / R) (j − (N_l − 1)/2). (3.10)

This can be converted to Cartesian coordinates using

x = r cos(θ), (3.11)
y = r sin(θ). (3.12)

In practice the post-scan image is defined in Cartesian coordinates with an arbitrary resolution, such that the mapping should actually be done the opposite way. Each pixel (u, v) in the image corresponds to a physical position (x, y) in the imaging plane of the probe such that

u = (x − x_min) / s, (3.13)
v = (y − y_min) / s, (3.14)

where (x_min, y_min) is the position of the top left corner of the image in the probe frame, as depicted in Fig. 3.4, and s is the pixel resolution of the image. The value of each pixel is computed by finding the value in the pre-scan image at the position (i, j) corresponding to the physical position (x, y). For linear probes this is computed according to

i = x / L_s, (3.15)
j = (N_l − 1)/2 + y / L_p, (3.16)

while for convex probes it results in

i = (√(x² + y²) − R) / L_s, (3.17)
j = (N_l − 1)/2 + atan2(y, x) R / L_p. (3.18)

In practice this process gives non-integer values for i and j, while the pre-scan data are only acquired for integer values of i and j.
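As an illustration of this inverse mapping, the sketch below implements equations (3.13)–(3.18) for a hypothetical linear or convex probe; the geometric parameters (sampling distance, element pitch, radius) are placeholder values and the function name is ours.

```python
import math

def prescan_indices(u, v, probe="linear", s=3.0e-4, x_min=0.0, y_min=-1.9e-2,
                    L_s=3.85e-5, L_p=3.0e-4, N_l=128, R=6.0e-2):
    """Map a post-scan pixel (u, v) to the (generally non-integer) pre-scan
    indices (i, j), following (3.13)-(3.18). Lengths are in meters and the
    default values are placeholders, not the parameters of a real probe."""
    # Pixel (u, v) -> physical position (x, y) in the probe frame, (3.13)-(3.14)
    x = x_min + s * u
    y = y_min + s * v
    if probe == "linear":
        i = x / L_s                                      # (3.15)
        j = (N_l - 1) / 2 + y / L_p                      # (3.16)
    else:  # convex probe of radius R
        i = (math.hypot(x, y) - R) / L_s                 # (3.17)
        j = (N_l - 1) / 2 + math.atan2(y, x) * R / L_p   # (3.18)
    return i, j

i, j = prescan_indices(120, 45)
print(i, j)   # non-integer indices: an interpolation step is required
```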
Therefore an interpolation process is necessary to compute the actual value I post (u, v) of the pixel in the post-scan image I post from the available values in the pre-scan image I pre . Different interpolation techniques can be used: • Nearest neighbor interpolation: the pixel value is taken as the value of the closest prescan sample: I post (u, v) = I pre ([i], [j]), (3.19) where [.] denotes the nearest integer operator. This process is fast but leads to a pixelized aspect of the post-scan image. • Bi-linear interpolation : the pixel value is computed using a bi-linear interpolation between the four closest neighbors: I post (u, v) = (1-a)(1-b) I pre ( i , j ) + a (1-b) I pre ( i + 1, j ) +(1-a) b I pre ( i , j +1) + a b I pre ( i + 1, j +1), (3.20) with a = i -i , (3.21) b = j -j , (3.22) and where . denotes the floor operator. This process provides a smoother resulting image while still remaining relatively fast to compute. • Bi-cubic interpolation : the pixel value is computed using a polynomial interpolation between the 16 closest neighbors. This process provides a globally smoother resulting image and keeps a better definition of edges than bi-linear interpolation. However it involves longer computation times. ULTRASOUND IMAGING Reconstruction of 3D volumes The 2D US modality only provides a feedback on the content of a planar zone in the tissues. In order to visualize different structures in 3D, it is necessary to move the US probe along a known trajectory and reconstruct the relative 3D position of the structures. In the case of needle segmentation, it is possible that only a section of the needle is visible, when the needle is perpendicular to the imaging plane. In this case it is difficult to know exactly which point along the needle corresponds to this visible section. Even when a line is visible in the image, it is possible that the needle is only partially visible and partially out of the imaging plane. It can lead to erroneous conclusions about the real position of the needle tip, which could lead to dramatic outcomes if used as input for an automatic needle insertion algorithm. To alleviate this issue, automatic control of the probe position can be performed to maintain the visibility of the needle in the image [CKM13] [START_REF] Mathiassen | Visual servoing of a medical ultrasound probe for needle insertion[END_REF]. However this is not always possible if the needle shaft does not fit into a plane due to its curvature, which is often the case for the flexible needles that we use in the following. Three-dimensional US probes have been developed to provide a visual feedback on an entire 3D region of interest [START_REF] Huang | A review on real-time 3d ultrasound imaging technology[END_REF]. This way entire 3D structures can be visualized without moving the probe. Two main technologies are available: matrix array transducers and motorized transducers. Matrix array transducers: This technology is a 3D version of the classical 2D transducers. It uses a 2D array of piezoelectric elements placed on a surface instead of only one line. Similarly to 2D probes, the 3D matrix probes can be linear or bi-convex depending on whether the surface is planar or curved. They also provide the same fast acquisition properties, with a volume acquisition rate that is proportional to the number of elements of the array. 
However, due to the complexity of manufacturing, current probes only have a limited number of piezoelectric elements in each direction compared to 2D transducers, which limits the resolution and field of view that can be achieved. Motorized transducers: Also known as wobbling probes, these probes consist in a classical 2D US transducer attached to a mechanical part that applies a rotational motion to it. A series of 2D US images are acquired and positioned into a volume using the known pose of the transducer at the time of acquisition. In the following we consider the case of a sweeping motion such that the imaging plane of the transducer moves in the out of plane direction. In this case, the resolution of the volume is different in all directions. It corresponds to the resolution of the transducer in the imaging plane, while the resolution in the sweeping direction depends on the frame rate of the 2D transducer and the velocity of the sweeping. The acquisition rate is also limited by the duration of the sweeping motion, such that a trade-off has to be made between the resolution in the sweeping direction and the acquisition rate. Since an indirect volume scanning is made, some motion artifacts can appear in the volume, due to the motion of the tissues or the probe during the acquisition. Similarly to the 2D case, a post-scan volume with arbitrary voxel resolution can be reconstructed from the acquired pre-scan data. Each voxel (u, v, w) in the post-scan volume corresponds to a physical position (x, y, z) in the probe frame, such that u = x -x min s , (3.23) v = y -y min s , (3.24) w = z -z min s , (3.25) where (x min , y min , z min ) is the position of the top left front corner of the reconstructed volume in the probe frame and s is the voxel resolution of the image. The value of each voxel is computed by finding the value in the prescan image at position (i, j, k) corresponding to the physical position (x, y, z). It should be noticed that the center of the transducer does usually not lie on the axis of rotation of the motor, such that all acquired scan lines do not cross at a common point, which increases the complexity of the geometric reconstruction of the volume. We define R m the radius of the circle described by the center point on the transducer surface during the sweeping. The position (x, y, z) of a point in space will be defined with respect to the center of rotation O m of the motor, which is fixed in the probe frame, while the center of the transducer O t can translate, as depicted in Fig. 3.5. This leads to x = (r cos(θ) -R + R m ) cos(φ), (3.26) y = r sin(θ), (3.27) z = (r cos(θ) -R + R m ) sin(φ), (3.28) where r and θ are the polar coordinates of the sample in the transducer frame and φ is the current sweeping angle of the motor (see Fig 3 .5). The volume can be reconstructed by assuming that equiangular planar frames are acquired. However, since the scan lines are acquired one after another by the transducer, the sweeping motion introduces a small change in the direction of successive scan lines. Therefore the scan lines are not co-planar and some motion artifacts can appear in the reconstructed volume. 
In order to avoid these artifacts, we reconstruct the volume using the exact orientation of the scan lines, such that r, θ and φ can be computed according to [START_REF] Lee | Intensity-based visual servoing for non-rigid motion compensation of soft tissue structures due to physiological motion using 4d ultrasound[END_REF]: r = R + L s i, (3.29) θ = L p R j - N l -1 2 , (3.30) φ = δφ k + j N l - N f N l -1 2N l , (3.31) where N f is the number of frames acquired during one sweeping motion, δφ is the angular displacement of the transducer between the beginning of two frame acquisitions and is equal to 1 if the sweeping motion is performed in the positive z direction and -1 in the negative one. Finally the position (i, j, k) in the pre-scan data corresponding to the CHAPTER 3. NEEDLE LOCALIZATION USING ULTRASOUND physical position (x, y, z) in the probe frame can be calculated according to i = r -R L s , (3.32) j = N l -1 2 + R L p θ, (3.33) k = N f N l -1 2N l - j N l + φ δφ , (3.34) with r = R -R m + x 2 + z 2 2 + y 2 , (3.35) θ = atan2 y, R -R m + x 2 + z 2 , (3.36) φ = atan2(z, x). (3.37) As in the 2D case, voxel interpolation is necessary. The same techniques can be used: nearest neighbor interpolation still requires only one voxel while tri-linear interpolation requires 8 voxels and tri-cubic interpolation requires 64 voxels. Due to the high number of voxels and the increased dimension of the interpolation, the conversion to post-scan can be time consuming and often requires hardware optimization to parallelize the computations and achieve reasonable timings. Once the US volume has been reconstructed, the different structures present in the tissues can then be observed. In particular, the real 3D shape of a needle can be detected in Cartesian space. Needle detection in ultrasound Robust needle tracking using ultrasound (US) has the potential to make possible robotic needle guidance. Due to the high density of a metallic needle, a strong echo is generated, such that the needle appears as a very bright line in US images. However detecting a needle in US images is still a challenging task due to the overall noisy nature of the images. In this section we present the common factors that may hinder a detection algorithm, as well as an overview of current ultrasound-based methods used for the detection and tracking of a needle in 2D or 3D US image feedback. Ultrasound needle artifacts We describe here several phenomena that are typically observed in ultrasound (US) images and that are specific to the presence of a needle in the field of view of the probe [START_REF] Reusz | Needlerelated ultrasound artifacts and their importance in anaesthetic practice[END_REF]. These phenomena creates artifacts that can limit the performances of a needle detection algorithm. An illustration of the different artifacts is shown in Fig. 3.6 and a picture of a needle observed in 3D US can be seen in Fig. 3.7. Reflection: The direction in which a US beam is reflected at an interface between two media with different densities depends on the angle of incidence of the wave on the interface. In some cases the beam can be reflected laterally such that the echo never returns to the transducer [START_REF] Reusz | Needlerelated ultrasound artifacts and their importance in anaesthetic practice[END_REF]. This effect reduces the visibility of the needle when the insertion direction is not perpendicular to the propagation of the US wave. 
This can be particularly visible with convex probes, for which the beam propagation direction is not the same at the different locations on the image, resulting in a variation of the intensity of the observed needle. This effect can be reduced by using echogenic needles with a surface coating that reflects the US beam in multiple directions. Special beam steering modes are also available on certain US probes, for which all elements of the transducer are activated with small delays to create a wave that propagates in a desired direction. This can be used to enhance the visibility of the needle when its orientation is known, as was done in [START_REF] Hatt | Enhanced needle localization in ultrasound using beam steering and learning-based segmentation[END_REF]. Reverberation/Comet tail artifact: The high difference between the density of the needle and the density of soft tissues induces a high reflection of the US wave at the interface. This occurs on both sides of the needle and in each direction, such that a part of the wave can be reverberated multiple times between the two walls of the needle. Multiple echos are subsequently sent back to the transducer with a delay depending on the distance between the walls and the number of reflections inside the needle. Since the image is reconstructed using the assumption that the distance from the probe is proportional to the time needed by the wave to come back to the transducer Side lobes artifacts Reverberation / Comet tail artifact Re¡ection attenuation (see (3.2)), the echos created by the reflections inside the needle are displayed as if they came from an interface located deeper after the needle. Hence a comet tail artifact can be observed in a cross-sectional view of the needle, due to the appearance of a bright trailing signal following the real position of the needle. Beam width/Side lobes artifact: Due to the width of the US beam, it is possible that the needle is hit by several beams corresponding to different scan lines. This results in a needle that apparently spreads laterally and is larger than its real diameter. Similarly, the piezoelectric elements can emit parasitic side beams in addition to the main beam. The amplitude of the wave in the side beams is usually smaller than the amplitude of the main beam, which limits the influence that they have on the final image due to the attenuation in the tissues. However, strong reflectors like a needle may reflect the quasi-totality of the side beams, creating strong echos coming back to the transducer which are interpreted as returning from the main beam during the reconstruction of the image. This creates further lateral spread of the apparent position of the needle, as can be seen in Fig. 3.7. Needle detection algorithms Many image processing techniques have been proposed over the last decade to detect a needle in 2D or 3D ultrasound (US) images. Needle detection in 2D US is challenging because of the missing third dimension. The needle can be only partially visible and it is not always possible to ensure that it is entirely in the imaging plane. On the opposite, while the data acquired by a 3D US probe usually require some processing to be visualized in a comprehensible way by a human operator, they can easily by used directly by a computer process to detect the 3D shape of the needle. However it usually requires more computation due to the increased dimension of the image. Tracking algorithms have also been proposed to find the position of the needle across a sequence of images. 
These algorithms usually use a detection algorithm that is applied on each newly acquired image. The result is enhanced by using a temporal filtering of the output of the detection algorithm, typically a Kalman filter, or a modification of the detection algorithm to take into account the position of the needle in the previous images. In the following we present an overview of the general techniques that are used for needle tracking in 2D or 3D. Needle detection algorithms generally follow the same order of steps performed on the image: • a pre-filtering of the image to enhance the needle visibility and remove some noise, • a binarization of the image to select a set of potential points belonging to the needle, • a shape fitting step to find the final localization of the needle. Image pre-filtering: Smoothing of the image is often performed to filter out the speckle noise that is present in the image. This process also reduces the sharpness of the edges, which can be detrimental to find the boundaries of the needle. Median filtering is sometimes preferred to achieve noise smoothing while keeping a good definition of the edges. In order to enhance the separation between the bright needle and the dark background, a modification of the pixel intensity levels can then be used. This can be achieve in many ways, such as histogram equalization [PZdW + 14] or exponential contrast enhancement [WRS + 16]. In the case where the needle is co-planar with the imaging plane and a guess of its orientation is known, a specific filter can be used to enhance the visibility of the linear structures with a given orientation. For example an edge-detector can be used in a given direction [OEC + 06]. Gabor filtering is also often used in 2D [START_REF] Kaya | Needle localization using gabor filtering in 2d ultrasound images[END_REF] or in 3D [PZdW + 14]. Image binarization: A threshold is applied to the pre-processed image to keep only the points that have a good probability of belonging to the needle. Otsu's thresholding method can be used to automatically find an optimal threshold that separates two classes of intensities [START_REF] Otsu | A threshold selection method from gray-level histograms[END_REF]. However this method can yield poor performances if the background itself has several distinct levels of intensity. This can occur on non-filtered images when low reflective structures, with a black appearance, are present within normal tissues, with a gray appearance. Therefore the value of the threshold is mostly tuned manually to a pre-defined value or such that a certain percentage of the total number of point is kept, which can introduce a great variability in the performances of the algorithms. In [START_REF] Neubach | Ultrasound-guided robot for flexible needle steering[END_REF] needle tracking is performed using the difference between two successive US images. The resulting image presents a bright spot at the location where the needle tip progressed in the tissues, allowing to obtain a small set of pixels corresponding to the position of the needle tip after the thresholding process. However this method can only be used if the probe and the tissues are stationary, such that the motions in the images are only due to the needle. Doppler US imaging is used to display the velocity of moving tissues instead of their density variations. Therefore, it can be used to naturally reduce the number of structures with the same intensities as the needle by applying fast vibrations to the needle. 
An active vibrator attached near the base of the needle is used in [START_REF] Adebar | 3d ultrasoundguided robotic needle steering in biological tissue[END_REF] to produce these vibrations. The rotation of the needle around its axis is used in [START_REF] Mignon | Using rotation for steerable needle detection in 3d color-doppler ultrasound images[END_REF] to create the same amount of motion all along the needle, avoiding the attenuation of the vibrations along the needle shaft that can be observed when using a vibrator at the needle base. Needle shape fitting: Given a set of segmented needle points, a first decimation is often performed to remove obvious outliers. This typically involves morphological transformations, like a succession of erosions and dilatations, to remove the groups of pixels that are too small to possibly represent a needle. Fusion with Doppler US modality can also be used to get additional information in the needle location and remove outliers. Many methods can then be used to find the position of the needle depending on its configuration in the image. When using a 2D US probe, the needle can first be perpendicular to the imaging plane, such that only a section of the shaft is visible and can be tracked. In [WRS + 16] the set of pixels corresponding to the measured needle section is used as the input for a Kalman filter to estimate the point of the set that represents the real center of the needle cross section. The comet tail artifact exhibited by the needle can also be exploited to find the needle position. In [VAP + 14] and [AVP + 14] needle tracking is achieved by using the Hough transform applied to the pixel set to find the best line fitting the tail of the artifact. The center of the needle is then taken as the topmost extremity of the line and translated by a length corresponding to the radius of the needle. A similar process is used in [SRvdB + 16] where Fourier descriptors are used to find the center of the needle in the comet tail artifact. In the case where the needle shaft is in the imaging plane of a 2D probe or the field of view of a 3D probe, line detection algorithms are used to find the best fit of the needle. A Hough transform is used in [OEC + 06] to find the best group of points that fits a linear shape and remove all of the other outliers. A polynomial fitting is then performed with the remaining points to find the final shape of the needle. A now wide-spread method for line or polynomial fitting is the Random Sample Consensus (RANSAC) algorithm. The principle of the algorithm is to take a random sample of points and to build a polynomial fitting from this sample. The quality of the sample is assessed by the total number of points of the set that fits the obtained polynomial. The process is repeated many times and the fitting containing the maximum of points is taken as the result of the needle detection process. This algorithm is quite robust to outliers since a polynomial fitting from a sample containing outliers is likely to fit poorly to the real inliers. Such an algorithm was used in 2D US in [START_REF] Kaya | Needle localization using gabor filtering in 2d ultrasound images[END_REF] after a Gabor filtering and Otsu's automatic thresholding. It can also easilly be applied to 3D US, as done in [START_REF] Uherčík | Model fitting using ransac for surgical tool localization in 3-d ultrasound images[END_REF] after applying a simple threshold. 
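For reference, the following sketch shows a minimal RANSAC polynomial fit on a set of 2D candidate needle points; it is only a schematic version of the algorithms cited above, with an arbitrary inlier threshold and iteration count, and a real implementation would rather fit a 3D curve parameterized by its arc length.

```python
import numpy as np

def ransac_polyfit(points, degree=2, n_iter=200, inlier_dist=1.0, rng=None):
    """Fit y = p(x) to 2D candidate needle points (N x 2 array) with RANSAC.
    Returns the polynomial coefficients and the boolean inlier mask."""
    rng = np.random.default_rng() if rng is None else rng
    x, y = points[:, 0], points[:, 1]
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        # Random minimal sample, candidate fit and inlier count
        sample = rng.choice(len(points), size=degree + 1, replace=False)
        coeffs = np.polyfit(x[sample], y[sample], degree)
        inliers = np.abs(np.polyval(coeffs, x) - y) < inlier_dist
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final refit on all inliers of the best sample
    best_coeffs = np.polyfit(x[best_inliers], y[best_inliers], degree)
    return best_coeffs, best_inliers

# Synthetic example: a curved needle plus a few bright outliers
x = np.linspace(0, 50, 60)
pts = np.column_stack([x, 0.01 * x**2 + 0.2 * x])
pts = np.vstack([pts, np.random.default_rng(0).uniform(0, 50, (15, 2))])
coeffs, inliers = ransac_polyfit(pts, degree=2)
```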
Due to the stochastic nature of the algorithm, there is no guaranty that the final result of the algorithm contains only inliers, and inconsistent results can be obtained if the algorithm does not run for a sufficient amount of time. The algorithm can be made faster and more consistent by minimizing the number of outliers present in the initial set of points. This can for example be done by using a needle enhancing pre-filtering like a 3D Gabor filter [PZdW + 14]. For needle tracking in a US sequence, a temporal filtering can be used to filter the output of the RANSAC algorithm and to predict a region of interest in which the needle should lie. A Kalman filter was used for 3D needle tracking in [START_REF] Chatelain | Real-time needle detection and tracking using a visually servoed 3d ultrasound probe[END_REF] and improved with a mechanics-based model to predict the motion of the needle between two acquisitions in [START_REF] Mignon | Beveled-tip needlesteering using 3d ultrasound, mechanical-based kalman filter and curvilinear roi prediction[END_REF]. Direct approach: Some approaches use directly the image intensity to localize the needle without relying on a prior thresholding of the image. For example, projective methods consist in calculating the integral of the image intensity along a curve that represents an underlying model of the object sought in the image. The curve with the highest value for the integral is selected as the best representation. The most known projective method is the Hough transform, which uses straight lines as the model. The generalized Radon transform, using polynomials, can be used to track a needle in a 3D US volume, as performed in [START_REF] Neshat | Real-time parametric curved needle segmentation in 3d ultrasound images[END_REF]. Due to the high number of possible configurations for the model in the image, projective methods are highly computationally expensive, especially in 3D. In [OEC + 06] detection rays are first traced perpendicular to an estimation of the needle direction and an edge detector is run along each ray to only keep one pixel along each ray. This way the set of possible pixels belonging to the needle has a fixed and relatively small size. The Hough transform is then use to find a line approximation fitting the maximum of points. A polynomial fitting is finally performed to find the best shape of the needle. The pixel intensities can also be used directly with template matching to provide a fast tracking of the needle tip, as is done in [START_REF] Kaya | Visual tracking of biopsy needles in 2d ultrasound images[END_REF]. An artificial neural network is used in [START_REF] Rocha | Flexible needles detection in ultrasound images using a multi-layer perceptron network[END_REF] to directly compute for each pixel in a region of interest the probability that this pixel belongs to the needle. A particle filter is used in [START_REF] Chatelain | 3d ultrasoundguided robotic steering of a flexible needle via visual servoing[END_REF] to locate a needle in 3D US. Each particle consists in a 3D polynomial curve that is directly projected in the 3D US volume. The probability that a particle corresponds to the real needle in the volume is computed using the sum of the intensities of the voxels along the curve and an additional term for the tip detection. In the following section, we present the needle tracking algorithm that we use in order to take into account the different points mentioned previously. 
We choose a direct approach to avoid the tuning of a threshold for a segmentation step, and we use a method that only considers a limited set of points in the image in order to keep a reduced computational complexity. Intensity-based needle tracking In this section we present the tracking algorithms that we designed to localize the 3D position of the needle shaft using stereo cameras or 3D ultrasound (US). Both algorithms use directly the intensity value of the pixels or voxels located near the previous position of the needle to find its new best position. Their local behavior allows for fast computations while using directly the intensities make them independent of the quality of a prior segmentation of potential points belonging to the needle. We first present the tracking using stereo cameras and then we focus on the design of the algorithm to track a needle in 3D US volumes. Tracking with camera feedback We present here the algorithm that we designed to track the 3D shape of a needle embedded in a translucent gelatin phantom using two stereo cameras. Since cameras are not clinically relevant to observe a needle embedded in real tissues, it will be used for the validation of other aspects of the insertion, such as the control of the needle trajectory. The experimental conditions are Figure 3.8: Illustration of the reconstruction of a 3D point from its position observed in two images acquired by two different cameras. The red dots represent the 2D position of the object seen in both images and the green dot is the estimation of the 3D position of the object. thus optimized such that the algorithm can provide an accurate and reliable measure of the needle position. A uniform background is provided such that the needle is clearly visible in the images. We describe in the following the different steps allowing the measure of the 3D position of the needle from the 2D images acquired by the cameras. Camera registration: The two cameras are placed orthogonally to each other to provide a 3D feedback on the position of the needle and the phantom. The intrinsic parameters of each camera are first calibrated using the ViSP library [START_REF] Marchand | Visp for visual servoing: a generic software platform with a wide class of robot control skills[END_REF] and a calibration grid made of circles. These parameters comprise the position of the optical center in the image, the ratio between the focal length and the size of a pixel as well as two parameters to correct for radial distortion of the image. Once these intrinsic parameters are known, a mapping can be determined between each object in the image and a corresponding line in 3D space on which the object is supposed to lie. The relative pose between the cameras (translation and rotation) is then calibrated using the same calibration grid viewed by both cameras [START_REF] Marchand | Pose estimation for augmented reality: A hands-on survey[END_REF]. Any object observed in both images at the same time can be mapped to two different lines in space using the intrinsic parameters. The position of the object in 3D space can then be estimated by finding the closest point to the two lines, as illustrated in Fig. 3.8. The 3D accuracy of this tracking system, calculated from the size of the pixels and the distance of the needle from the cameras, is approximately 0.25 mm around the location of the needle. 
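A minimal sketch of this triangulation step is given below, assuming that each camera provides the back-projected line of the tracked point as an origin and a unit direction vector; the numerical values are illustrative only.

```python
import numpy as np

def closest_point_to_two_lines(p1, d1, p2, d2):
    """Return the 3D point minimizing the sum of squared distances to two
    lines given by points p1, p2 and direction vectors d1, d2.
    This is the midpoint of the common perpendicular segment."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for the abscissas t1, t2 of the closest points on each line
    b = d1 @ d2
    rhs = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    A = np.array([[1.0, -b], [b, -1.0]])
    t1, t2 = np.linalg.solve(A, rhs)
    q1, q2 = p1 + t1 * d1, p2 + t2 * d2
    return 0.5 * (q1 + q2)

# Two roughly orthogonal cameras observing the same point near the origin
p = closest_point_to_two_lines(np.array([0.0, 0.0, -0.5]), np.array([0.0, 0.01, 1.0]),
                               np.array([-0.5, 0.0, 0.0]), np.array([1.0, 0.0, 0.01]))
print(p)
```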
Needle detection in 2D images: We use a gradient-based tracking algorithm to track the needle seen in each image. The principle of the algorithm is depicted in Fig. 3.9. The needle shape is approximated in the image with a third order polynomial curve defined by four equi-spaced control points. After the tracking has been initialized and a new image has been acquired, a line is drawn for each control point such that it is normal to the previous polynomial curve and passes through the control point. The two edges of the needle shaft are found as the points corresponding to the maximum and minimum values of the gradient along the normal line. The new position of the control point is taken as the center between these two edges. A Kalman filter is used to temporally smooth the position of each control point to avoid abrupt changes that may correspond to a bubble or another object with sharp edges near the control point. A new polynomial curve is then computed from the new positions of the control points. An edge detection is finally performed along a line tangent to the extremity of the curve to find the new position of the needle tip. 3D needle reconstruction: After the tracking has been performed in each image, two 2D polynomial curves are available, which correspond to the projections of the 3D needle on the images. Several points are sampled along one of the 2D curves and then matched to their corresponding points on the 2D curve in the second image using epipolar geometry, i.e. using the intrinsic parameters of the cameras and their relative pose to deduce the possible correspondences between different pixels in both images. A 3D point is then reconstructed from each pair of matching 2D points along the needle (see Fig. 3.8). Finally the 3D needle is reconstructed by fitting a 3D polynomial curve to the set of 3D points. The new 3D needle curve is then projected back onto each image to initialize the tracking in the next images. This allows a further smoothing of the motion of the 2D curves in the images and provides a way to recover if the 2D tracking in one of the images partially fails.
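To fix ideas, a much-simplified sketch of the control point update along its normal line is given below; it assumes the image is available as a 2D array, uses nearest-neighbor sampling, and omits the Kalman smoothing and tip detection steps.

```python
import numpy as np

def update_control_point(image, point, normal, half_length=15):
    """Move one control point toward the center of the needle cross section.
    Intensities are sampled along the normal to the previous curve; the two
    needle edges are taken as the extrema of the intensity gradient and the
    new control point is the midpoint between them (sketch, no smoothing)."""
    normal = normal / np.linalg.norm(normal)
    offsets = np.arange(-half_length, half_length + 1)
    samples = point[None, :] + offsets[:, None] * normal[None, :]
    # Nearest-neighbor sampling of the image along the normal line
    rows = np.clip(np.rint(samples[:, 1]).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.rint(samples[:, 0]).astype(int), 0, image.shape[1] - 1)
    profile = image[rows, cols].astype(float)
    grad = np.gradient(profile)
    edge_a, edge_b = offsets[np.argmax(grad)], offsets[np.argmin(grad)]
    return point + 0.5 * (edge_a + edge_b) * normal

# Synthetic image with a bright horizontal band (the "needle") around row 52
img = np.zeros((100, 100))
img[50:55, :] = 255.0
new_pt = update_control_point(img, np.array([40.0, 47.0]), np.array([0.0, 1.0]))
print(new_pt)   # the control point moves toward the center of the band
```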
Therefore the real position of the center of the needle is located just after the first echo and not in the center of the bright signal. Detecting the needle in the center of the bright zone would result in an erroneous estimation of the real position of the needle. For this reason the algorithm that we propose in the following is optimized to take into account such artifacts. Segmentation: Some algorithms first perform a segmentation of the US volume to isolate the voxels that are likely to belong to the needle. A second algorithm, typically a Random Sample Consensus (RANSAC) algorithm, is then used to find among those voxels the largest set that fits the best a predefined geometrical model of the needle shape. Hence the performances of this second algorithm highly depend on the quality of the segmentation step, both in terms of accuracy and processing time. However the segmentation of the needle is also usually heavily dependent on a threshold that determines if a given voxel belongs or not to the needle. A too high value of the threshold may lead to ignore some parts of the needle that are less bright than the rest, due to shadowing from other structures or a too large angle of incidence with the US beam. On the contrary, with a low value of the threshold too many voxels may be included, belonging to bright structures, background noise or needle artifacts. In practice the best tuning of the threshold may depend on the actual content of the volume, which can change over time during a same operation. In order to avoid these issues, we use directly the intensity of the voxels without prior segmentation step. Computation time: In order to perform real-time control, the tracking algorithm should be able to provide an estimation of the position of the needle than is not too outdated with the real position of the needle. The acquisition and reconstruction of a 3D US volume in Cartesian space already introduces a delay, such that any further delay should be reduced to the minimum. Time consuming algorithms, like projective algorithms, usually perform heavy computations on a large set of voxels. These approaches are usually optimized using parallelization to achieve good timing performances. However this require specialized hardware, which can increase the cost of a needle tracking system. On the opposite, local algorithms only consider a limited set of voxels in the vicinity of an initial guess of the needle position. This initial guess is then refined iteratively until the new position of the needle is found. The actual result of such methods depends on their initialization, however they can perform with great speed and accuracy when the initial guess is not too far from the real needle. By exploiting only the data in a small region, they also ignore most outliers present in the volume, like other linear structures that could be mistaken for a needle by a global detection algorithm. Therefore in the following we choose to use a local approach to perform needle tracking. Iterative tracking using needle artifacts: In order to address the different points mentioned previously, we propose to detect the position of the shaft of the needle in 3D US using a local iterative algorithm that directly uses the voxels intensities and takes into account the artifacts that are specific to the needle. The algorithm is initialized around a 3D polynomial curve that represents a prediction of the needle body position in the US volume. The curve is defined by N control points equi-spaced along the curve. 
Several polynomial curve candidates are then sampled all around the first one by displacing each of the control points by a given step in the directions normal to the needle. Five positions are thus generated for each control points, leading to a total of 5 N curve candidates. The best curve is selected among the candidates to maximize a cost function calculated from the voxels intensities. The algorithm is then repeated around the new selected curve, until no better curve can be found around the current best one. In the following we note c i the polynomial curve candidates, with i ∈ [[1, 5 N ]], and V (c i (l)) the intensity of the voxel at position c i (l), with l the curvilinear coordinate along the curve. In order to take into account the different points mentioned previously, the cost function J(c i ) associated to a curve c i is defined as follows J(c i ) = J 3 (c i ) L 0 (J 1 (l) + J 2 (l)) dl, (3.38) where L is the length of the curve c i and J 1 , J 2 , J 3 are different sub-cost functions. Figure 3.10 provides an illustration of the different sub-cost functions used in the algorithm. J 1 is used to detect the first wall of the needle in the beam propagation direction and to place the curve at a distance corresponding to the radius of the needle under this edge: J 1 (l) = - 0 -L d w(s) V (c i (l) + (s -r N ) d(l)) ds (3.39) + L d 0 w(s) V (c i (l) + (s -r N ) d(l)) ds, where L d defines the amount of voxels taken to perform the integrals, w is a weighting function used to give more importance to the voxels near the center of the integration zone, d(l) ∈ R 3 is the beam propagation direction at needle point c i (l) and r N denotes the radius of the needle expressed in voxels. We used a triangular profile for w, defined such that w(s) =      L d +s L 2 d if -L d < s < 0 L d -s L 2 d if 0 ≤ s < L d 0 otherwise . (3.40) J 2 is used to promote the curves that are laterally centered in the bright zone, i.e. bright portions that spread in a normal direction to the US beam where L n defines the amount of voxels taken to perform the integral, and n(l) ∈ R 3 is a unit vector normal to the needle curve and beam propagation direction at the needle point c i (l) defined such that J 2 (l) = n(l) = d(l) × dc i dl (l) d(l) × dc i dl (l) . (3.42) The parameters L d and L n can be tuned to set the number of voxels taken into account around the curve candidates. Low values can be used to decrease the computations but the algorithm becomes more sensitive to noise in the volume. On the contrary, high values increase the computation time but introduce a better filtering of the noise. A trade-off can be achieved INTENSITY-BASED NEEDLE TRACKING by choosing intermediate values corresponding to the expected dimensions of the cross section of the needle. Finally J 3 is used to penalize curves with high curvatures that may result from fitting adjacent background noise J 3 = + 1 L L 0 d 2 c i dl 2 (s) ds (3.43) where is a parameter used to define a curvature threshold from which the curvatures are penalized. Tip tracking: Once the curve has been laterally fitted, the location of the needle tip p t is sought in the alignment of the extremity of the best curve c best to maximize the following cost function J 4 = 0 -Lt w(s) V p t + s dc best dl (L) ds (3.44) - Lt 0 w(s) V p t + s dc best dl (L) ds, where L t defines the amount of voxels taken to perform the integral. The parameter L t can be tune similarly to L d and L n to find a trade-off between computational cost and sensitivity to noise. 
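A strongly simplified sketch of the resulting iterative search is given below; to keep it short, the score is only the summed voxel intensity along each candidate curve, which stands in for the full cost function (3.38), and the control points, normal directions and step size are placeholder values.

```python
import itertools
import numpy as np

def curve_points(ctrl, n=50):
    """Sample a polynomial curve through the control points (sketch: one
    polynomial per coordinate, parameterized by the control point index)."""
    t = np.linspace(0, len(ctrl) - 1, n)
    return np.column_stack([np.polyval(np.polyfit(np.arange(len(ctrl)),
                                                  ctrl[:, k], len(ctrl) - 1), t)
                            for k in range(3)])

def score(volume, ctrl):
    """Simplified score: summed voxel intensity along the curve (stand-in for (3.38))."""
    pts = np.clip(np.rint(curve_points(ctrl)).astype(int), 0,
                  np.array(volume.shape)[::-1] - 1)
    return volume[pts[:, 2], pts[:, 1], pts[:, 0]].sum()

def refine_curve(volume, ctrl, normals, step=1.0, max_iter=20):
    """Iteratively displace each control point by 0 or +/- step along the two
    normal directions and keep the best candidate until no improvement."""
    for _ in range(max_iter):
        moves = [np.zeros(3), step * normals[0], -step * normals[0],
                 step * normals[1], -step * normals[1]]
        best, best_s = ctrl, score(volume, ctrl)
        for combo in itertools.product(moves, repeat=len(ctrl)):  # 5^N candidates
            cand = ctrl + np.array(combo)
            s = score(volume, cand)
            if s > best_s:
                best, best_s = cand, s
        if np.allclose(best, ctrl):
            return ctrl
        ctrl = best
    return ctrl

rng = np.random.default_rng(0)
vol = rng.random((60, 60, 60))   # placeholder volume indexed as (z, y, x)
ctrl0 = np.array([[10., 30., 30.], [20., 30., 30.], [30., 30., 30.], [40., 30., 30.]])
ctrl = refine_curve(vol, ctrl0, normals=(np.array([0., 1., 0.]), np.array([0., 0., 1.])))
```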
Due to the local and iterative nature of the algorithm, its performances in terms of timing and detection accuracy depend on the quality of the initialization of the needle position. With a proper initialization, the algorithm can perform fast and fit the exact shape of the needle. This can for example be obtained by using a model-based estimation of the needle motion between two acquisitions of the US volume. The tracking and timing performances of the algorithm are evaluated in the next section. Experimental validation We propose to illustrate the performances of our needle tracking algorithm in 3D ultrasound (US) during the insertion of a needle. We compare the tracking result with an algorithm that uses the Random Sample Consensus (RANSAC) algorithm after an intensity-based binarization of the volume, in order to show the limitations that can appear with such algorithm. Experimental conditions (setup in France): We use the wobbler probe and US station from BK Ultrasound to record a sequence of US volumes during the insertion of the Angiotech biopsy needle in a gelatin phantom. The needle is inserted using the Viper s650 and the US probe is held and The acquisition parameters of the US probe are set to acquire 31 frames during a sweeping motion with an angle of 1.46 • between successive frames. Due to the strong reflectivity of the walls of the container and the low attenuation in gelatin, a reverberation of the US wave occurs between the surface of the probe and the opposite wall. The acquisition depth is set to 15 cm, which is larger than the container, in order to remove the artifacts created by this reverberation from the region where the needle is in the volume. This results in the acquisition of one volume every 900 ms and a maximal resolution of 0.3 mm × 1 mm × 2 mm at the level of the needle, which is approximately 5 cm away from the probe. The spacial resolution of the post-scan volume is set to 0.3 mm in all directions and linear interpolation is used for the reconstruction. A focus length of 5 cm is set for the transducer to obtain a good effective resolution near the needle. The needle is inserted slowly at 1 mm.s -1 such that the needle position is only slightly different between two volumes. Tracking algorithm: We compare our intensity-based tracking algorithm to the result obtained with a tracking using RANSAC algorithm. For our algorithm, we set the size of the integration regions L d , L n and L t to 10 voxels (see (3.39), (3.41) and (3.44)), corresponding to a distance of 3 mm around the needle. A manual initialization of both tracking algorithms is performed in the first volume after the needle as been inserted 1.5 cm in the phantom. The threshold for the volume binarization necessary for the RANSAC algorithm is chosen just after the initialization. The maximum intensity level along the needle is computed and the threshold is set to 80% of this value. The robustness of the RANSAC algorithm is increased by rejecting obvious outliers during the sampling process, which are identified if the length of the detected needle is lower than 90% of the length of the needle detected in the previous volume. Results: Both algorithms can track the needle without failing in a sequence of 3D US volumes. However they yield different shapes of the tracked needle at the different steps of the insertion. We detail these differences in the following. Limited needle intensity: Figure 3.12 shows two cross sections of a volume acquired near the beginning of one insertion. 
Due to the location and orientation of the needle with respect to the probe, a great part of the US beam reflected by the needle shaft does not return to the transducer, resulting in a low intensity along the needle. On the contrary, the needle tip is more visible and some strong reflections also occur near the surface. Hence, after applying a threshold to the image for the RANSAC algorithm, only the tip and the artifacts due to the insertion point remains. The needle tip can still be found thank to the rejection of short fitting curves in the RANSAC algorithm, without which the best linear fit would be the artifact in this case. However the result does still overfit the artifact, leading to a global tracking that does not correspond to the real shape of the needle. On the other hand, our algorithm can accurately fit the shape of the needle in spite of the low intensity along the needle shaft. This shows that using a threshold to binarize the volume does not allow an adaptation to the variations of intensity along the needle. On the opposite, taking into account all levels of intensities allows exploiting all the information available on the edge of the needle, leading to a better tracking. Let us now consider the cases where higher intensities are available along the needle shaft and not only at the needle tip. Artifact fitting: Figure 3.13 shows three cross sections of a volume acquired in the middle of the insertion process. This time a part of the needle is almost normal to the beam propagation direction, such that a strong echo is reflected and results in a clearly visible bright region. Reverberation and side lobes artifacts are clearly visible in this case. The algorithm based on RANSAC tends to center in the middle of the bright region, which mainly contains reverberation artifacts. The resulting tracking is thus shifted with respect to the real position of the needle shaft. On the contrary our algorithm can fit to the first echo produced by the needle. Conclusion: These experiments have shown that our intensity-based tracking algorithm allows taking into account needle artifacts, created by reverberation or beam width, to accurately detect the position of the needle body in 3D US volumes. Using directly the voxels intensities allows adapting to variations of intensities along the needle shaft. This point is important for real applications since the needle intensity may vary due to different phenomena, such as reflection outside of the transducer or shadowing from other Figure 3.12: Tracking of the needle at the beginning of the insertion. The needle tracked using the proposed algorithm is represented in green and the needle tracked using the RANSAC algorithm is represented in red. Due to the large incidence angle with the ultrasound beam, the intensity along the needle shaft is reduced. Thresholding the image for the RANSAC algorithm yield only the needle tip and the strong reflections near the surface, leading to inaccurate needle shaft detection. On the contrary, taking all voxels into account leads to a better detection of the edges of the needle. Figure 3.13: Tracking of the needle in the middle of the insertion. The needle tracked using the proposed algorithm is represented in green and the needle tracked using the RANSAC algorithm is represented in red. Reverberation artifact is visible along the needle shaft in the beam direction (approximately x axis) resulting in a comet tail that can be seen in the needle cross section view (xz view). 
Side lobes artifacts normal to this direction can also be seen on each side of the needle (along the z axis). Some parts of these artifacts are included in the binarized volume after thresholding, resulting in a biased tracking with the RANSAC algorithm. On the contrary, the tracking taking artifacts into account fits the first echo and ignores the reverberation artifact. structures. Therefore, this tracking algorithm will be used in the following for all experiments performed under 3D US feedback. Nevertheless, we tested the tracking using a slow insertion speed such that the needle motion was small between two acquisitions. The local tracking could then perform smoothly. In practice it is possible that motions with greater amplitude occur between two acquisitions, either due to a faster manipulation of the needle base or due to some movements of the tissues induced by physiological motions of the patient. The first point can be addressed by using a model that can predict the new position of the needle after a given motion has been applied to its base, like the model that we proposed in the previous chapter 2. The second point, however, requires to estimate the motions of the tissues, which will be the focus of the next section. Tissue motion estimation During needle insertion, physiological motions of the patient, like breathing, can induce a displacement of the tissues around the needle. This can modify the needle shape and the future trajectory of the needle tip. The effect of lateral tissue motions is all the more important when using flexible needles. Such needles indeed tend to follow the motions of the tissues without applying a lot of resistance. The modification of the future trajectory is also amplified when the part of the needle that is outside of the tissue is long, mainly at the beginning of the insertion. Therefore the interaction model needs to be updated online in order to account for such tissue motions and be able to provide a good estimation of the current state of the insertion. In this section we present the method that we propose and have validated to update the model. The position of the model of the tissues presented in previous chapter (section 2.4.2) is estimated using an unscented Kalman filter (UKF). We first give a general presentation of Bayesian filtering and the formulations of particle filters and UKF. In a second part we provide more details to explain how we adapted the UKF to different kinds of available measurements including needle position feedback, provided by visual or electromagnetic tracking, and force feedback. Multimodal estimation Bayesian filtering In this section we present and develop the general principles of Bayesian filtering that leads to the design of the unscented Kalman filter (UKF) and particle filter (PF). In the following sections and chapters, the UKF will be used for state estimation and applied to the case of needle-tissue interaction modeling. TISSUE MOTION ESTIMATION System modeling: Bayesian filtering is a general approach used to estimate the state of a system given some observations of this system. The first step is to provide a model of the evolution of the state of the system over time. Let us consider a system that can be fully parameterized at each instant using a state vector x ∈ R Nx containing N x state variables. The system can also be controlled using an input vector u ∈ R Nu containing N u components. 
The evolution of the system can generally be modeled with a state equation such that

x_k+1 = f_k(x_k, u_k, w_k), (3.45)

where k represents the time index, w_k ∈ R^Nw is a process noise of dimension N_w with covariance matrix Q_k ∈ R^(Nw×Nw) and f_k : R^Nx × R^Nu × R^Nw → R^Nx is a function to model the deterministic behavior of the system. Let y ∈ R^Ny be a vector of N_y measures on the system such that

y_k = h_k(x_k, u_k, ν_k), (3.46)

where ν_k ∈ R^Nν is a measurement noise of dimension N_ν with covariance matrix R_k ∈ R^(Nν×Nν) and h_k : R^Nx × R^Nu × R^Nν → R^Ny is a function representing the deterministic measurement model.

General principles: Bayesian filtering consists in estimating the probability density function (pdf) p(x_k | y_k, ..., y_0) of the current state knowing the current and past measurements. In the following we briefly develop the computations that are used to provide a recursive estimation of p(x_k | y_k, ..., y_0). It can be shown using Bayes' law that we have the following relationship:

p(x_k | y_k, ..., y_0) = p(y_k | x_k, y_k-1, ..., y_0) p(x_k | y_k-1, ..., y_0) / p(y_k | y_k-1, ..., y_0), (3.47)

where p(y_k | x_k, y_k-1, ..., y_0) is the pdf of the current measure knowing the current state of the system and the past measures, p(x_k | y_k-1, ..., y_0) is the pdf of the current state of the system knowing the past measures and p(y_k | y_k-1, ..., y_0) is the pdf of the current measure knowing the past measures. First, it can be seen that the denominator p(y_k | y_k-1, ..., y_0) does not depend on x_k and is thus equivalent to a scaling factor for p(x_k | y_k, ..., y_0). Since the integral of a pdf is always equal to 1, it is sufficient to compute and normalize the numerator, so that this scaling factor does not need to be computed and can be dropped. In addition, in order to simplify the derivation of the recursive filter, it is assumed that the system follows a first-order Markov process, i.e. the state x_k of the system at time k only depends on the previous state x_k-1 and is independent of the earlier states. Similarly, the current measure y_k is assumed to depend only on the current state x_k, such that

p(y_k | x_k, y_k-1, ..., y_0) = p(y_k | x_k). (3.48)

It can also be noted that p(x_k | y_k-1, ..., y_0) can be further developed using the chain rule:

p(x_k | y_k-1, ..., y_0) = ∫ p(x_k | x_k-1) p(x_k-1 | y_k-1, ..., y_0) dx_k-1, (3.49)

where p(x_k | x_k-1) is the pdf of the current state knowing the previous state of the system and p(x_k-1 | y_k-1, ..., y_0) is the pdf of the previous state knowing the past measures. Finally, we get the recursive formula

p(x_k | y_k, ..., y_0) ∝ p(y_k | x_k) p(x_k | y_k-1, ..., y_0) (3.50)
∝ p(y_k | x_k) ∫ p(x_k | x_k-1) p(x_k-1 | y_k-1, ..., y_0) dx_k-1. (3.51)

A graphical illustration of this equation is provided in Fig. 3.14. In practice p(x_k-1 | y_k-1, ..., y_0) is known from the previous step of the recursive method, p(x_k | x_k-1) can be estimated using the evolution model (3.45) and p(y_k | x_k) can be estimated using the measurement model (3.46). Hence most Bayesian filters proceed in two steps: a prediction step, where a prediction of the state is made based on the previous estimate, i.e. p(x_k | y_k-1, ..., y_0) is computed using (3.49), and an update step, where the new measure is integrated to correct the prediction, i.e. p(x_k | y_k, ..., y_0) is computed using (3.50).

Figure 3.15: Illustration of the pdf approximations used by Kalman filters (KF) and particle filters (PF).
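To make the two-step recursion concrete, the short sketch below evaluates the prediction (3.49) and update (3.50) numerically for a scalar state discretized on a regular grid. The grid bounds, the random-walk transition pdf, the Gaussian likelihood and the synthetic measurements used here are illustrative assumptions only and are not part of the needle-tissue model developed in this chapter.

import numpy as np

# Minimal numerical illustration of the Bayesian recursion for a scalar state
# discretized on a regular grid. All modeling choices below are illustrative.
grid = np.linspace(-5.0, 5.0, 201)
dx = grid[1] - grid[0]

def gaussian(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

transition = gaussian(grid[:, None], grid[None, :], 0.3)    # p(x_k | x_k-1), random walk

def bayes_step(prior, y_k, meas_std=0.5):
    predicted = transition @ prior * dx                     # prediction step (3.49)
    posterior = gaussian(y_k, grid, meas_std) * predicted   # update step (3.50)
    return posterior / (posterior.sum() * dx)               # normalization replaces p(y_k | y_k-1, ...)

belief = gaussian(grid, 0.0, 1.0)                           # initial pdf p(x_0)
for y in (0.4, 0.7, 1.1):                                   # synthetic measurements
    belief = bayes_step(belief, y)
print("posterior mean:", np.sum(grid * belief) * dx)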
Implementations: There exist many families of Bayesian filters that use different methods to estimate the different pdfs and perform the prediction and update steps. For an overview of Bayesian filters we invite the reader to refer to [START_REF] Van Der Merwe | The unscented particle filter[END_REF] or [START_REF] Chen | Bayesian filtering: From kalman filters to particle filters, and beyond[END_REF]. The family of the particle filters uses a finite set of samples to approximate the pdfs. This allows a good estimation of the pdfs but requires more computational resources, especially for a high-dimensional state space. The family of the Kalman filters (KFs) uses the Gaussian approximation, i.e. all the pdfs are Gaussian. This greatly reduces the computations but may lead to approximations when the real pdfs are highly non-Gaussian. Figure 3.15 shows an illustration of these different approximations. In the following we briefly focus on particle filtering before detailing more thoroughly the Kalman filters.

Particle filter

The principle of the particle filters (PFs) is to use a large set of N_p weighted samples X_i, called particles, to approximate the different pdfs. The weights w_i associated with each particle are a representation of the likelihood of the particle and are defined such that Σ_{i=1}^{N_p} w_i = 1. A pdf g(x) of a random variable x is thus approximated as

g(x) ≈ Σ_{i=1}^{N_p} w_i δ(x − X_i), (3.52)

where δ is the Dirac delta function. Using this approximation, the pdfs in (3.51) are reduced to finite sums. The main advantage of the PF is that it can be used with non-linear systems as well as non-Gaussian pdfs. However its performance depends on the number of particles used in the approximations. A high number of particles is usually required to obtain a good accuracy, especially when considering high-dimensional state spaces, which increases the required computational load. Many variants of the PF exist depending on the method used to sample and update the particles [START_REF] Chen | Bayesian filtering: From kalman filters to particle filters, and beyond[END_REF]. On the contrary, Kalman filters offer a reduced complexity and are deterministic since they do not rely on a random sampling process. Therefore we will use this kind of filter in the following.

Kalman filters

Let us develop the case of the KFs a bit further. Under the Gaussian assumption, each pdf can be entirely characterized using only its mean µ and covariance matrix P, such that it takes the form

p(x) = 1 / √((2π)^{N_x} |P|) e^{−(1/2) (x−µ)^T P^{−1} (x−µ)}, (3.53)

with |P| the determinant of P and ·^T the transpose operator. An estimate of the state at the end of each step can directly be built using the mean of the state pdf, and the covariance matrix gives the uncertainty on this estimate. In the following we note x̂_{k|k−1} and P_{x,k|k−1} the mean and covariance matrix, respectively, of the pdf of the state at the end of the prediction step, and x̂_k and P_{x,k} the same at the end of the update step. We also introduce the prediction of the measures at the end of the prediction step ŷ_k. It can be shown that the update step can be reduced to

x̂_k = x̂_{k|k−1} + K_k (y_k − ŷ_k), (3.54)
P_{x,k} = P_{x,k|k−1} − K_k P_{ỹ,k} K_k^T, (3.55)
K_k = P_{xy,k} P_{ỹ,k}^{−1}, (3.56)

where K_k ∈ R^(Nx×Ny) is called the Kalman gain, P_{xy,k} ∈ R^(Nx×Ny) is the covariance matrix between x_k and y_k, and P_{ỹ,k} ∈ R^(Ny×Ny) is the covariance of the innovation ỹ_k = y_k − ŷ_k.
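Whatever the variant of the filter, the correction (3.54)-(3.56) takes the same algebraic form. As a minimal sketch, assuming the predicted mean x̂_{k|k−1}, predicted covariance P_{x,k|k−1}, predicted measure ŷ_k and the covariances P_{xy,k} and P_{ỹ,k} have already been produced by the prediction step of the chosen filter, the correction can be written with NumPy as follows (the numerical values in the usage example are purely illustrative):

import numpy as np

def kalman_update(x_pred, P_pred, y_meas, y_pred, P_xy, P_y):
    # Generic Kalman correction, equations (3.54)-(3.56).
    K = P_xy @ np.linalg.inv(P_y)              # Kalman gain (3.56)
    x_upd = x_pred + K @ (y_meas - y_pred)     # corrected state mean (3.54)
    P_upd = P_pred - K @ P_y @ K.T             # corrected state covariance (3.55)
    return x_upd, P_upd

# Example with illustrative values: a 2-D state observed through its first component.
x_upd, P_upd = kalman_update(x_pred=np.array([1.0, 0.5]),
                             P_pred=np.eye(2),
                             y_meas=np.array([1.2]),
                             y_pred=np.array([1.0]),
                             P_xy=np.array([[1.0], [0.0]]),
                             P_y=np.array([[1.1]]))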
Different versions of KFs can be derived depending on the method used to propagate the pdfs through the evolution and observation equations. For completeness we briefly describe the most known classical KF and extended Kalman filter (EKF) before detailing the UKF. Kalman filter and extended Kalman filter: For both KF and EKF, the propagations of the pdfs are done by directly propagating the means through the system equations, (3.45) and (3.46), and the covariance matrices through a linearized version of the equations. The prediction step is thus TISSUE MOTION ESTIMATION computed according to xk|k-1 = f k (x k-1 , u k-1 , 0), (3.57) P x,k|k-1 = F k-1 P x,k-1 F T k-1 , + W k-1 Q k-1 W T k-1 (3.58) ŷk = h k (x k|k-1 , u k , 0), (3.59) where F k = ∂f k ∂x x=x k ∈ R Nx×Nx , (3.60) W k = ∂f k ∂w x=x k ∈ R Nx×Nw . (3.61) The update step is performed as stated previously in (3.54) and (3.55) with the values P xy,k = P x,k|k-1 H T k , (3.62) P ỹ,k = H k P x,k|k-1 H T k + G k R k G T k , (3.63) where H k = ∂h k ∂x x=x k|k-1 ∈ R Ny×Nx , (3.64) G k = ∂h k ∂ν x=x k|k-1 ∈ R Ny×Nν . (3.65) The difference between the KF and EKF is that the KF makes the additional assumptions that the system is linear, while the EKF can be used with non-linear systems. This way no linearization is required for the KF, which reduces the computational complexity and makes it easy to implement. In this case the system equations become x k+1 = F k x k + B k u k + W k w k , (3.66) y k = H k x k + D k u k + G k ν k , (3.67) with B k ∈ R Nx×Nu and D k ∈ R Ny×Nu . Unscented Kalman filter: The UKF proposed by Julier et al. [START_REF] Julier | A new extension of the kalman filter to nonlinear systems[END_REF] is a sample-based KF hence approaching from a PF. It uses a small number of weighted state samples, called sigma points, to approximate the Gaussian pdfs. The propagation of the pdfs through the system is done by propagating the sigma points in the system equations. The advantage of this method is that it does not linearize the equations around one point but instead propagates the sigma points through the non-linearities. This way the propagation can be achieved with a higher order of approximation than with the EKF in the case of a highly non-linear system [WVDM00] while also being less computationally demanding than PF. In the case of a numerical model for which the linearization in the EKF can not be done analytically, the UKF requires similar computations as the EKF. Therefore, due to its better performances and simplicity, we will use the UKF to update our numerical interaction model instead of the other filters presented previously. We develop its principle a bit further in the following. Augmented state: Usually, the process and observation noises, w and ν, are incorporated with the state x in an augmented state x a ∈ R Nx+Nw+Nν such that x a =   x w ν   , P a x =   P x P xw P xν P xw Q P wν P xν P wν R   (3.68) with P xw ∈ R Nx×Nw the covariance between the state and process noise, P xν ∈ R Nx×Nν the covariance between the state and the measurement noise and P wν ∈ R Nw×Nν the covariance between the process noise and the measurement noise. Under this form, the UKF allows taking into account non-linear incorporation of correlated noises. For simplicity of notation and computation, we assume in the following that the process and measurement noises w and ν are independent additive noises. 
This way P xw = 0, P xν = 0, P wν = 0 and the system equations take the form x k+1 = f k (x k , u k ) + w k , (3.69) y k = h k (x k , u k ) + ν k , (3.70) This simplification allows us to follow the UKF steps using only the state x instead of the augmented state x a . Prediction and update steps: At each iteration, a set of 2N x + 1 sigma points X i , i ∈ [[0, 2N x ]] , is sampled from the current state pdf according to the weighted unscented transform, so that    X 0 = xk-1 , X i = xk-1 + ( √ N x αP x ) i , i = 1, . . . , N x , X i = xk-1 -( √ N x αP x ) i-Nx , i = N x + 1, . . . , 2N x , (3.71) where α is a positive scaling factor than is used to control the spread of the sigma points and () i denotes the i th column of a matrix. Using a large value for α leads to wide spread sigma points and a small value leads to sigma points close to each other. Tuning this parameter may be difficult as it should depend on the shape of the non-linearity that is encountered. Close sigma points may be equivalent to a linearization of the non-linearity while spread sigma points may be too far from the non-linearity of interest, which may lead to a reduced quality of the filtering in both cases. The prediction step is performed by propagating each sigma point through the evolution equation (3.69): X i ← f k-1 (X i , u k-1 ) i = 0, . . . , 2N x . (3.72) The mean xk|k-1 and covariance matrix P x,k|k-1 of the Gaussian pdf associated to the new propagated set can then be computed using weighted sums along the new propagated sigma points: xk|k-1 = 2Nx i=0 W (m) i X i , (3.73) P x,k|k-1 = Q + 2Nx i=0 W (c) i (X i -xk|k-1 )(X i -xk|k-1 ) T , (3.74) with W (m) 0 = (α 2 -1) α 2 , (3.75) W (c) 0 = (α 2 -1) α 2 + 3 -α 2 , (3.76) W (m) i = W (c) i = 1 4α 2 , i = 1, . . . , 2N x . (3.77) For the update step, a corresponding estimate of the measures Y i is then associated to each sigma point using the measure equation (3.70): Y i = h k (X i , u k ) , i = 1, . . . , 2N x . (3.78) The standard update step ((3.54)-(3.56)) is finally performed to obtain the final estimate of the new state mean and covariance. The different terms CHAPTER 3. NEEDLE LOCALIZATION USING ULTRASOUND are estimated as weighted sums along the sigma points: ŷk = 2Nx i=0 W (m) i Y i , (3.79) P xy,k = 2Nx i=0 W (c) i (X i -xk|k+1 )(Y i -ŷk ) T , (3.80) P ỹ, k = R + 2Nx i=0 W (c) i (Y i -ŷk )(Y i -ŷk ) T . (3.81) An illustration of the different steps of the UKF is provided in Fig. 3.16. Discussion: As a side remark, it can be noted that all the operations performed in the KFs assumes that the variables lie in a vector space. In the case where one of the variables lies in a manifold that does not reduce to a vector space, the vector operations, such as addition or multiplication by a matrix, lose their signification. The Gaussian pdfs are also harder to define on manifolds. This can typically be the case when considering orientations in the state space or measurements space. In that case we use the manifold version of the KFs as described for the UKF by Hauberg et al. [START_REF] Hauberg | Unscented kalman filtering on riemannian manifolds[END_REF]. This method basically consists in mapping the variables (sigma points or their associated measure estimates) to a tangent space at some point of the manifold using the logarithm map. This way all the linear operations developed previously can be used on this tangent space, which is a vector space. Note also that the covariance matrices only make sense on the tangent space. 
Once the calculations have been performed, the resulting estimates of the state or measures can be mapped again on the manifold using the exponential map. At each prediction step the tangent space of the state manifold is taken at the current state estimate xk , corresponding to the center sigma point X 0 . The remaining sigma points are sampled in this tangent space according to (3.71). Similarly, at each update step the tangent space of the measure manifold is taken at the measure estimate of the center sigma point Y 0 = h k (X 0 , u k ). The measures associated to each sigma point are then all mapped to this tangent space. The covariance matrices can then be computed using (3.80) and (3.81) by replacing in the equations the sigma points and measure estimates by their corresponding mapping on the tangent spaces. An illustration of the logarithm and exponential map as well as the different steps of the UKF on manifold spaces can be found in Fig. 3.17. Now that the general formulation of the UKF has been presented, next section develops how we make use of it to update the state of our needletissue interaction model. Figure 3.17: Illustration of the unscented Kalman filter on manifolds. Tissue motion estimation using unscented Kalman filter In this section we present how we use the unscented Kalman filter (UKF) to estimate the tissue motions and update our needle-tissue interaction model presented in section 2.4.2. We will consider two kinds of measurements: measurements on the geometry of the needle, such as position or direction of some point of the needle shaft, and measurements of the force and torque exerted at the base of the needle. The method is described in such a way that it is independent of the method actually used in practice to provide the measurements. Position and direction feedback can for example be provided by an electromagnetic (EM) tracker placed somewhere inside the needle or through shape reconstruction using fiber Bragg grating [PED + 10]. It can also be provided by a needle detection algorithm that runs on some visual feedback; the visual feedback itself can be of various nature, as for example a sequence of 2D or 3D images provided by cameras [START_REF] Bernardes | Robot-assisted automatic insertion of steerable needles with closed-loop imaging feedback and intraoperative trajectory replanning[END_REF], ultrasound [START_REF] Kaya | Visual tracking of biopsy needles in 2d ultrasound images[END_REF], computerized tomography [HGG + 13] or magnetic resonance imaging [PvKL + 15]. We do not consider the case where the position of the tissues is directly provided, for example by using an EM tracker or a visual marker placed on the tissue surface. Although the method could also be used with such measures, it poses additional issues that will be observed and discussed later in section 3.6.2. Evolution equation Let us define the state of the UKF as the position x ∈ R 3 of the tissues in the two-body model. We take this state as the position of the extremity of the tissue spline near the tissue surface (entry point) and expressed in the world frame {F w }, as illustrated in Fig. 3.18. We consider here that the tissue spline can not deform, such that the state x is then sufficient to characterize the whole motion of the tissue spline. In the case where prior information is known on the tissue motions, this can be included in the evolution model by choosing an adequate function f k in (3.45). 
For example a periodic model of breathing motion [HMB + 10] can be used when needle insertion is performed near the lungs and the patient is placed under artificial breathing, leading to x k = a + b cos 2n πt k T + φ , (3.82) where T is the period of the motion, a ∈ R 3 is the initial position, b ∈ R 3 is the amplitude of the motion, φ ∈ R is the phase of the motion, n ∈ N is a coefficient used to tune the shape of the periodic motion and t k is the time. Using a model for tissue motions has the advantage that the process noise in the filter can be tuned with lower values of uncertainties in the covariance matrix, leading to an overall better smoothing of the measures. It can also be used to provide a prediction of the future position of the tissues. However, if the model does not fit the real motion, it may lead to poor filtering performances. In most situations, the exact motion of the tissues is not known and additional parameters to estimate need to be added to the state, such as the motion amplitude b, the period T or the phase φ. This, however, adds a layer of complexity to the model and can induce some 3.5. TISSUE MOTION ESTIMATION observability issues if the number of measurements is not increased as well. In clinical practice, patients are rarely placed under artificial breathing or even general anesthesia for needle insertion procedures such a biopsies. Breathing motion can then be hard to model perfectly since it may have amplitude or frequency varying over time. It may also happen that the patients suddenly hold their breath or simply move in a way that is not expected by the model. In this case the prediction can be far from the reality and may cause the state estimation to diverge. In order to take into consideration the previous remarks and be able to account for any kind of possible motions, we choose a simple random walk model. This offers great flexibility but at the expense of reduced prediction capabilities on the tissue motions. The corresponding evolution equation can be written as x k+1 = x k + w k . (3.83) The advantage of this form is that the equation is linear and the noise is additive. This way it is not required to use the unscented transform to perform the prediction step, which reduces to xk|k-1 = xk-1 , (3.84) P x,k|k-1 = P x,k-1 + Q k-1 . (3.85) The sigma points can then be sampled using xk|k-1 and P x,k|k-1 for the update step that we describe in the following. Measure equation One advantage of the UKF is that we can use our interaction model to provide a numerical way to compute the measure function h k without analytic formulation. We consider the case where the needle is constantly held by a needle holder that provides a pose feedback of its end effector thanks to mechanical odometry. The pose of the needle base in the model is thus regularly updated using this feedback during the insertion. This way, even without tissue motions, it is possible that the shape of the needle changes. Therefore the function h k relating the estimated state to the measurements can greatly vary between two successive update steps and provides some prediction of the measures. Needle position: Let us first consider the case where the measurements consist in a set of points belonging to the needle. Let us define a set of M points p j , j ∈ [[1, M ]], located at some given curvilinear coordinates l j on the needle. In that case the measure vector can be written as y =    p 1 . . . p M    . 
(3.86) From the model of the needle, the estimates pj of the measured needle points can be computed according to pj = c N (l j ), (3.87) where we recall that c N is the spline curve representing the needle in the model. Note that it is possible to change the dimension of the measure vector y and the curvilinear coordinates l j depending on the measures that are available. For example if a needle tracking algorithm is used, points can be added when and where the needle is clearly visible in the image, while fewer points may be available when and where the needle is almost not visible. The dimensions of the measurement noise vector ν k and its covariance matrix R k will also vary accordingly. Needle direction: In some cases the direction of the body of the needle at some given curvilinear coordinates l d can also be measured. This is typically the case when using a 5 degrees of freedom EM tracker embedded in the tip of the needle. In that case the measure vector can be written as y = d =   d x d y d z   , (3.88) where d ∈ S 2 is a unit vector tangent to the body of the needle at the curvilinear coordinates l d and S 2 denote the unity sphere in R 3 . From the model of the needle, the estimates of the needle body direction at curvilinear coordinate l d can be computed according to d = dc N (l) dl l=l d . (3.89) In that case, since S 2 is not a vector space, we need to use the version of the UKF on manifold that was discussed in section 3.5.1.3. The tangent space of S 2 is taken at the measure estimate Y 0 associated to the center sigma point. In this particular case the logarithm map of a measure point Y i is the angle-axis rotation vector θu representing the rotation between Y 0 and this measure point. This can be computed using Log Y 0 (Y i ) = θu, (3.90) with u = Y 0 × Y i Y 0 × Y i , (3.91) θ = atan2( Y 0 × Y i , Y 0 .Y i ), (3.92) where × denotes the cross product between two vectors, u is the axis of the rotation and θ is the angle between the two vectors Y 0 and Y i . The exponential map of an angle-axis rotation vector θu in the tangent space is obtained by rotating Y 0 according to this rotation vector, such that Exp Y 0 (θu) = cos(θ)Y 0 + sin(θ)u × Y 0 . (3.93) TISSUE MOTION ESTIMATION Efforts at the needle base: Let us now consider the measures of the force and the torque exerted at the needle base. Since our needle model does not take into account any axial compression or torsion, it can not be used to provide estimates of the axial force and torque exerted on the base. So we only consider the measures of the lateral forces and torques, which are sufficient to estimate the lateral motions of the tissues. The corresponding measure vector can be written as y = f b t b . (3.94) where f b ∈ R 2 is the lateral force exerted at the base of the needle and t b ∈ R 2 is the lateral torque exerted at the base of the needle. From the model of the needle, the estimates of the force f b and torque tb can be computed according to the Bernoulli equations f b = EI d 3 c N (l) dl 3 l=0 , (3.95) tb = EI d 2 c N (l) dl 2 l=0 × z, ( 3.96) where we recall that E is the Young's modulus of the needle, I is the second moment of area of a section of the needle, c N is the spline curve representing the needle in the model and z is the axis of the needle base. 
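To illustrate how these measure estimates can be evaluated numerically from the needle model, the sketch below considers a single cubic polynomial segment c_N(l) = a_0 + a_1 l + a_2 l^2 + a_3 l^3 with vector coefficients. The coefficient values, Young's modulus and second moment of area are placeholders chosen for the example, and the base force and torque are kept as 3D vectors instead of their lateral components for simplicity:

import numpy as np

# Sketch of the measure estimates (3.87), (3.89) and (3.95)-(3.96) for a needle
# segment modeled as a cubic polynomial c_N(l) = a0 + a1*l + a2*l^2 + a3*l^3.
# All numerical values below are illustrative placeholders, not model parameters.
a0, a1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
a2, a3 = np.array([0.5, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0])
E, I = 200e9, 2.0e-14            # Young's modulus [Pa], second moment of area [m^4]
z = a1 / np.linalg.norm(a1)      # axis of the needle base

def c_N(l):                      # point on the needle at curvilinear coordinate l (3.87)
    return a0 + a1 * l + a2 * l**2 + a3 * l**3

def direction(l):                # unit tangent to the needle body (3.89)
    d = a1 + 2.0 * a2 * l + 3.0 * a3 * l**2
    return d / np.linalg.norm(d)

def base_force():                # force at the needle base, E I c_N'''(0)      (3.95)
    return E * I * 6.0 * a3

def base_torque():               # torque at the needle base, E I c_N''(0) x z  (3.96)
    return E * I * np.cross(2.0 * a2, z)

points = [c_N(l) for l in (0.005, 0.010, 0.015)]   # stacked into the measure vector y (3.86)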
Update step: A complete measure vector first needs to be chosen as a combination of the different measurements defined previously, as for example a vector stacking the force measures provided by a force sensor and the position and direction measures provided by an electromagnetic tracker. Let us now describe how is performed the update step at each new acquisition of the measures. The state of the whole needle-tissue model is first saved at the moment of the acquisition. The sigma points X i are then sampled using (3.71) around the estimate of tissue position obtained at the prediction step. A new needle-tissue model is then generated for each sigma point and the position of each spline c T representing the position of the tissues is modified according to the sigma point X i . The new needle shape of each model is then computed and the estimates of the measures Y i can be generated from the model as defined previously in (3.87), (3.89), (3.95) or (3.96). Since the actual spread of the sigma points depends on the covariance P x,k|k-1 , it can happen that a high uncertainty leads to unfeasible states. For example if the distance between the current state estimate and one of the sigma points is greater than the length of the needle, it is highly probable that the model of the needle corresponding to this sigma point can not interact with the model of the tissues anymore. Such sigma point should thus be rejected to avoid failure of the computation of the model or at least avoid irrelevant estimates of the measures. Therefore the value of α is tuned at each update step to avoid such numerical issues (see (3.71)). A small value α = 10 -3 is chosen as the default, as is typically done in a lot of works using the UKF [START_REF] Wan | The unscented kalman filter for nonlinear estimation[END_REF]. We then adaptively reduce the value of α when needed such that the sigma points do not spread further than 1 mm from the current estimated position of the tissues. The new state estimate and state covariance can finally be updated according to the update step equations defined previously ((3.54)-(3.56) and (3.79)-(3.81)). Finally, the position of the whole tissue spline in the model is updated according to the value of xk computed by (3.54). Now that we have described a method to estimate the position of the tissues from measures provided on the needle, we propose in the following to use this method and assess its performances in different experiments. Tissue update validation In this section we present different experimental scenarios to evaluate the performances of our tissue motion estimation algorithm using the unscented Kalman filter. We first present the results obtained using the effort feedback provided by a force sensor and the position feedback on the needle tip provided by an electromagnetic tracker. Then we consider the case of position feedback on the needle shaft provided by cameras. Finally, we estimate the position of the tissues using the position feedback provided by a 3D ultrasound probe and use this estimation to improve the robustness of the needle tracking algorithm. Update from force and position feedback We consider in this section the update of the model using the force and torque feedback on the needle base as well as the position and direction feedback on the needle tip. Experimental conditions (setup in the Netherlands): The setup used in these experiments is depicted in Fig. 3.19. We use the needle insertion device (NID) attached to the UR3 robot. 
The Aurora biopsy needle with the embedded electromagnetic (EM) tracker is placed inside the NID and inserted in a gelatin phantom. The UR5 robot is used to apply a known motion to the phantom. The ATI force torque sensor is used to measure the interaction efforts exerted at the base of the needle and the Aurora EM tracker is used to measure the position and direction of the tip of the needle. We use the two-body model presented in section 2.4.1 with polynomial needle segments of order r = 3 to represent the part of the needle that is Registration: Registration of the position of the EM tracking system in the frame of the UR3 robot is performed before the insertions. The needle is moved at different positions and two sets of positions are recorded, one given by the UR3 odometry and one given by the EM tracker. Point cloud matching between the two sets is then used to find the pose of the EM tracking system in the frame of the UR3. The force torque sensor is used to measure the interaction efforts between the needle and the tissues. Since the sensor is mounted between the UR3 robot arm and the NID, it also measures the effect of the gravity due to the mass of the NID. Therefore the effect of gravity must be removed from the measures in addition to the sensor natural biases to obtain the desired measures. Note that we only apply small velocities and accelerations to the NID during our experiments and for this reason we choose to ignore the effects of inertia on the force and torque measurements. The details of the force sensor registration can be found in Appendix A. Experimental scenario: The force and EM data were acquired during the experiments on motion compensation that will be presented later in the thesis. In this section we only take into account the different measurements that were acquired during those experiments and we do not focus on the actual control of the needle that was performed. During those experiments, a known motion was applied to the phantom with the UR5 while the needle was inserted at constant speed with the NID. The UR3 was controlled to apply a lateral motion to the whole NID to avoid tearing the gelatin or breaking the needle. Update method: The length of the needle model is updated during the insertion to correspond to the real length of the part of the needle that is outside the NID, measured from the full length of the needle and the current translation of the NID. The pose of the simulated needle base is updated using the pose of the UR3 and the rotation of the needle inside the NID. The position of the modeled tissues is estimated using our update algorithm based on the unscented Kalman filter (UKF) presented in section 3.5.2. In order to determine the contribution of each component, in the following we consider three update cases: one using only the force and torque feedback at the needle base, one using only the position and orientation feedback of the needle tip and the last one using all the measures. In each case the different measures are stacked in one common measure vector that is then used in the UKF. The estimations for each kind of measures are computed as described in previous section 3.5.2.2, i.e. using (3.94) to (3.96) for the force and torque feedback, (3.86) and (3.87) for the position feedback and (3.88) to (3.93) for the orientation feedback. For each method, we consider that the measurements are independent, such that the measurement noise covariance matrix R in the UKF (used in (3.81)) is set as a diagonal matrix. 
The value of each diagonal element is set depending on the type of measure: (0.7) 2 mm 2 for the tip position, (2) 2 ( • ) 2 for the tip orientation, (0.2) 2 N 2 for the force and (25) 2 (mN.m) 2 for the torque. These values are chosen empirically, based on the sensors accuracy and the way they are implemented in the setup. The process noise covariance matrix Q (used in (3.74)) is also set as a diagonal matrix with diagonal elements set to (0.2) 2 mm 2 . Results on the filtering of the measures: We first compare the difference between the measured quantities and their values estimated using the model updated by the UKF. An example of tip positions as well as the position and orientation estimation errors obtained during one of the experiments is shown in Fig. 3.20. The position and orientation measures obtained from the EM tracking system are considered as the ground-truth for these experiments. We can see that the tip position and orientation are better estimated when using only the tip measurements, while using only the force and torque feedback tends to introduce a drift in the estimation that increases with the depth of the needle tip in the gelatin. This could be expected because of the flexible nature of the needle. Near the tissue surface the pose of the needle base has a great influence on the needle shape. On the other hand, the shape of a flexible needle is progressively determined by its interaction with the tissues as it is deeper inserted. The interaction force near the tip of the needle tends to be damped by the tissues and have little influence on the force measured at the needle base. Then, the more the needle is inserted, the less information about the needle tip is provided by the force and torque measured at the needle base. Force and torque measures are respectively shown in Fig. 3.21 and 3.22. The measures obtained from the force sensor are considered as the groundtruth for these experiments. We can observe that even when using only the force and torque feedback, the estimated measures of the torque does not seem to fit the real measures as well as expected. This can be explained by the low value of the torque measures compared to the value of the variance that was set in the UKF, such that the torque is almost not taken into account for the estimation in this case. The low value of the measures can be explained by the experimental conditions. Indeed, the needle can slide in and out the NID to modify its effective length, meaning that the effective base of the needle is not fixed to the NID. This introduces a play between the needle and the NID that causes a dead-zone in which torques are not transmitted correctly. On the other side, we can observe that the force is correctly estimated when using only the force and torque feedback, while some errors can appear when using only the tip position and orientation feedback. This can be explained as previously by the fact that the position of the tip provides little information on the force at the base once the needle is inserted in the tissues. Overall it can be observed that using all the measures to perform the update provides a trade-off between the fitting of the different measures by the model. Whichever the kind of measure chosen to update the model, we can see in Fig. 3.20 that the position of the needle tip can be well estimated with an error under 2.5 mm while a 1.5 cm lateral motion is applied to the tissues. 
This can be sufficient in clinical practice to reach standard size tumors in the body while the patient is freely breathing. Results on the tissue motion estimation: Finally, let us compare the estimation of the position of the tissues to the measure provided by the odometry of the robot moving the phantom. The estimated and measured positions are shown in Fig. 3.23, as well as the absolute estimation error. It can be seen that the overall shape of the tissue motion is well estimated. However, some lag and drift in the estimation can be observed for all combinations of the measures. In the case of the force measurements, the lag can be due to the play between the needle and the NID. Indeed, the tissues have to move from a certain amount and displace the needle before any force can actually be transmitted to the NID and be measured. This issue could be solved, along with the problem of torque measurement mentioned previously, by using a needle manipulator that provides a better fixing to the needle. In the case of the tip position measurements, the drift can be due to modeling errors on the shape of the spline curve simulating the path cut in the tissues. Indeed, the extremity of this spline is progressively updated according to the position of the simulated needle tip during the insertion. However, modeling errors can lead to an incorrect shape of the spline, such that the estimation of the rigid translation of the tissues can not be done properly. A first solution could be to allow some deformations of the spline once it has been created, however this would introduce many additional parameters that need to be estimated. This can create observability issues and may require additional sensors, which is not desirable in practice. Another solution would be to directly use the position feedback provided on the needle tip to update the extremity of the spline. This solution will be explored in the following when using visual feedback on the entire needle shaft. Conclusions: We have seen that the update of the position of the tissues in our model could be done using a method based on the UKF with measures provided by force and torque feedback at the needle base and/or EM position feedback on the tip. Both modalities could provide good results by themselves such that it may not be required to use both at the same time. However they provide different kinds of information that may be used for different purposes, such as accurate targeting for the EM tracker and reduction of the forces applied on the tissues for the force sensor. An additional advantage of using the force sensor is that it does not require a specific modification of the needle, contrary to the EM tracker that must be integrated into the needle before the insertion and removed before injecting something through the lumen of the needle. Nevertheless, neither of them can provide a feedback on the position of a target in the tissues, such that an additional modality is required for needle insertion procedures. On the contrary, medical imaging modalities can provide a feedback on a target as well as the position of the needle. Therefore in the next section we focus on the estimation of the tissue motions in our model by using the position feedback provided by an imaging modality. Update from position feedback In this section, we propose to test our tissue motion estimation method to update our interaction model using a 3D position feedback on the needle shaft. 
We focus here on the visual feedback provided by cameras to validate the algorithm. However it could be adapted to any other imaging modalities Figure 3.24: Experimental setup used to validate the performances of the tissue motion estimation algorithm when using the visual feedback provided by two cameras to detect the position of the needle body. that can provide a measure of the needle localization, as will be done with 3D ultrasound (US) in the next section. In the following we present the experiments that we performed to assess the quality of the model update obtained using the measures of the positions of several points along the needle. The performances are compared in terms of accuracy of the tip trajectory and estimated motions of the tissues. Experimental conditions (setup in France): The setup used for these experiments is depicted in Fig. 3.24. The Angiotech biopsy needle is attached to the end effector of the Viper s650 and inserted in a gelatin phantom embedded in a transparent plastic container. The needle is inserted in the phantom without steering, i.e. the trajectory of the base of the needle simply describes a straight vertical line. Lateral motions are applied manually to the phantom during the insertion. Visual feedback is obtained using the stereo cameras system. The whole needle shaft is tracked in real-time by the image processing algorithm described previously in section 3.4.1. The position of the phantom is measured from the tracking of two fiducial markers with four dots glued on each side of the container [START_REF] Horaud | An analytic solution for the perspective 4-point problem[END_REF] (see Fig. 3.24 and Fig. 3.27). We use the two-body model presented in section 2.4.1 with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments and the last segment measuring 0.6 mm. The stiffness per unit length of the model is set to 3200 N.m -2 and the length threshold to add a new segment to the tissue spline is set to L thres = 0.1 mm. Model update: We propose to compare five different methods to represent the needle and to update the spline curve representing the path cut in the tissues in our model, as described in the following: • Method 1: the needle is modeled as a straight rigid needle. • Method 2: the needle is modeled using the two-body flexible needle model. The extremity of the tissue spline is updated using the cutting edge of the modeled bevel, as was described in the definition of the model in section 2.4.2. • Method 3: similar to method 2, except that the extremity of the tissue spline is updated using the visual feedback instead of the model of the bevel. The segment is added to link the last added segment to the current position of the real needle tip measured from the camera visual feedback. However the position of the whole tissue spline is not modified during the insertion. • Method 4: similar to method 2 with the addition of the proposed update algorithm based on unscented Kalman filter (UKF) to estimate the position of the tissue spline from the measured position of the needle. • Method 5: similar to method 3 with the addition of the proposed update algorithm based on UKF to estimate the position of the tissue spline from the measured position of the needle. For each method, the position of the simulated needle base is updated during the insertion using the odometry of the robot. 
For methods 4 and 5, we use the positions of several points along the needle as input for the update algorithm (as described by (3.86) and (3.87) in section 3.5.2.2). The points are extracted 5 mm from each other along the 3D polynomial curve obtained from the needle tracking using the cameras. The measurement noise covariance matrix R in the UKF is set as a diagonal matrix with diagonal elements equal to (0.25) 2 mm 2 , corresponding to the accuracy of the stereo system used to get the needle points. The process noise covariance matrix Q is set as a diagonal matrix with diagonal elements equal to (0.1) 2 mm 2 . Experimental scenario: Five insertions at different locations in the phantom are performed. The needle is first inserted 1 cm in the phantom to be able to initialize the needle tracking algorithm described in section 3.4.1. The insertion is then started, such that the needle base is only translated along the needle axis. The phantom is moved manually along different trajectories for each insertion, such that the motions have an amplitude up to 1 cm in the x and y directions of the world frame {F w }as depicted in Fig. 3.24. Results: We present now the results obtained during the experiments. We first consider the accuracy of the simulated tip trajectories and evaluate the effect of the update rate on this accuracy. The quality of the estimation of the motions of the tissues is then assessed and we discuss some limitations of the modeling. Comparison of tip trajectories: We first compare the tip trajectories obtained with the different model update methods. The average absolute position error between the measured and simulated needle tips calculated over time and across the different experiments is summarized in Fig. 3.25 and Table 3.1. An example of measured and simulated tip positions obtained during one experiment is shown in Fig. 3.26. Figure 3.27 shows the corresponding pictures of the needle acquired with the cameras at different steps of the insertion. The tissue spline corresponding to each model is overlaid on the images at each step. It is clearly visible from the simulated tip trajectories that updating the model while taking into account the motions of the tissues is crucial to ensure that the model remains a good representation of the real needle. It can also be observed from the mean absolute error over all the experiments in Fig. 3.25, that the more the model is updated, the better it fits the reality. However we can see that method 3 yields poor results since only the extremity of the tissue spline is updated by adding new segments that fit the measured positions of the tip. Since the lateral position of the spline is not updated to account for tissue motions, the resulting shape of the spline does not correspond to the reality, as can be seen in Fig. 3.27 (blue curve). On the contrary, modifying the whole position of the spline in addition to the update of its extremity allows taking into account the lateral motions of the tissues, as is done with methods 4 and 5. These results illustrate that a feedback on the needle is a necessity during insertion procedures. Indeed, a pre-operative planning would not be sufficient to predict the real trajectory of the flexible needle, as is illustrated by the trajectories of the non-updated models (method 1 and 2). Therefore, the association of the needle model and update algorithm that we propose proves to be a good method to accurately represent the current state of the insertion process. 
However, 3D medical imaging modalities typically have an acquisition time that is longer than the framerate of the cameras used in these experiments, such that the update can only be performed at a lower rate. Hence, we propose to compare the results obtained using two different update rates for the update methods that use the visual feedback (methods 3, 4 and 5). Effect of update rate: In order to simulate a slower imaging modality, like the 3D US that we will use in the following, we set the update rate to 1 Hz, meaning that the update of the tissue spline is performed only once every second. However, the update of the position of the needle base from the robot odometry is still performed at the fast rate available with the robot. The resulting error between the measured and simulated needle tip trajectories during the example experiment can be seen in Fig. 3.28. The average tip position errors calculated over time and across the different experiments are also summarized in Fig. 3.25 and Table 3.1. As expected, a higher update rate (30 Hz) provides better results than a lower update rate (1Hz), since more measures can be taken into account to estimate the position of the tissues. However, regularly updating the model even at a low rate still allows a good reduction of the modeling errors that occurred between two acquisitions, such that we can expect good results from the algorithm with 3D US. Estimation of tissue motions: Now that we have illustrated the importance of updating the interaction model to ensure a good modeling of the needle during the insertion procedure, we propose to evaluate the performances of the update algorithm to see if it can actually be used to estimate the real motions of the tissues. The position of the phantom is obtained by the tracking of the fiducial markers placed on the container, as can be seen in Fig. 3.27. The measured positions of the tissues during the previous insertion example are presented in Fig. 3.29 along with the estimation provided by the method 4. The results using the slower update rate are also shown. Overall the update method allows the tracking of the motions of the tissues and similar results are observed for both high and low update rates. This can also be observed in Fig. 3.27, on which it is visible that the updated tissue splines from method 4 and 5 follows the motion of the tissues around the needle. We further discuss the quality of the estimation in the following. Limitations of the model: Some tracking errors can still be observed on the position of the tissues when updating the model. They can be due to the accumulation of errors in the shape of the tissue spline, as was also discussed in previous section 3.6.1 when using force and position feedback. The same solution that was proposed could also be used here, consisting in updating the whole shape of the tissue spline instead of only its global translation. However, it is very likely to see observability issues appearing, due to the fact that different shapes of the tissue spline can lead to similar needle shapes. Additional phenomena can explain the tracking errors, such as the nonlinear properties of the tissues on which we briefly focus in the following. During some other of our experiments, some large lateral motions were applied to the base of the needle, such that the needle was cutting laterally in the gelatin and a tearing appeared at the surface. In this case the needle is moving inside the tissues without external motion of the tissues. 
The results of the tissue motion estimation using the update method 4 in this case are shown in Fig. 3.30. The tearing of the gelatin occurred at the beginning of the insertion, from t = 2.5s to t = 4.5s. We can see that the model is automatically updated according to the measures of the needle position, so that a drift appears in the estimated position of the tissues. Once the needle has stopped cutting laterally in the gelatin (at t = 4.5), the needle is embedded anew in the tissues. This is equivalent to changing the rest position of the cut path associated to the real needle and this is what is actually represented by the tissue spline of the updated model. Hence the following motions of the tissues are well estimated, although the drift remains. Figure 3.30: Example of tissue motions estimated using the update method 4 with the position feedback obtained from cameras. At the beginning of the insertion (blue zone from t = 2.5s to t = 4.5s), the needle base is moved laterally such that a tearing occurs at the surface of the gelatin. This creates an offset in the estimation of the motions of the tissues. Even if the tearing of the tissues is less likely to appears in real biological tissues, this example shows that our model and update method can lead to a wrong estimation of the real position of the tissues due to unmodeled phenomena. However, it can also be noted that if the simulated position of the tissues was updated according to an external position feedback on the real tissues, for example by tracking a marker on the surface of the tissues, the resulting state of the model would poorly fit the position of the real needle. On the contrary, our update algorithm using the position of the needle allows the model to fit the measures provided on the needle and to remain consistent with the way it locally represents the tissues. This can be seen as an advantage of the method since the goal of our model is to give a good estimation of the local behavior of the needle without modeling all the surrounding tissues. Conclusions: From the results of these experiments we can conclude that the method that we proposed to update the state of our model based on the UKF allows taking into account the effect of tissue motions on the shape of the needle. TISSUE UPDATE VALIDATION We have also seen that the non-linear phenomena occurring in the tissues, such as a lateral cutting by the needle, can have a great impact on the quality of the estimation of the real position of the tissues. In practice, real tissues are less prone to tearing than the gelatin used in the experiments and the needle will also be steered to avoid such tearing, however the hyper-elastic properties of real biological tissues may induce a similar drift in the estimation. Therefore, the update algorithm will not be used in the followings as a way to measure the exact position of the tissues, but only as a way to keep the model in a good state to represent the local behavior of the needle. We could also see that the method provides a good update even when considering the low acquisition rate that is available with a slower, but still real-time, imaging modality, such as 3D US. Hence, in the next section we use the update method as a way to increase the modeling accuracy of the needle insertion, such that it can be used as a prediction tool to improve the tracking of a needle in 3D US volumes. 
Needle tracking in 3D US with moving soft tissues In this section we propose to combine the model update method based on unscented Kalman filter (UKF) that was designed in section 3.5.2 with the needle tracking algorithm in 3D ultrasound (US) proposed in section 3.4.2. This combination is used to provide a robust tracking of a needle in a sequence of 3D US volumes during an insertion in moving tissues. In the previous section we used the visual feedback provided by cameras to track the needle and update the needle model to take into account the lateral motions of the tissues. However, the position of the tracking system was registered beforehand in the frame of the robotic needle manipulator by observing the needle in the acquired images, as described in section 3.4.1. In the case of a 3D US probe, a similar registration of the pose of the probe would require many insertions of the needle in the tissues to be able to observe its position in the US volume. This is not possible in a clinical context, in which multiple insertions should be avoided and where the registration process should be simple and not time consuming. Therefore, we propose to use a fast registration method performed directly at the beginning of the insertion procedure. In the following we present the results of the experiments that we performed to assess the performances of the tracking method combining our contributions. Experimental conditions (setup in France): The Angiotech biopsy needle is used and attached to the end effector of the Viper s850. The insertion is done vertically in a gelatin phantom embedded in a transparent plastic container. The container is fixed to the end effector of the Viper s650, which is used to apply a known motion to the phantom. We use the 3D US probe and US station from BK Ultrasound to grab online 3D US volumes. The US probe is fixed to the same table on which the phantom is placed, such that it is perpendicular to the needle insertion direction and remains in contact with the phantom. The acquisition parameters of the US probe are set to acquire 31 frames during a sweeping motion with an angle of 1.46 • between successive frames. The acquisition depth is set to 10 cm, resulting in the acquisition of one volume every 630 ms and a maximal resolution of 0.3 mm × 1 mm × 2 mm at the level of the needle, which is approximately 5 cm away from the probe. The spacial resolution of the post-scan volume is set to 0.3 mm in all directions and linear interpolation is used for the reconstruction. Tracking method: We use the tracking algorithm proposed in section 3.4.2 that exploits US artifacts to track the needle in the acquired sequence of US volumes. For each new volume acquisition, the tracking is initialized using three different methods described in the following: • Method 1: the tracking is initialized from the position of the needle tracked in the previous volume. No model of the needle is used in this case. • Method 2: the tracking is initialized using the projection of the needle model in the 3D US volume. We use the two-body model presented in section 2.4.2 with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments and the last segment measuring 0.6 mm. The stiffness per unit length of the model is set to 3200 N.m -2 and the length threshold to add a new segment to the tissue spline is set to L thres = 0.1 mm. 
The model is updated between two volume acquisitions using only the odometry of the Viper s850 to defined the position of the simulated needle base. • Method 3: the same process as method 2 is used, except that the model is updated with the method presented in section 3.5.2 to take into account the motions of the tissues. Similarly to the experiments performed in previous section with camera feedback, we use the positions of several points separated by 5 mm from each other on the needle body as inputs for the UKF. The measurement noise covariance matrix R in the UKF is set with diagonal elements equal to (2) 2 mm 2 and the process noise covariance matrix Q with diagonal elements equal to (3) 2 mm 2 . TISSUE UPDATE VALIDATION Note that the needle model is defined in the frame of the robot, since the position of the simulated base is set according to the robot odometry. A registration between the US volume and the robot is thus necessary for the update method in order to convert the position of the needle body tracked in the volume to the robot frame. Registration: We describe here the registration method that we use to find the correspondence between a voxel in a 3D US volume and its real location in the needle manipulator frame. The US volumes are first scaled to real Cartesian space by using the size of a voxel, which is known from the characteristics of the probe and the process used to convert pre-scan data into post-scan data (as explained in section 3.2.2.2). In order to be in accordance with our objective of a reduced registration time and complexity, we use a fast registration method that can be used directly at the beginning the insertion procedure. After an initial insertion step, the part of the needle that is visible in the acquired US volume is manually segmented, giving both tip position and orientation. The pose of the volume is then computed by matching the measured tip position and orientation to the position and orientation obtained from the needle manipulator odometry and the needle model. The manual needle segmentation is also used for the initialization of the needle tracking algorithm. Note, however, that this method provides a registration accuracy that depends on the quality of the manual needle segmentation. Experimental scenario: We perform 10 straight insertions of 10 cm at different locations in the gelatin phantom with an insertion speed of 5 mm.s -1 . The needle is first inserted 1 cm in the phantom and manually segmented in the US volume to initialize the different tracking algorithms and register the probe pose. The insertion is then started at the same time as the motion applied to the phantom. For each experiment, a similar 1D lateral motion is applied to the container such that the phantom always stays in contact with both the table and the US probe. The motion follows a profile m(t) similar to a breathing motion [HMB + 10], expressed as m(t) = b cos 4 ( π T t - π 2 ), (3.97) where t is the time, b is the magnitude of the motion, set to 1 cm, and T is the period of the motion, set to 5 s. The insertion is performed for a duration of 18 s, roughly corresponding to 4 periods of the motion and the acquisition of 29 volumes. Results on tip tracking: An example of the positions of the tip tracked during one experiment using the different methods is shown in The insertion is performed along the y axis of the probe while the lateral motion of the tissues is applied along the z axis. 
Results on tip tracking: An example of the tip positions tracked during one experiment with the different methods is shown in Fig. 3.31, along with the ground truth acquired by manual segmentation of the needle tip in the volumes (in this figure, the insertion is performed along the y axis of the probe, the lateral motion of the tissues is applied along the z axis, and the tracking is initialized without a needle model in blue, from a model that does not take the tissue motions into account in green, and from a model updated with the tissue motion estimation algorithm of section 3.5.2 in red). Figure 3.32 shows the result of the needle tracking algorithms in two orthogonal cross sections of the volume near the end of the insertion.

We can observe that tracking the needle without any a priori information on the needle motion occurring between two volume acquisitions (method 1) leads to a failure of the tracking from the very beginning of the insertion. The tracking gets stuck on the artifact appearing at the surface of the gelatin, due to the fast lateral velocity of the tissues as well as the low visibility of the needle at the beginning of the insertion. We can see that the tracking is able to follow the motion of the tissues (z axis in Fig. 3.31), since the artifact moves with the phantom. However, the length of the inserted part of the needle is not provided to the algorithm, so that the tracking stays at the surface without taking into account the insertion motion, as can be seen in Fig. 3.32.

On the contrary, using the needle model to initialize the tracking allows taking into account the length of the needle that is currently inserted, such that both methods 2 and 3 provide a good estimation of the tip position along the y axis. However, when the model is only updated at its base (method 2), the tracking mostly fails due to the wrong lateral location of the initialization, which does not take into account the motions of the tissues. The tracking is rather inconsistent in this case. Sometimes it recovers the correct location of the needle when the needle is near the model, as can be seen in Fig. 3.31 from volume 13 to 16 (green curve); and when the tissues move far from their initial position, the tracking is initialized near other structures or artifacts, such that it fails to find the needle, as is the case in Fig. 3.32. On the other hand, updating the model according to the tracked position of the needle (method 3) allows taking into account the motions of the tissues. This way, the prediction of the needle localization in the following volume is of good quality and the tracking algorithm can accurately find the position of the needle. Overall, the combination of the tracking algorithm with the updated model allows a good tracking of the needle tip with a mean accuracy of 3.1 ± 2.5 mm over the volume sequences of all the insertions, which may be sufficient for commonly performed needle insertions.

Results on tissue motion estimation: As a final consideration, let us have a look at the estimated position of the tissues in the updated model, which is provided in Fig. 3.33. We can see that the overall motions of the tissues are well estimated by the algorithm. However, we can observe a delay between the estimation and the measures. Although a part of this delay may be introduced by the filtering effect of the UKF, it is most probably due to the delay introduced by the acquisition time required to obtain the final 3D US volume. This issue could be solved by taking into account the known time required by the system to reconstruct the volume from the data acquired by the US transducer. We can also observe a slight drift of the estimation during the insertion.
This can be due to the accumulation of modeling errors that can arise because of some local tearing of the gelatin when the phantom moves far from its initial position. It can also come from the fast registration method that we used in these experiments, which can introduce a difference between the real position of the needle and the measured position reconstructed from the tracking in the volume. These observations confirm the fact that has already been discussed in previous section, namely that updating the position of the tissues in the model should only be used as a way to get a good representation of the needle by the model and not an accurate measure of the tissue position. Conclusions: We provided a method for improving the robustness of the tracking of a flexible needle in 3D US volumes when the tissues are subject to lateral motions. Using a mechanics-based model allows a prediction of the motions of the needle tip and shaft due to the motions of the needle manipulator between two volume acquisitions. The prediction can then be used to provide a good initialization of an iterative needle tracking algorithm. Finally, updating the model thanks to the result of the tracking allows taking into account the motions of the tissues and improves the modeling accuracy and the subsequent prediction. The quality of the prediction of the needle location could even be further improved by using a fast information feedback to update the modeled position of the tissues between consecutive volume acquisitions. This could be done using a force sensor or an electromagnetic tracker, as we have demonstrated in section 3.6.1. However, an imaging modality remains a necessity to achieve the steering of the needle toward a target. Conclusion In this chapter, we started by a brief comparison of the imaging modalities traditionally used to perform needle insertions. From this we chose to focus on the ultrasound (US) modality and we presented the general principles of US imaging as well as the way to reconstruct 2D images or 3D volumes that can then be exploited. We also covered the case of several artifacts that are specific to the presence of a needle in the field of view of the US probe. A review of current detection and tracking methods used to localize a needle from 2D or 3D US feedback was then provided. We proposed a first contribution in this field consisting in an iterative algorithm that exploits the artifacts observed around a needle to accurately find the position of its whole body in a 3D US volume. The performances of the algorithm were illustrated through an experimental validation and a comparison to another state-of-the-art algorithm. Then we considered the case of a change of position of the tissues due to motions of the patient. We presented the concepts of Bayesian filtering and proposed an algorithm based on an unscented Kalman filter to update the state of the interaction model that we developed in chapter 2 using the different measures available on the needle. We have shown through various experimental scenarios that the update method could be used with several kinds of information feedback on the needle, such as force feedback, electromagnetic position feedback or visual position feedback, in order to take into account the lateral motions of the tissues. We then proposed to fuse our two contributions into one global method to mutually improve both tracking performances in 3D US and insertion modeling accuracy. 
Good localization of the needle and accurate modeling of the insertion are two important keys to provide an image-guided robotic assistance during an insertion procedure. Now that we have addressed these two points and have proposed a contribution for both of them, we will focus in chapter 4 on the design of a control framework for robotic needle insertion under visual guidance. Chapter 4 Needle steering In this chapter we address the issue of steering a flexible needle inserted in soft tissues. The goal of a needle insertion procedure is to accurately reach a targeted region embedded in the body with the tip of the needle. Achieving this goal is not always easy for clinicians due to the complex behavior exhibited by a thin flexible needle interacting with soft tissues. Robot assisted needle insertion can then be of great help to improve the accuracy of the operation and to reduce the necessity of repeated insertions. In chapter 2 we presented different ways of modeling the insertion of a flexible needle in soft tissues. In particular we have seen that kinematic and mechanics-based models offer a reasonable computational complexity that makes them suitable for real-time processing and control of a robotic system. In the following, we first provide in section 4.1 a review of current techniques used to steer different kinds of needles using a robotic system. Then we present different methods in section 4.2 used to define the trajectory that the needle tip must follow to reach a target and avoid obstacles. In section 4.3 we propose a new contribution consisting in a generic needle steering framework for closed-loop control of a robotic manipulator holding a flexible needle. This framework is based on the task function framework and can be adapted to steer different kinds of needles. It is formulated such that different kinds of sensing modalities can be used to provide a feedback on the needle and the target. We finally describe different experimental scenarios in section 4.4 that we use to assess the performances of our steering framework. Parts of the work presented in this chapter on the steering framework were published in two articles presented in international conferences [CKB16a] [CKB16b]. Steering strategies In this section we present a review of current techniques used to control the trajectory of the tip of a needle inserted in soft tissues. The techniques used to reach a target in soft tissues while avoiding other sensitive regions can be gathered into three main families. • Tip-based steering methods use a needle with an asymmetric design of the tip to create a deflection of the tip trajectory when the needle is inserted into the tissues without any other lateral motion of its base. • Base manipulation methods on the contrary use lateral translation and rotation motions of the needle base during the insertion to modify the trajectory of the needle tip. • Lastly, tissue manipulation is a special case in the sense that no needle steering is actually performed. Instead it uses deformations of the surrounding tissues to modify the position of the target and obstacles. We present each steering family in further detail in the following. Tip-based needle steering As described in the section on kinematic modeling 2.1, it can be observed that the presence of an asymmetry of the needle tip geometry, such as a bevel, leads to a deviation of the needle trajectory from a straight path, as illustrated in Fig. 4.1a. 
Considering this effect as a drawback, clinicians usually rotate the needle around its axis during the insertion to cancel the effect of the normal component of the reaction force created at the needle tip. This allows the trajectory of the needle tip to follow a straight line. However many research works have been conducted over the last two decades to use this effect as an advantage to steer the needle tip, leading to the creation of the tip-based steering strategies [APM07] [START_REF] Van De Berg | Design choices in needle steering-a review[END_REF]. Tip-based needle steering consists in controlling the orientation of the lateral component of the reaction force at the tip to face a desired direction. The behavior of the needle tip can usually be accurately modeled using kinematic models [WIKC + 06]. Needles used for tip-based control typically have a small diameter and are made of super-elastic alloys, such as Nitinol, to decrease the needle rigidity and to increase the influence of the tip force on the needle trajectory. This allows getting closer to the assumption that the needle is very flexible with respect to the surrounding tissues, which is required for the validity of kinematic models (see section 2.1). The control of the insertion of such needles is often limited to the insertion of the needle along its base axis and the orientation of the needle around this axis. Different control strategies have been developed to steer the needle tip using only these two degrees of freedom (DOF). A constant ratio between 4.1. STEERING STRATEGIES the rotation and insertion velocities of the needle can be used to obtain a helical trajectory [HAC + 09]. A low ratio leads to a circular trajectory with curvature corresponding to the natural curvature of the needle insertion. A high ratio leads to an almost straight trajectory. Duty-cycling: The duty cycling control strategy, first tested in [START_REF] Engh | Toward effective needle steering in brain tissue[END_REF] and later formalized in [START_REF] Minhas | Modeling of needle steering via duty-cycled spinning[END_REF], consists in using alternatively only the two extreme cases of the helical trajectories: pure insertion of the needle (maximal curvature of the trajectory) and insertion with fast rotation (straight trajectory). The resulting trajectory of the needle tip can be approximated by an arc of a circle with an effective curvature K ef f that can be tuned between 0 and the maximal curvature K nat . It has been shown that the relation between K ef f and the duty cycle ratio DC between the length of the phases could be approximated by a linear function [START_REF] Minhas | Modeling of needle steering via duty-cycled spinning[END_REF]: DC = L rot L rot + L ins , (4.1) K ef f = (1 -DC)K nat , (4.2) where L ins and L rot are the insertion lengths corresponding respectively to the pure insertion phase and the insertion phase with fast rotation. Similarly, in the case of a constant insertion velocity, the duty-cycle DC can be computed from the duration of each phase instead of their insertion length. This method has first been used only in 2D, using an integer number of full 2π rotations during the rotation phase [START_REF] Minhas | Modeling of needle steering via duty-cycled spinning[END_REF]. 
It was later extended to 3D by adding an additional angle of rotation before the insertion phase to orient the curve toward the desired direction [START_REF] Wood | Algorithm for three-dimensional control of needle steering via duty-cycled rotation[END_REF]. A 3D kinematic formulation was also proposed by Krupa [Kru14] and Patil et al. [START_REF] Patil | Needle steering in 3-d via rapid replanning[END_REF]. Duty-cycling control has also been extensively used in its 2D or 3D versions over the past decade, associated with various needle insertion systems, needle tracking algorithms and methods to define the trajectory of the tip (see for example [vdBPA + 11] [BAP + 11] [PBWA14] [CKN15] [MPT16] ). Trajectory planning will be covered in next section 4.2. Duty-cycling control presents some drawbacks that have to be addressed. First the natural curvature K nat must be known to compute the duty-cycle DC. This parameter is difficult to determine in practice and may even vary with the insertion depth, such that an online estimation can be required [START_REF] Moreira | Needle steering in biological tissue using ultrasound-based online curvature estimation[END_REF]. It may also not be possible to continuously rotate the needle along its axis. This is for example the case when using cabled sensors attached to the needle, such as electromagnetic trackers or optic fibers embedded in the needle. The duty cycling control has to be adapted in this case to alternate the direction of the rotation around the needle shaft [START_REF] Majewicz | Design and evaluation of duty-cycling steering algorithms for robotically-driven steerable needles[END_REF]. The effect of the bevel angle on the needle insertion has been studied in artificial [START_REF] Webster | Design considerations for robotic needle steering[END_REF], ex-vivo [START_REF] Majewicz | Evaluation of robotic needle steering in ex vivo tissue[END_REF] or in-vivo [MMVV + 12] tissues. It has been observed that it has a direct effect on the amount of deflection of the needle tip from a straight path. However the curvature of the tip trajectory is very low in biological tissues, which can limit the interest of using the duty-cycling control in clinical practice. The natural curvature of the needle can be increased by using a needle with a prebent [AGL + 16] or precurved tip [VDBDJVG + 17]. However this is not suitable for duty-cycling control since it also increases the damage done to the tissues during the rotation of the needle. Special design: Particular mechanical designs of the needle have been proposed to control the force created at the needle tip. Swaney et al. [START_REF] Swaney | Webster. A flexure-based steerable needle: High curvature with reduced tissue damage[END_REF] designed a specific flexure based needle tip to offer the high curvature of a prebent-tip needle during insertion while keeping the reduced tissue damage of a beveled-tip needle during rotations. Active tips were also designed to allow a modification of the lateral force intensity and orientation without using rotation of the needle around its axis. Burrows et al. [START_REF] Burrows | Smooth online path planning for needle steering with non-linear constraints[END_REF] use a needle made of multiple segments that can slide along each other, thus modifying the shape of the tip of the needle. Shahriari et al. [SRvdB + 16] use a tendon-actuated needle tip with 2 DOF, which acts as a pre-bent tip with a variable tip angle and orientation. 
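Before moving on, the duty-cycling relations (4.1) and (4.2) can be made concrete with a small sketch computing the duty cycle needed to obtain a desired effective curvature and the corresponding phase durations for a constant insertion velocity; the numerical values below are arbitrary illustrations, not parameters taken from the cited works.

```python
def duty_cycle_for_curvature(k_eff, k_nat):
    """Duty cycle DC such that K_eff = (1 - DC) * K_nat, Eq. (4.2)."""
    if not 0.0 <= k_eff <= k_nat:
        raise ValueError("desired curvature must lie between 0 and the natural curvature")
    return 1.0 - k_eff / k_nat

def phase_durations(dc, cycle_duration):
    """Split one duty-cycling period into a spinning phase and a pure-insertion phase.

    With a constant insertion velocity, DC = T_rot / (T_rot + T_ins) (Eq. 4.1),
    so T_rot = DC * cycle_duration and T_ins = (1 - DC) * cycle_duration.
    """
    t_rot = dc * cycle_duration
    t_ins = cycle_duration - t_rot
    return t_rot, t_ins

# Example: natural curvature 2.5 m^-1, desired effective curvature 1.0 m^-1, 1 s cycles.
dc = duty_cycle_for_curvature(k_eff=1.0, k_nat=2.5)      # 0.6
t_rot, t_ins = phase_durations(dc, cycle_duration=1.0)   # 0.6 s spinning, 0.4 s pure insertion
```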
The main drawbacks of tip-based steering are that the tip trajectory can only be modified by inserting the needle and that the amplitude of the obtained lateral motions is relatively small in real clinical conditions. Although special designs have been proposed to offer improved steering capabilities, these needles are still unsuitable for a fast and low cost integration into clinical practice. However, other steering methods can be used to steer traditional needles, as we will see in the following. Needle steering using base manipulation Base manipulation consists in controlling the needle tip trajectory using an adequate control of the 6 degrees of freedom (DOF) of the needle base. In the case of a symmetric tip needle, changing the trajectory of the needle tip from a straight path requires bending the needle and pushing laterally on the tissues, as illustrated in Fig. 4.1b. This is the natural way clinicians use to steer a needle when holding it by its base. Pioneer work on robotic control of a needle attached by its base to a robotic manipulator was performed by DiMaio et al. [START_REF] Dimaio | Needle steering and motion planning in soft tissues[END_REF]. The flexibility of the needle and its interaction with soft tissues was modeled using 2D finite element modeling (FEM) and was used to predict the motion of the needle tip resulting from a given needle base motion. The model was used to compute the trajectory of the needle base that would result in the desired tip trajectory. Due to the computational complexity of the FEM, only preplanning of the needle trajectory was performed and the actual insertion was performed in open-loop control. Closed-loop needle base manipulation was performed under fluoroscopic guidance by Glozman and Shoham [START_REF] Glozman | Image-guided robotic flexible needle steering[END_REF] and later under ultrasound guidance by Neubach and Shoham [START_REF] Neubach | Ultrasound-guided robot for flexible needle steering[END_REF]. The 2D virtual springs model was used in both cases to perform a pre-planning of the needle trajectory that also minimizes the lateral efforts exerted on the tissues. Additionally, this mechanics-based model enabled real-time performance and was used in the closed-loop control scheme to ensure that the real needle tip follows the planned trajectory. Despite being among the first work on robotic needle steering, base manipulation has been the subject of little research these past years compared to the amount of work on tip steerable needles. This can mainly be explained by the fact that bending the needle and pushing on the tissues to control the lateral motion of the tip can potentially induce more tissue damage than only inserting the needle. The efforts required to induce significant lateral tip motion also rapidly increase as the needle is inserted deeper into the tissues. This can limit the use of base manipulation to superficial targets. However it can also be noted that the 2 DOF used in tip-based control (translation and rotation along and around the needle axis) can also be controlled using base manipulation. Therefore it is also possible to use a base manipulation framework to perform tip-based steering of a needle with an asymmetric tip, as illustrated in Fig. 4.1c. Using only tip-based control, the needle base can only translate in one insertion direction and it is not possible to compensate for any lateral motions of the tissues that may arise from patient motion. 
On the contrary, using all 6 DOF of the needle base offers the advantage of keeping additional DOF if necessary. Therefore, due to its ability to handle both symmetric and asymmetric tips, base manipulation in the general sense is the steering method that we choose to explore in the following. Tissue manipulation Tissue manipulation consists in applying deformations on the internal parts of the tissues by moving one [THA + 09] or multiple points [START_REF] Mallapragada | Robotassisted real-time tumor manipulation for breast biopsy[END_REF][PVdBA11] of the surface of the tissues. This kind of control requires an accurate finite element modeling (FEM) model of the tissues, which is difficult to obtain in practice due to parameter estimation. The computational load of FEM is also an obstacle for real-time use, limiting it to pre-planning of the insertion procedure, which further enhances the need for an accurate modeling. This technique has only been used so far to align the target with a large rigid needle and no work has been conducted to explore the modification of the trajectory of a flexible needle. In addition, it can be observed that the motion of the tissue surface has a little influence on the motion of deep anatomical structures: tissue manipulation can then only be used to move superficial targets. Shallow targets are not the only kind of targets that we want to cover in our work, therefore we do not consider tissue manipulation in the following. Needle tip trajectory In section 4.1, we presented different methods to control the motion of the tip of a needle being inserted in soft tissues using a robotic manipulator. Once a type of needle and an associated control scheme has been chosen to control the needle tip, a strategy needs to be chosen to define the motion to apply to the needle tip. Two approaches are generally used, which are path planning and reactive control. The path planning approach uses some predictions of the behavior of the system and tries to find the best sequence of motions that needs to be applied to fulfill the general objective. On the contrary, the reactive control approach only relies on the current state of the system and intra-operative measures to compute the next motion to apply. Path planning Path planning is used to define the entire trajectory that needs to be followed by the needle tip to reach the target. This approach requires a model of the needle insertion process to predict the effect of the control inputs on the tip trajectory. It is mostly used in tip-based steering, for which the unicycle model (see section 2.1) can be used because of its simplicity and computational efficiency. Planning the natural trajectory: Duindam et al. [DXA + 10] planned the trajectory of the needle while considering a stop-and-turn strategy, thus alternating between rotation-only phases and insertion-only phases. Three insertion steps were considered, leading to a tip trajectory following three successive arcs with constant curvature. The best duration of each phase, i.e. the length of each arc, was computed such that the generated trajectory reached the target. Hauser et al. [HAC + 09] exploited the helical shape of the paths obtained when applying constant insertion and rotation velocities to the needle. The best velocities were computed by selecting the helical trajectory that allowed the final tip position to be the closest to the target. 
A model predictive control scheme was used, in which the best selected velocities are applied for a short amount of time and the procedure is repeated until the target has been reached. Rapidly-exploring random tree (RRT): Among the many existing path planning algorithms, the RRT algorithm [START_REF] Lavalle | Randomized kinodynamic planning[END_REF] has been widely used in needle steering applications. This probabilistic algorithm consists in randomly choosing multiple possible control inputs and generating the corresponding output trajectories. The best trajectory is then chosen and the corresponding control inputs are applied to the real system. The RRT can be used in many ways, depending on the underlying model chosen to relate the control inputs to the output tip trajectory. The first use of RRT for 3D flexible needle insertion planning was done by Xu et al. [START_REF] Xu | Motion planning for steerable needles in 3d environments with obstacles using rapidly-exploring random trees and backchaining[END_REF]. The kinematic model of the needle with constant curvature was used to predict the motion of the needle tip for given insertion and rotation velocities. Due to the constant curvature constraint the control inputs were limited to a stop-and-turn strategy. However, a lot of trajectories had to be generated before finding a good one: the algorithm was then slow and could only be used for pre-operative planning of the insertion. The introduction of the duty-cycling control allowed dropping the constant curvature assumption and consider the possibility of controlling the effective curvature of the tip trajectory. This simplified the planning and online intra-operative replanning could be achieved in 2D [BAP + 11] and 3D [PA10] [START_REF] Bernardes | 3d robust online motion planning for steerable needles in dynamic workspaces using duty-cycled rotation[END_REF]. The RRT was also used with 2D finite element model-123 CHAPTER 4. NEEDLE STEERING ing instead of kinematic modeling to provide a more accurate offline preoperative planning that takes into account the tissue deformations due to the needle insertion [START_REF] Patil | Motion planning under uncertainty in highly deformable environments[END_REF]. Planning under uncertainties: Since planning methods always rely on the predictions given by a model of the needle, inaccuracy of the model can diminish the performances of the planning if they are not taken into account. Stochastic planning methods have been proposed to consider uncertainties on the motion of the tip. Park et al. [PKZ + 05] used a path-of-probability approach, where a stochastic version of the kinematic model is used to compute the probability density function of the final position of the needle tip. This is then used to generate an set of tip trajectories that can reach the target. Alterowitz et al. [START_REF] Alterovitz | The stochastic motion roadmap: A sampling framework for planning with markov motion uncertainty[END_REF] used a stochastic motion roadmap to model the probability to obtain a given 2D pose of the needle tip starting from another tip pose. The optimal sequence of control inputs was then computed from the map to minimize the probability to hit an obstacle and to maximize the probability to reach the target. Fuzzy logic was also proposed as a way to cope with incertainty in the control inputs [START_REF] Lee | A probability-based path planning method using fuzzy logic[END_REF]. 
Even when modeling the uncertainty, unpredicted tissue inhomogeneities or tissue motions can greatly modify the trajectory of the tip or the geometry of the environment. Using pre-operative planning usually requires the use of reactive control during the real procedure to ensure that the planned trajectory can be accurately followed. Planning can be used to not only plan a feasible trajectory but also to design an optimal controller that can take into account the uncertainties on the model and the intra-operative measures during the procedure. In [vdBPA + 11] a linear-quadratic Gaussian (LQG) controller was designed to robustly follow the trajectory that was pre-planned using RRT. The controller could take into account the current state uncertainty to minimize the probability to hit an obstacle. Sun and Altrerovitz [START_REF] Sun | Motion planning under uncertainty for medical needle steering using optimization in belief space[END_REF] proposed to take into account the sensor placement directly during the design of the planning and LQG controller, in order to minimize the uncertainty on the tip location along the planned trajectory. This way, obstacles could be avoided without having to pass far away from them to ensure avoidance. Online re-planning: Online re-planning of the trajectory can also be used instead of considering uncertainties in a model. By regularly computing a new trajectory that takes into account the current state of the insertion, the control can directly compensate for modeling uncertainties [BAPB13] [PBWA14]. This offers the good prediction capabilities of the planning approach while maintaining a good reactivity to environment changes, which is one of the motivations behind the research on fast planning algorithms that could work in real-time. However, online re-planning is only possible when using simplified models like kinematic models. In the case of base manipulation control, the whole shape of the needle needs to be modeled, limiting the use of such model to pre-operative planning. Reactive control can then be used during the insertion to adapt to the changes in the environment. Reactive control Reactive control consists in using only a feedback on the current state of the system to compute the control inputs to apply. This kind of control usually uses inverse kinematics to compute the control inputs to obtain a desired output motion. If the approach does not rely on an accurate modeling of the system, it uses closed-loop control to compensate for modeling errors. Reactive control with tip-based control: In the case of beveled-tip needles, sliding mode control can be used to control the bevel orientation during the insertion such that the bevel cutting edge is always directed toward the target. The advantage of this method is that it does not rely on the parameters of an interaction model with the tissues. Rucker et al. [RDG + 13] demonstrated that an arbitrary accuracy could be reached with this method by choosing an appropriate ratio between insertion and rotation velocities. 
Sliding mode control have proven its efficiency with many feedback modalities, such as electromagnetic (EM) tracker [RDG + 13], fiber Bragg grating (FBG) sensors [START_REF] Abayazid | 3d flexible needle steering in soft-tissue phantoms using fiber bragg grating sensors[END_REF], ultrasound (US) imaging [START_REF] Abayazid | Integrating deflection models and image feedback for realtime flexible needle steering[END_REF][FRS + 16] or computerized tomography (CT)-scan fused with EM tracking [SHvK + 17]. For this reason, we will include it in our control framework in the following section 4.3. Reactive control can also be used to intra-operatively compensate for deviations from a trajectory that has been planned pre-operatively by another planning algorithm. Sliding mode control can for example be adapted to follow keypoints along the planned trajectory instead of directly pointing toward the target [AVP + 14]. The linear-quadratic Gaussian (LQG) control framework can also be used to take into account modeling errors and measure noise during the insertion [START_REF] Kallem | Image guidance of flexible tip-steerable needles[END_REF]. Since reactive control is expected to work in real-time, kinematic models are most often used. However such models are only applicable for needle with asymmetric tips whereas base manipulation must be used in the case of a symmetric tip needle. Reactive control with base manipulation: The first robotic needle insertion procedure using base manipulation [START_REF] Dimaio | Needle steering and motion planning in soft tissues[END_REF] proposed to use vector fields to define the trajectory that needed to be followed by the needle tip. The needle and tissues were modeled using 2D finite element modeling (FEM) and the vector field was attached to the tissue model, such that tissue deformations also induced a modification of the vector field. An attractive vector field was placed around the target and repulsive ones were placed around obstacles, defining in each point of the space the desired instantaneous velocity that the needle tip should follow. Inverse kinematics was computed from the current state of the model to find the local base motion that generates the desired tip motion. This was only performed in simulation and then applied in open-loop to a real needle due to computational complexity of the FEM. Mechanics-based models were also used with closed-loop feedback using fluoroscopic [START_REF] Glozman | Image-guided robotic flexible needle steering[END_REF] or US [START_REF] Neubach | Ultrasound-guided robot for flexible needle steering[END_REF] imaging, allowing the intra-operative steering of a flexible needle toward a target. Reactive control using visual feedback: Visual servoing is a kind of reactive control based on visual feedback. The method computes the control inputs required to obtain a desired variations of some visual features defined directly in the acquired images. In [START_REF] Krupa | A new duty-cycling approach for 3d needle steering allowing the use of the classical visual servoing framework for targeting tasks[END_REF] and [START_REF] Chatelain | 3d ultrasoundguided robotic steering of a flexible needle via visual servoing[END_REF] it was used to control the needle trajectory using 3D US imaging and the duty-cycling method. This approach offers a great accuracy and robustness to modeling errors due to the fact that the control is directly defined in the image. 
It is also quite flexible since many control behaviors can be obtained depending on the design of the visual features that are chosen. For these reasons we choose visual servoing as a basis for our needle steering framework and we will describe its principles in more detail in the following section 4.3.1. Needle steering framework This section presents our contribution to the field of needle steering in soft tissues. We propose a generic control framework that can be adapted to control the different degrees of freedom of a robotic system holding any kind of needle shaped tool with symmetric or asymmetric tip geometry. The proposed approach is based on visual servoing [START_REF] Espiau | A new approach to visual servoing in robotics[END_REF], which consists in controlling the system to obtain some desired variations of several features defined directly in an image, such as for example the alignment of the needle with a target in an ultrasound image. In order to offer a framework that can be adapted to many kinds of information feedback and that is not limited to visual feedback, we propose a formulation that uses the task function framework [START_REF] Samson | Robot Control: The Task Function Approach[END_REF], which is the core principle used in visual servoing. This way a single control law can be used to integrate the information on the needle and the target provided by several kinds of modalities, such as electromagnetic tracking, force feedback, medical imaging or fiber Bragg grating shape sensors. In the following we first present the fundamentals of the task function 4.3. NEEDLE STEERING FRAMEWORK framework in section 4.3.1 and the stability aspects of the control in section 4.3.2. We then describe in section 4.3.3 how we apply this framework to the case of needle steering by using the mechanics-based needle models that we proposed in section 2.4. Finally we present in section 4.3.4 the design of several task functions that can be used in the framework to steer the needle tip toward a target while maintaining a low amount of deformations of the tissues. Experimental validation of the framework in the case of visual feedback will be described in section 4.4. Task function framework A classical method used to control robotic systems is the task function framework [START_REF] Samson | Robot Control: The Task Function Approach[END_REF], that we describe in the following. General formulation: We consider a generic control vector v ∈ R m containing the m different input velocities that are available to control the system. This vector can typically contain the velocity of each joint of a robotic arm or the six components of the velocity screw vector of an end-effector. We note r ∈ R m the position vector associated to v, i.e. the position of the joints or the pose of the end-effector. In the task function framework, a task vector e ∈ R n is defined and contains n scalar functions that we want to control. In image-based visual servoing these tasks usually correspond to some geometrical features extracted from the images. At each instant the variations of the tasks can be expressed as ė(t, v) = de dt = ∂e ∂t + ∂e ∂r v. (4.3) The term ∂e ∂t represents the variations over time of the tasks that are not due to the control inputs. The tasks are linked to the control inputs by the Jacobian matrix J ∈ R n×m defined as J = ∂e ∂r . (4.4) Let us define ėd the desired value for the variation of the task functions. In all the following developments, the subscript . 
d will be used to describe the desired value of a certain quantity. The best control vector that allows fulfilling the tasks can be computed as

v = J⁺ (ė_d − ∂e/∂t),   (4.5)

where ⁺ stands for the Moore-Penrose pseudo-inverse operator [START_REF] Penrose | A generalized inverse for matrices[END_REF]. The variation ∂e/∂t is usually not directly available, and an estimation of it has to be used in place of the true value to compute v. For simplicity, in the following we consider the case where ∂e/∂t = 0, which is usually associated with a static case where only the control inputs v have an action on the environment. This leads to the control law

v = J⁺ ė_d,   (4.6)

which is the main control law that we will use in the experiments.

Tasks and inputs priorities: When n < m, there are more degrees of freedom (DOF) than the number of tasks to fulfill. If the tasks are independent, i.e. the rank of J is equal to the number n of tasks, then there are infinitely many solutions that exactly fulfill all the tasks. In this case the Moore-Penrose pseudo-inverse gives the solution with the lowest Euclidean norm. If the components of the input vector are not homogeneous, for example containing both translational and rotational velocities, the Euclidean norm may actually have no physical meaning. A diagonal weighting matrix M ∈ R^{m×m} can be used in this case to give specific weights to the different components:

v = M⁻¹ (J M)⁺ ė_d.   (4.7)

Different methods of Jacobian normalization used to tune the weights of the matrix have been summarized by Khan et al. [START_REF] Khan | Jacobian matrix normalization -a comparison of different approaches in the context of multi-objective optimization of 6-dof haptic devices[END_REF]. When n > m, there are not enough DOF to control the different tasks independently. The same thing happens if the rank of J is lower than n, meaning that some of the tasks are not independent. A diagonal weighting matrix L ∈ R^{n×n} can then be used in these cases to give specific weights to the different tasks depending on their priority. Hence,

v = (L J)⁺ L⁻¹ ė_d.   (4.8)

Both weighting matrices can also be used to deal with dependent tasks in an underdetermined system, leading to the weighted pseudo-inverse [START_REF] Eldén | A weighted pseudoinverse, generalized singular values, and constrained least squares problems[END_REF] expressed as

v = M⁻¹ (L J M)⁺ L⁻¹ ė_d.   (4.9)

Note, however, that the weighted pseudo-inverse only achieves a trade-off between tasks, meaning that even high-priority tasks may not be exactly fulfilled.

Hierarchical stack of tasks: Absolute priority can be given to some tasks using a hierarchical stack of tasks [START_REF] Siciliano | A general framework for managing multiple tasks in highly redundant robotic systems[END_REF]. In that case, each set of tasks with a given priority is added successively to the control output such that it does not disturb the previous tasks with higher priority. This is done by allowing the contribution of low-priority tasks to lie only in the null space of the higher-priority tasks. The control output obtained after adding the contributions of the tasks from priority level 1 to i (1 being the highest priority) is given by

v_i = v_{i−1} + P_{i−1} (J_i P_{i−1})⁺ (ė_{i,d} − J_i v_{i−1}),   (4.10)

where J_i is the Jacobian matrix corresponding to the task vector e_i containing the tasks with priority level i, ė_{i,d} still denotes the desired value of ė_i and P_i is the projector onto the null space of all tasks with priority levels from 1 to i.
The projectors P_i can be computed according to

P_i = I_m − [J_1; … ; J_i]⁺ [J_1; … ; J_i],   (4.11)

where I_m is the m by m identity matrix and [J_1; … ; J_i] denotes the vertical stacking of the Jacobian matrices of all the tasks with priority levels from 1 to i. Alternatively, these projectors can also be computed iteratively using [BB04]

P_0 = I_m,
P_i = P_{i−1} − (J_i P_{i−1})⁺ (J_i P_{i−1}).   (4.12)

For example, using only 2 priority levels, the control law thus becomes

v = J_1⁺ ė_{1,d} + P_1 (J_2 P_1)⁺ (ė_{2,d} − J_2 J_1⁺ ė_{1,d}),   (4.13)

with P_1 = I_m − J_1⁺ J_1. An illustration of the hierarchical stack of tasks using this formulation can be seen in Fig. 4.2a. (In Fig. 4.2, each E_i is the set of control inputs for which the task i is fulfilled, S_i is the input vector obtained using the single task i in the classical formulation (4.6), C is the input vector obtained using both tasks in the classical formulation (4.6), which coincides with the one obtained using the hierarchical formulation (4.13), and R_i is the input vector obtained using the singularity robust formulation (4.16) when the task i is given the highest priority; the contributions due to tasks 1 and 2 are shown with blue and red arrows when task 1 has the highest priority, and with green and yellow arrows when task 2 has the highest priority.)

Singularities: One issue when using task functions is the presence of singularities. Natural singularities may first arise when one of the tasks becomes singular, meaning that the rank of the Jacobian matrix is lower than the number n of tasks. Algorithmic singularities can also arise when tasks with different priorities become dependent, i.e. when J_i P_{i−1} becomes singular even if J_i is not. While the pseudo-inverse is stable exactly at the singularity, it leads to numerical instability around the singularity. This numerical instability is easily illustrated using the singular value decomposition of the matrix:

J = Σ_{i=0}^{min(n,m)} σ_i u_i v_iᵀ,   (4.14)

where the u_i form an orthonormal set of vectors of R^n, the v_i form an orthonormal set of vectors of R^m and the σ_i are the singular values of J. The pseudo-inverse of J is then computed as

J⁺ = Σ_{i=0}^{min(n,m)} τ_i v_i u_iᵀ,   with τ_i = 1/σ_i if σ_i ≠ 0 and τ_i = 0 if σ_i = 0.   (4.15)

The matrix J is singular when at least one of the σ_i is equal to zero. In this case the pseudo-inverse can still be computed, since it sets the value of τ_i to zero instead of inverting the singular value σ_i. However, in practice the matrix is almost never exactly at the singularity because of numerical inaccuracies. Around the singularity, the matrix is ill-conditioned and one of the σ_i⁻¹ becomes very large, leading to very large velocity outputs, which are not desirable in practice. Algorithmic singularities can be avoided by using the singularity robust formulation for the control law [START_REF] Chiaverini | Singularity-robust task-priority redundancy resolution for real-time kinematic control of robot manipulators[END_REF]:

v_i = v_{i−1} + P_{i−1} J_i⁺ ė_{i,d}.   (4.16)

While this method entirely removes algorithmic singularities, it leads to distortions of the low-priority tasks, even when they are almost independent of the higher-priority ones. An illustration of the hierarchical stack of tasks using this formulation can be seen in Fig. 4.2b.
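To make these formulations concrete, the following NumPy sketch implements the classical resolution (4.6) and the two-level hierarchical stack (4.13) with the projector P_1. It is a minimal illustration using dense pseudo-inverses, not the implementation used in this work.

```python
import numpy as np

def task_control(J, e_dot_d):
    """Classical resolution v = J^+ e_dot_d, Eq. (4.6)."""
    return np.linalg.pinv(J) @ e_dot_d

def hierarchical_control(J1, e1_dot_d, J2, e2_dot_d):
    """Two-level hierarchical stack of tasks, Eq. (4.13).

    Task 1 is fulfilled exactly (when feasible); task 2 only acts in the
    null space of task 1 through the projector P1 = I - J1^+ J1.
    """
    m = J1.shape[1]
    J1_pinv = np.linalg.pinv(J1)
    v1 = J1_pinv @ e1_dot_d                 # contribution of the high-priority task
    P1 = np.eye(m) - J1_pinv @ J1           # projector onto the null space of task 1
    v2 = np.linalg.pinv(J2 @ P1) @ (e2_dot_d - J2 @ v1)
    return v1 + P1 @ v2
```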
In order to reduce the effect of singularities on the control outputs, the damped least squares pseudo-inverse [START_REF] Deo | Overview of damped leastsquares methods for inverse kinematics of robot manipulators[END_REF] has been proposed, using a different formulation of (4.15) in which

τ_i = σ_i / (σ_i² + λ²),   (4.17)

where λ is a damping factor. This method requires the tuning of λ, and many methods have been proposed to limit the task distortions far from the singularity while providing stability near the singularity.

Stability
The task function framework is typically used to perform visual servoing, in which a visual sensor is used to provide some visual information on the system. The control of the system is performed by regulating the value of some visual features s ∈ R^n, directly defined in the visual space, toward desired values s* ∈ R^n. A typical approach is to design the task functions to regulate the visual features s toward the desired values with an exponential decay, such that

e = s − s*,   (4.18)
ė_d = −λ_s e,   (4.19)

where λ_s is a positive control gain that tunes the exponential decrease rate of the task vector e. In this particular case the control law (4.6) becomes

v = −λ_s J⁺ e.   (4.20)

In practice the real Jacobian matrix J cannot be known perfectly because it depends on the real state of the system. An approximation Ĵ needs to be provided to the controller, such that the real control law becomes

v = −λ_s Ĵ⁺ e.   (4.21)

Using this control law, it can be shown that the system remains locally asymptotically stable as long as the matrix J Ĵ⁺ verifies [CH06]

J Ĵ⁺ > 0.   (4.22)

Note that this stability condition is also difficult to check, since the real J is not known. However, this condition is usually verified in practice if the approximation Ĵ provided to the controller is not too coarse. In the following we describe how we adapt the task function framework to the problem of needle steering and we present the method that we use to compute the estimation of the Jacobian matrix corresponding to a given task vector.

Task Jacobian matrices
In the two previous sections we have presented the fundamentals of the task function framework. We now present how we adapt it to perform the control of a needle insertion procedure. As was presented in section 4.1.2, we choose to use the base manipulation method to control the 6 degrees of freedom (DOF) of the needle base. The generic input vector v that was defined in the task function framework (see beginning of section 4.3.1) is thus taken as the velocity screw vector v_b ∈ R^6 of the needle base, containing three translational and three rotational velocities.

We consider in this section that a task vector e ∈ R^n has been defined to control the variations of a specific set of features s ∈ R^n related to the needle. These features can for example consist in the position of a point along the needle shaft or the orientation of a beveled tip. The exact design of the different tasks to perform a successful needle insertion will be covered in detail in the following section 4.3.4. In order to use the task function framework, an estimation of the Jacobian matrix associated with each task should regularly be provided during the insertion process. We propose to compute online numerical approximations of these matrices using the mechanics-based models that we defined in section 2.4. We assume that the features s can be computed from the model and we use a finite difference approach to compute the numerical approximations during the insertion.
Let r ∈ SE(3) be the vector representing the current pose of the base of the needle model. We note J_s ∈ R^{n×6} the Jacobian matrix associated with the feature vector s, such that

J_s = ∂s/∂r.   (4.23)

Since the state of the model is computed from r and we assume that s can be computed from the model, s directly depends on r. The finite difference approach consists in computing the value of s for several poses r_i spread along each direction around the current pose r. Due to the non-vector-space nature of SE(3), we use the exponential map Exp_r to compute each r_i according to

r_i = Exp_r(δt v_i),   (4.24)

where δt is a small time step and v_i ∈ R^6 is a unit velocity screw vector corresponding to one DOF of the needle base: v_i represents a translation along one axis of the base frame for i = 1, 2, 3 and a rotation around one axis of the base frame for i = 4, 5, 6. Since each v_i corresponds to only one DOF of the base, each column J_{s,j} of the Jacobian matrix (j = 1, …, 6) can then be approximated using the forward difference approximation

J_{s,j} = (s(r_j) − s(r)) / δt.   (4.26)

For more accuracy, we use instead the second order central difference approximation, although it doubles the number of poses r_i for which s needs to be evaluated:

J_{s,j} = (s(r_j) − s(r_{−j})) / (2δt),   (4.27)

with

r_{−j} = Exp_r(−δt v_j).   (4.28)

Note that s can also lie on a manifold instead of a vector space, for example when evaluating the Jacobian corresponding to the pose of a point along the needle shaft. In this case the logarithm map Log_{s(r)} to the tangent space should be used, which leads to

J_{s,j} = (Log_{s(r)}(s(r_j)) − Log_{s(r)}(s(r_{−j}))) / (2δt).   (4.29)

Note that δt should be chosen as small as possible to obtain a good approximation of the Jacobian, but not too small, to avoid numerical precision issues. Now that we have defined a method to compute numerical approximations of the Jacobian matrices from our numerical needle model, we focus in the following section on the design of different task functions to control the needle insertion, i.e. the definition of s.
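The central-difference estimation (4.27)-(4.29) described above can be sketched as follows for a vector-valued feature. The sketch assumes that the needle model is wrapped in a function features(base_pose) returning the feature vector, that poses are represented by 4×4 homogeneous matrices, and that the perturbation is applied on the right of the current base pose; these choices, as well as the function names, are assumptions of the illustration and not a description of the actual simulation code.

```python
import numpy as np

def exp_se3(xi):
    """Exponential map of a 6D twist xi = (vx, vy, vz, wx, wy, wz) to a 4x4 pose."""
    v, w = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    W = np.array([[0., -w[2], w[1]], [w[2], 0., -w[0]], [-w[1], w[0], 0.]])
    if theta < 1e-12:
        R, V = np.eye(3), np.eye(3)
    else:
        R = np.eye(3) + np.sin(theta) / theta * W + (1 - np.cos(theta)) / theta**2 * W @ W
        V = np.eye(3) + (1 - np.cos(theta)) / theta**2 * W + (theta - np.sin(theta)) / theta**3 * W @ W
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

def numeric_task_jacobian(features, base_pose, dt=1e-4):
    """Central-difference Jacobian J_s = ds/dr of Eq. (4.27), column by column."""
    n = features(base_pose).shape[0]
    J = np.zeros((n, 6))
    for j in range(6):
        xi = np.zeros(6)
        xi[j] = dt                                   # perturbation delta_t along DOF j
        s_plus = features(base_pose @ exp_se3(xi))   # model evaluated at r_j
        s_minus = features(base_pose @ exp_se3(-xi)) # model evaluated at r_-j
        J[:, j] = (s_plus - s_minus) / (2.0 * dt)
    return J
```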
Task design for needle steering
An important issue to control the behavior of the needle manipulator using the task function framework is the design of the different task functions stacked in the task vector e. In this section we consider the specific case of the needle insertion procedure where a needle is held by its base. The general objectives that we want to fulfill are first the control of the needle tip trajectory to reach a target and then the control of the deformations of the needle and the tissues to avoid safety issues. The main point of the task function formulation is then to hide the complexity of the control of the base motions and to translate it into the control of some easily understandable features. Each elementary task requires three components: the definition of the task function (see for example (4.18)), the computation of the Jacobian matrix associated with the task, and the desired variation of the task function (see for example (4.19)). The difficulty is that many different task functions can be designed to fulfill the same general objective. It is also preferable that the dimension of the task vector remains as small as possible. This way we can avoid under-actuation of the system, where the tasks are not exactly fulfilled, and also decrease the probability of incompatibility between different tasks.

In the following we first cover the design of the tasks that can be used for the steering of the needle tip toward a target, and we then focus on the design of the tasks that can be used to avoid tearing the tissues or breaking the needle.

Targeting tasks design
The first and main objective to fulfill in an insertion procedure is that the needle tip reaches a target in the tissues. In this section we propose different task vectors that can be used to control the tip trajectory in order to reach the target, and we present their respective advantages and drawbacks. We first start with general task vectors that give full control over the tip trajectory and then we successively present simpler ones that control only individual aspects of the tip trajectory. The first task vectors can be used with any kind of needle tip, while the last task vector is more specific to beveled tips. We recall here that the subscript .d is used to define the desired value of a quantity.

Tip velocity screw control: A first idea is to directly control the motion of the tip via its velocity screw vector v_t ∈ R^6. We denote r_t ∈ SE(3) the pose of the needle tip and J_tip ∈ R^{6×6} the associated Jacobian matrix relative to the pose of the needle base r. The Jacobian matrix J_tip is then defined such that

J_tip = ∂r_t/∂r,   (4.30)
v_t = J_tip v_b,   (4.31)

where the screw vectors v_t and v_b are defined in their respective frames {F_t} and {F_b}, as illustrated in Fig. 4.4. Note that J_tip is computed using the finite difference method on a manifold defined by (4.29) in section 4.3.3. This Jacobian matrix will be used as a basis for the other tasks in the following, since it entirely describes the relation between the motion of the needle base and the motion of the needle tip. This relation can then directly be inverted according to (4.6) to allow the control of the desired tip motion v_{t,d}, such that

v_b = J_tip⁺ v_{t,d}.   (4.32)

One advantage of this control is that it translates the control problem from the needle base to the needle tip. It can thus allow an external human operator to directly control the desired motion of the tip v_{t,d} without having to consider the complex interaction of the flexible needle with the tissues. However, one drawback is that it constrains all six control inputs, meaning that no additional task can be added. Subsequently, in the case of the design of an autonomous control of the tip trajectory, the design of the desired variations of the tip motion v_{t,d} should take into account its effect on the whole behavior of the needle to avoid unfeasible motions that would damage the tissues. This can thus be as difficult to design as directly controlling the motions of the needle base. However, the tip screw vector can be decomposed as v_t = (v_t, ω_t), with v_t ∈ R^3 the translational velocity vector and ω_t ∈ R^3 the rotational velocity vector. In the following we propose different ways to separate the components of v_t to obtain a task vector of lower dimension that is still adequate for the general targeting task and allows the addition of other task functions.

Tip velocity control: A first solution, better than the one proposed in the previous paragraph, is to limit the task vector to the control of the translational velocities v_t of the tip. The corresponding Jacobian matrix is then

J_vt = [I_3 0_3] J_tip,   (4.33)

where I_3 and 0_3 are the 3 by 3 identity and null matrices, respectively, and J_tip was defined by (4.30).
This relation can also be directly inverted according to (4.6) to allow the control of the desired tip translations v_{t,d}, such that

v_b = ([I_3 0_3] J_tip)⁺ v_{t,d}.   (4.34)

The main advantage of this control is that it allows a direct control of the tip trajectory and keeps some free degrees of freedom (DOF) to add additional task functions. It can also easily be adapted to follow a trajectory defined by a planning algorithm or to give the control of v_{t,d} to an external human operator. In the case of an autonomous controller, a way to reach the target is to fix the desired variations of the task vector such that the tip moves toward the target with a fixed velocity v_tip. Noting p_t = [x_t y_t z_t]ᵀ the position of the target in the needle tip frame {F_t} (see Fig. 4.4), we have

v_{t,d} = v_tip p_t / ‖p_t‖.   (4.35)

Note that this is the main targeting task design that we use in the different experiments to test our framework. In practice p_t can be computed from the tracking of the needle tip and the target using any modality that allows this tracking, such as an imaging modality or an electromagnetic tracker.

One drawback of this task vector when it is used alone is that it does not explicitly ensure that the needle actually aligns with the target. It is thus possible that the tip translates in the direction of the target while the tip axis goes further away from the target, resulting in a motion of the needle shaft that cuts laterally into the tissues. However, since this task vector does not constrain all the DOF of the base, an additional task function can be added to explicitly solve this issue, for example a safety task function that limits the cutting of the tissues (as will be designed later in section 4.3.4.2). Alternatively, another targeting task vector can also be designed to directly ensure that the needle aligns with the target. This is what we propose to explore in the following.
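A minimal sketch of the tip-velocity targeting task (4.33)-(4.35) is given below, assuming that the 6×6 tip Jacobian J_tip provided by the model and the target position p_t expressed in the tip frame are available; the function name, the default speed and the use of an undamped pseudo-inverse are illustrative choices.

```python
import numpy as np

def tip_velocity_targeting(J_tip, p_t, v_tip=0.005):
    """Base velocity screw realizing the targeting task of Eqs. (4.34)-(4.35).

    J_tip : 6x6 Jacobian of the tip pose w.r.t. the base pose (Eq. 4.30)
    p_t   : 3D position of the target expressed in the tip frame {F_t}
    v_tip : desired tip speed toward the target (m/s)
    """
    # Desired tip translational velocity, directed toward the target (Eq. 4.35).
    v_t_d = v_tip * p_t / np.linalg.norm(p_t)
    # Jacobian restricted to the tip translational velocities (Eq. 4.33).
    J_vt = np.hstack((np.eye(3), np.zeros((3, 3)))) @ J_tip
    # Base velocity screw (Eq. 4.34); the remaining DOF stay free for other tasks.
    return np.linalg.pinv(J_vt) @ v_t_d
```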
The desired insertion velocity v_t,z,d can then be set to a positive constant v_tip (that was defined by (4.35)) during the insertion and set to zero once the target has been reached, such that

v_t,z,d = v_tip if z_t > 0, and v_t,z,d = 0 if z_t ≤ 0, (4.37)

where we recall that z_t is the distance from the tip to the target along the needle tip axis.

Needle tip axis orientation: Orienting the needle tip axis toward the target can first be achieved by minimizing the angle θ between the needle tip axis and the axis defined by the tip and the target, as illustrated in Fig. 4.4. This angle can be expressed as

θ = atan2(√(x_t² + y_t²), z_t), (4.38)

where we recall that x_t, y_t and z_t are the components of the position p_t of the target in the tip frame {F_t} depicted in Fig. 4.4. The Jacobian matrix J_θ ∈ R^{1×6} corresponding to this angle can then be derived using the chain rule as

J_θ = ∂θ/∂r = (∂θ/∂p_t) (∂p_t/∂r_t) J_tip, (4.39)

with

∂θ/∂p_t = [ x_t cos²(θ) / (z_t √(x_t² + y_t²))   y_t cos²(θ) / (z_t √(x_t² + y_t²))   -√(x_t² + y_t²) / (x_t² + y_t² + z_t²) ] (4.40)

and

∂p_t/∂r_t =
[ -1   0   0    0    -z_t   y_t ]
[  0  -1   0    z_t   0    -x_t ]
[  0   0  -1   -y_t   x_t   0   ]. (4.41)

Finally we obtain

J_θ = [ -x_t cos²(θ) / (z_t √(x_t² + y_t²))   -y_t cos²(θ) / (z_t √(x_t² + y_t²))   √(x_t² + y_t²) / (x_t² + y_t² + z_t²)   y_t / √(x_t² + y_t²)   -x_t / √(x_t² + y_t²)   0 ] J_tip, (4.42)

where J_tip was defined by (4.30). Aligning the needle axis with the target can then be achieved by regulating the value of θ toward zero, such that

θ̇_d = -λ_θ θ, (4.43)

where λ_θ is a positive control gain that tunes the exponential decrease rate of θ.

Alternatively, the distance d between the needle tip axis and the target can also be used as a feature to minimize in order to reach the target (see Fig. 4.4). This distance can be expressed as

d = √(x_t² + y_t²). (4.44)

The corresponding Jacobian matrix J_d ∈ R^{1×6} can be derived in the same way as

J_d = ∂d/∂r = (∂d/∂p_t) (∂p_t/∂r_t) J_tip, (4.45)

with

∂d/∂p_t = [ x_t/d   y_t/d   0 ], (4.46)

which, using (4.41), gives

J_d = [ -x_t/d   -y_t/d   0   y_t z_t/d   -x_t z_t/d   0 ] J_tip. (4.47)

The distance d can then be regulated toward zero using

ḋ_d = -λ_d d, (4.48)

where λ_d is a positive control gain that tunes the exponential decrease rate of d.

The different task functions can then be stacked together and used in (4.6), which leads to the following two possible control laws

v_b = [J_vt,z ; J_θ]^+ [v_t,z,d ; θ̇_d], (4.49)

or

v_b = [J_vt,z ; J_d]^+ [v_t,z,d ; ḋ_d]. (4.50)

Both control laws allow the automatic steering of the needle tip toward the target, while leaving several free DOF of the needle base to perform other tasks at the same time. Note that these control laws give the same priority to both scalar tasks: different priorities could also be given by using a hierarchical formulation as presented in section 4.3.1. Giving the control of v_t,z along with θ̇_d or ḋ_d to an external human operator would be less intuitive than the direct control of the tip translations defined by (4.34), since the exact trajectory of the tip would be harder to handle due to the non-intuitive effect of θ̇_d or ḋ_d on the tip trajectory. However, it could be possible to give only the control of the insertion speed v_t,z to the operator and let the system handle the alignment with the target. Additionally, in the case of an autonomous controller using (4.37) along with (4.43) or (4.48), an adequate tuning of the insertion velocity v_tip and the gain λ_θ or λ_d is required. If the gain is too low with respect to the insertion velocity, the needle tip does not have enough time to align with the target before it reaches the depth of the target. The gain should thus be chosen large enough to avoid mistargeting when the target is initially misaligned.

Tip-based control task functions: All the previously defined task vectors control in some way one of the lateral translations or rotations of the needle tip. They can thus be used with symmetric or asymmetric tip geometries.
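Before turning to tip-based control, the alignment-based targeting law (4.49) can be illustrated by the following sketch (Python/numpy is an illustrative choice, the names are assumptions, and J_tip is assumed to be provided by the needle-tissue model):

```python
import numpy as np

def align_and_insert(J_tip, p_t, v_tip=1e-3, lambda_theta=1.0):
    """Sketch of the two scalar targeting tasks (4.36)-(4.43) fused by (4.49)."""
    x, y, z = p_t
    rho = np.hypot(x, y)                      # lateral distance to the target
    theta = np.arctan2(rho, z)                # tip-axis / target angle, eq. (4.38)

    # Differential of the target position in the tip frame, eq. (4.41)
    dp_dr = np.hstack((-np.eye(3),
                       np.array([[0., -z,  y],
                                 [ z, 0., -x],
                                 [-y,  x, 0.]])))

    # Task Jacobians, eqs. (4.36) and (4.39)-(4.42)
    J_vtz = J_tip[2:3, :]                     # third row of J_tip, i.e. [0 0 1 0 0 0] J_tip
    dtheta_dp = np.array([x * z, y * z, -rho**2]) / (rho * (rho**2 + z**2))
    J_theta = (dtheta_dp @ dp_dr @ J_tip)[np.newaxis, :]

    # Desired task variations, eqs. (4.37) and (4.43)
    v_tzd = v_tip if z > 0 else 0.0
    dtheta_d = -lambda_theta * theta

    # Stacked resolution, eq. (4.49)
    J = np.vstack((J_vtz, J_theta))
    v_b = np.linalg.pinv(J) @ np.array([v_tzd, dtheta_d])
    return v_b
```

The counterpart based on the lateral distance d, eq. (4.50), is obtained the same way by replacing J_theta and dtheta_d with J_d of (4.47) and the rate (4.48).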
However, the advantage of a needle with an asymmetric tip is that the tip trajectory can also be controlled during a pure insertion using only the orientation of the asymmetry, without direct control of the lateral translations. In the case of a beveled tip, the lateral force created at the tip during the insertion is directly linked to the bevel orientation. Orienting the bevel toward the target can then be achieved by regulating the angle σ, around the needle axis, between the target and the orientation of the bevel cutting edge (y axis), as depicted in Fig. 4.4. This angle can be expressed according to

σ = atan2(y_t, x_t) - π/2. (4.51)

The corresponding Jacobian matrix J_σ ∈ R^{1×6} can be derived as

J_σ = ∂σ/∂r = (∂σ/∂p_t) (∂p_t/∂r_t) J_tip, (4.52)

with

∂σ/∂p_t = [ -y_t/d²   x_t/d²   0 ], (4.53)

which, using (4.41), gives

J_σ = [ y_t/d²   -x_t/d²   0   x_t z_t/d²   y_t z_t/d²   -1 ] J_tip, (4.54)

where d was defined by (4.44). Regulation of σ toward zero can also be achieved using

σ̇_d = -λ_σ σ, (4.55)

where λ_σ is a positive control gain that tunes the exponential decrease rate of σ. A smooth sliding mode control will however be preferred, as was done in [RDG + 13], to rotate the bevel as fast as possible while it is not aligned with the target. This is equivalent to defining a maximum rotation velocity ω_z,max in (4.55) and using a relatively high value of λ_σ, such that

σ̇_d = -ω_z,max sign(σ) if |σ| ≥ ω_z,max/λ_σ,
σ̇_d = -λ_σ σ if |σ| < ω_z,max/λ_σ. (4.56)

Tip-based control can thus be performed by stacking this task function with the insertion velocity task function defined by (4.36) and (4.37) and using them in (4.6), which leads to the following control law

v_b = [J_vt,z ; J_σ]^+ [v_t,z,d ; σ̇_d]. (4.57)

This control law allows the automatic steering of the needle tip toward the target by using the asymmetry of the needle tip and also leaves several free DOF of the needle base to perform other tasks at the same time. The direct control of both v_t,z and σ̇_d can be given to an external human operator to perform the insertion. Alternatively, it could also be possible to give only the control of the insertion speed v_t,z to the operator and let the system automatically orient the bevel toward the target. In the case of an autonomous controller using (4.37) and (4.56), an adequate tuning of the insertion velocity v_tip with respect to the rotation velocity ω_z,max is necessary to ensure that the bevel can be oriented fast enough toward the target before the needle tip reaches the depth of the target. This can usually be achieved by setting a high value of ω_z,max [RDG + 13].

Conclusion: We have presented several task vectors that can be used in a control law to steer the needle tip toward a target. Each task vector uses a different strategy, such as the control of the tip velocity, the alignment of the tip axis with the target, or the orientation of the asymmetry of the needle tip toward the target. Most of the task vectors do not constrain all the available DOF of the needle base, such that they can be used in combination with one another or with other task functions to achieve several objectives at the same time. In particular, the orientation of the bevel of a beveled-tip needle can be used alongside the control of the lateral translations or rotations of the tip in order to increase the targeting performance of the controller. This will be explored in the experiments presented in section 4.4.
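To close this subsection, here is a minimal sketch of the tip-based steering law (4.57), combining the insertion-velocity task with the smooth sliding-mode bevel-orientation law (4.56) (again, Python/numpy and the function name are illustrative assumptions):

```python
import numpy as np

def bevel_steering(J_tip, p_t, v_tip=1e-3, lambda_sigma=10.0, w_z_max=np.deg2rad(60)):
    """Sketch of the tip-based control law (4.57) with the smooth sliding mode (4.56)."""
    x, y, z = p_t
    d2 = x**2 + y**2                                  # squared lateral distance, eq. (4.44)
    sigma = np.arctan2(y, x) - np.pi / 2              # bevel/target angle, eq. (4.51)
    sigma = np.arctan2(np.sin(sigma), np.cos(sigma))  # wrap to (-pi, pi]

    # Task Jacobians, eqs. (4.36) and (4.54)
    J_vtz = J_tip[2:3, :]
    J_sigma = (np.array([y / d2, -x / d2, 0.0,
                         x * z / d2, y * z / d2, -1.0]) @ J_tip)[np.newaxis, :]

    # Desired task variations, eqs. (4.37) and (4.56)
    v_tzd = v_tip if z > 0 else 0.0
    if abs(sigma) >= w_z_max / lambda_sigma:
        dsigma_d = -w_z_max * np.sign(sigma)
    else:
        dsigma_d = -lambda_sigma * sigma

    # Stacked resolution, eq. (4.57)
    J = np.vstack((J_vtz, J_sigma))
    v_b = np.linalg.pinv(J) @ np.array([v_tzd, dsigma_d])
    return v_b
```

Saturating the desired rotation rate at ω_z,max, as in (4.56), avoids commanding arbitrarily fast spins of the needle when the bevel is far from its desired orientation.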
The deformations of the needle and the tissues should also be controlled during the insertion, especially when using the control of the lateral translations of the tip, which can only be achieved by bending the needle and pushing on the tissues. Therefore, in the next section we focus on the design of additional task functions that ensure the safety of the insertion procedure.

Safety tasks design

The tasks defined in the previous section are used to control the trajectory of the needle tip. However, they do not take into account other criteria that may be relevant to ensure a safe insertion of the needle. Two main points need to be considered for the safety of the insertion procedure. First, the lateral efforts exerted on the tissues should be minimized to reduce the risks of tearing, which would go against the general concept of a minimally invasive procedure. The second point is to avoid breaking the needle, for obvious safety reasons. Both points can be viewed as the same objective, since the needle will only break if large efforts are applied on the tissues. In order to address these two points, we propose in the following three task functions that can be used in combination with the targeting task vectors of the previous section, using one of the control schemes defined in section 4.3.1 by (4.6), (4.10) or (4.16). The first task is designed toward the control of the deformations of the tissues, the second one toward the control of the deformations of the needle, and the third one toward a trade-off between the two. An experimental comparison of the performances obtained with each task function will be provided later in section 4.4.1.2.

Surface stretch reduction task: It can be noted that tissue tearing is most likely to appear near the surface of the tissues. Indeed, this is where the skin has already been weakened by the initial cut of the needle and where less surrounding tissue is present to maintain cohesion. A first solution to avoid tearing the surface of the tissues is to ensure that the body of the needle remains close to the initial position of the insertion point. This can be achieved by reducing the relative lateral position δ ∈ R² on the tissue surface between the current position of the needle c_N(L_free) and the initial position of the insertion point c_T(0), as illustrated in Fig. 4.5 (note that we keep the notations c_N, c_T and L_free that were introduced in the definition of the two-body model in section 2.4.2). This task and the associated Jacobian matrix can then be expressed according to

δ = P_s (c_N(L_free) - c_T(0)), (4.58)
J_δ = P_s ∂c_N(L_free)/∂r = P_s J_Lfree, (4.59)

where P_s ∈ R^{2×3} is an orthogonal projector onto the tissue surface and J_Lfree ∈ R^{3×6} is the Jacobian matrix linking the variations of the position of the needle point at the curvilinear coordinate L_free to the variations of the needle base pose r. This matrix J_Lfree is computed from the model using the method described by (4.27) in section 4.3.3.

Figure 4.5: Illustration of the geometric features used for the safety task functions. Note that the representation is limited here to a 2D case; in the general case the angle γ is defined in the plane containing the needle base axis z and the initial insertion point c_T(0).

Regulation of δ toward zero can be achieved using the classical law

δ̇_d = -λ_δ δ, (4.60)

where λ_δ is a positive control gain that tunes the exponential decrease rate of δ.
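A minimal numerical sketch of this surface-stretch reduction task is given below (Python/numpy as an illustrative choice; J_Lfree is assumed to be provided by the model as in (4.27), and the tissue surface is assumed here, purely for illustration, to be the x-y plane of the reference frame):

```python
import numpy as np

def surface_stretch_task(J_Lfree, c_N_surface, c_T0, lambda_delta=1.0):
    """Sketch of the surface stretch reduction task, eqs. (4.58)-(4.60).

    J_Lfree     : (3, 6) Jacobian of the needle point at curvilinear abscissa L_free.
    c_N_surface : (3,) current position of the needle at the tissue surface.
    c_T0        : (3,) initial position of the insertion point.
    """
    # Orthogonal projector onto the tissue surface (assumed here to be the x-y plane)
    P_s = np.array([[1., 0., 0.],
                    [0., 1., 0.]])

    delta = P_s @ (c_N_surface - c_T0)      # lateral stretch, eq. (4.58)
    J_delta = P_s @ J_Lfree                 # task Jacobian, eq. (4.59)
    ddelta_d = -lambda_delta * delta        # desired variation, eq. (4.60)
    return delta, J_delta, ddelta_d
```

The resulting δ̇_d and J_δ (or their scalar counterparts discussed next) are meant to be stacked with the targeting tasks using one of the formulations (4.6), (4.10) or (4.16).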
Alternatively, the scalar distance δ = ‖δ‖ can also be used directly to decrease the dimension of the task:

δ = ‖δ‖, (4.61)
J_δ = (δᵀ/‖δ‖) J_δ, (4.62)
δ̇_d = -λ_δ δ, (4.63)

where λ_δ is a positive control gain that tunes the exponential decrease rate of δ. Note that this formulation introduces a singularity when δ = 0. However, it can be shown that the local asymptotic stability of the system remains valid [START_REF] Marey | A new large projection operator for the redundancy framework[END_REF]. From a numerical point of view, it can also be noted that δᵀ/‖δ‖ is always a unit vector for δ ≠ 0, such that J_δ does not introduce arbitrarily large values near δ = 0. Therefore, in the following we will use the scalar version of the task for the reduction of the tissue stretch at the surface.

Needle bending reduction task: A solution to avoid breaking the needle is to ensure that the needle remains as straight as possible. However, maintaining the needle strictly straight is not possible, since needle bending is necessary to steer the needle tip, either through lateral base motion or by using the natural curvature at the needle tip. We therefore propose to use the bending energy of the needle as the quantity to minimize. This energy can be computed from the needle models presented in section 2.4. As defined in (2.19), the energy is given by

E_N = (EI/2) ∫₀^{L_N} ‖d²c_N(l)/dl²‖² dl,

where we recall that E is the Young's modulus of the needle, I is the second moment of area of the needle section and c_N is the spline curve of length L_N representing the shape of the needle. The corresponding Jacobian matrix J_EN ∈ R^{1×6} can then be computed from the model using the method described by (4.27) in section 4.3.3:

J_EN = ∂E_N/∂r. (4.64)

Regulation of E_N toward zero can be achieved using the classical law

Ė_N,d = -λ_EN E_N, (4.65)

where λ_EN is a positive control gain that tunes the exponential decrease rate of E_N.

Needle base alignment task: Limiting the distance between the needle and the initial position of the insertion point does not ensure that the needle is not bending outside of the tissues. Similarly, once the needle has been inserted, reducing the bending of the needle does not ensure that the needle is not pushing laterally on the surface of the tissues. In order to avoid pushing on the tissues near the insertion point and to also limit the bending of the needle outside the tissues, the needle base axis can be maintained oriented toward the insertion point. This can be viewed as a remote center of motion around the initial insertion point, in the case where this point is not moving, i.e. if no external tissue motions occur. We propose to achieve this goal by regulating toward zero the angle γ between the needle base z axis and the initial location of the insertion point c_T(0), as illustrated in Fig. 4.5. This way the needle base axis should also follow the insertion point in the case of tissue motions that are not due to the interaction with the needle. Noting x_0, y_0 and z_0 the coordinates of the initial position of the insertion point c_T(0) in the needle base frame {F_b} (see Fig. 4.5), the angle γ can be expressed according to

γ = atan2(√(x_0² + y_0²), z_0). (4.66)
The Jacobian matrix J_γ ∈ R^{1×6} corresponding to this angle can then be derived as follows:

J_γ = ∂γ/∂r = (∂γ/∂c_T(0)) (∂c_T(0)/∂r), (4.67)

with

∂γ/∂c_T(0) = [ x_0 cos²(γ) / (z_0 √(x_0² + y_0²))   y_0 cos²(γ) / (z_0 √(x_0² + y_0²))   -√(x_0² + y_0²) / (x_0² + y_0² + z_0²) ] (4.68)

and

∂c_T(0)/∂r =
[ -1   0   0    0    -z_0   y_0 ]
[  0  -1   0    z_0   0    -x_0 ]
[  0   0  -1   -y_0   x_0   0   ]. (4.69)

Finally we obtain

J_γ = [ -x_0 cos²(γ) / (z_0 √(x_0² + y_0²))   -y_0 cos²(γ) / (z_0 √(x_0² + y_0²))   √(x_0² + y_0²) / (x_0² + y_0² + z_0²)   y_0 / √(x_0² + y_0²)   -x_0 / √(x_0² + y_0²)   0 ]. (4.70)

Regulation of γ toward zero can be achieved using the classical law

γ̇_d = -λ_γ γ, (4.71)

where λ_γ is a positive control gain that tunes the exponential decrease rate of γ.

Conclusion: In this section we have defined three different task functions that can be used to control the deformations of the needle or of the tissues during the insertion. These task functions can be combined with a targeting task using the task function framework in order to obtain a final control law that allows reaching a target with the needle tip while ensuring the safety of the insertion procedure. In the following section we propose to test the whole needle steering framework that we designed in different experimental scenarios. Several combinations of the task vectors defined in sections 4.3.4.1 and 4.3.4.2 will be explored, as well as the different formulations used to fuse them into one control law as described in section 4.3.1.

Framework validation

In this section we present an overview of the experiments that we conducted to test and validate our proposed needle steering framework. We first use the stereo cameras to obtain a reliable feedback on the needle localization in order to test the different aspects of the framework independently from the quality of the tracking. We then perform insertions under 3D ultrasound visual guidance using the tracking algorithm that we proposed in chapter 3.

Insertion under camera feedback

In this section we propose to evaluate the performances of our framework when using the visual feedback provided by the stereo camera system presented in section 1.5.2. In all the experiments the stereo camera system is registered and used to retrieve the position of the needle shaft in the tissues using the registration and tracking methods described in section 3.4.1. We first present experiments that we performed to combine our framework with the duty-cycling control technique described in section 4.1.1. We then compare the performances obtained during the needle insertion when using the different safety task functions that were defined in section 4.3.4.2. Finally, we propose to test the robustness of the method to modeling errors introduced by lateral motions of the tissues.

Switching base manipulation and duty-cycling

We first propose to use both base manipulation and tip-based control to insert a needle and reach a virtual target. Tip-based control allows a fine control of the tip trajectory; however, the amplitude of the lateral tip motions that can be obtained is limited, such that the target can be unreachable if it is not initially aligned with the needle axis. On the contrary, using base manipulation allows a better control over the lateral tip motions at the beginning of the insertion, but the effect of base motions on the tip motions is reduced once the needle tip is inserted deeper in the tissues.
In the following we use an hybrid controller that alternates between dutycycling control (see section 4.1.1), when the target is almost aligned with the needle, and base manipulation using our task framework (see section 4.3) in order to accurately reach a target that may be misaligned at the beginning of the insertion. Experimental conditions (setup in France): In these experiments, the Angiotech biopsy needle is actuated by the Viper s650. The insertion is done in a gelatin phantom embedded in a stationary transparent plastic container. Visual feedback is obtained using the stereo cameras system and the whole needle shaft is tracked in real-time by the image processing algorithm described in section 3.4.1. A picture of the setup is shown in Fig. 4.6. A virtual target to reach is defined just before the beginning of the insertion such that it is located at a predefined position in the initial tip frame. We use the virtual springs model presented in section 2.4.1 with polynomial needle segments of order r = 3. The stiffness per unit length of the model is set to 10000 N.m -2 for these experiments and the length threshold to add a new virtual spring is set to L thres = 2.5 mm. The rest position of a newly added spring (defined as p 0,i in section 2.4.1, see Fig. 2.5) is set at the position of the tracked needle tip in order to compensate for modeling errors. This is similar to the update method 3 presented in section 3.6.2. The pose of the needle base of the model is updated using the odometry of the robot. Control: We use either base manipulation using the task function framework or duty-cycling control depending on the alignment of the target with the needle tip axis. Duty-cycling is used when the target is almost aligned and only small modifications of the tip trajectory are needed. Base manipulation is used when larger tip motions are necessary to align the needle with the target. Base manipulation control: We use three tasks to control the needle manipulator and we fuse them using the singularity robust formulation of the task function framework, as defined by (4.16) in section 4.3.1. Each task is given a different priority level such that it does not disturb the tasks with higher priority. The tasks are defined as follows. • The first task with highest priority controls the tip translation velocity v t , as defined by (4.33) and (4.35). We set the insertion velocity to 1 mm.s -1 . Note that we choose this task over the tip alignment tasks defined in (4.49) and (4.50) because it does not require the tuning of an additional gain. • The second task with medium priority controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56). The maximal 4.4. FRAMEWORK VALIDATION rotation speed ω z,max is set to 60 • .s -1 and the gain λ σ is set to 4 3 (see (4.56)) such that the maximal rotation velocity is used when the bevel orientation error is higher than 45 • . • The third task with lowest priority is used to reduce the mean deformations of the tissues δ m , which we compute here from the virtual springs interaction model according to δ m = 1 L ins n i=1 l i δ i , (4.72) where L ins is the current length of the needle that is inserted in the tissues, n is the current number of virtual springs, δ i is the distance between the needle and the rest position of the i th virtual spring, i.e. the virtual spring elongation, and l i is the length of the needle model that is supported by this virtual spring. 
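A sketch of the computation of this mean tissue deformation (4.72) from the virtual springs model is given below (Python/numpy as an illustrative choice; the variable names are assumptions):

```python
import numpy as np

def mean_tissue_deformation(spring_elongations, supported_lengths, L_ins=None):
    """Sketch of the mean tissue deformation delta_m of eq. (4.72).

    spring_elongations : (n,) elongation of each virtual spring [m].
    supported_lengths  : (n,) needle length supported by each spring [m].
    L_ins              : inserted needle length; if None, it is assumed to be
                         covered exactly by the supported lengths.
    """
    if L_ins is None:
        L_ins = np.sum(supported_lengths)
    return np.dot(supported_lengths, spring_elongations) / L_ins
```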
The Jacobian matrix J δm corresponding to δ m is numerically computed from the model using the method described by (4.27) and the desired variation of δ m is computed as δm,d = -λ δm δ m , (4.73) with the control gain λ δm set to 1. The final velocity screw vector v b applied to the needle base is then computed according to v b = J + vt v t,d + P 1 J + σ σd -λ δm P 2 J + δm δ m , (4.74) with P 1 = I 6 -J + vt J vt , (4.75) P 2 = I 6 - J vt J σ + J vt J σ = P 1 -(J σ P 1 ) + (J σ P 1 ) , (4.76) where I 6 is the 6 by 6 identity matrix. Duty-cycling control: We use duty-cycling control when the target is almost aligned with the needle tip axis. This is detected by comparing the angle θ between the target and the tip axis (as defined in (4.38)) with the maximum angle θ DC obtained during one cycle of duty-cycling. This angle corresponds to the angle obtained during a cycle with only insertion (dutycycle ratio DC = 0), such that θ DC = K nat L DC . (4.77) where K nat is the natural curvature of the needle tip trajectory during the insertion and L DC is the total insertion length of a cycle, set to 3 mm in this experiment. If θ < θ DC , the needle would overshoot the current desired direction in less than a cycle length. In that case it is better to reduce the effective curvature K ef f of the tip trajectory such that it aligns with the desired direction, i.e using K ef f = θ L DC , (4.78) DC = 1 - θ L DC K nat (4.79) The total rotation of the needle during each rotation phase is set to 2π + σ, where σ is the angle between the target and the bevel as defined in (4.51), such that the bevel is oriented in the target direction before starting the translation phase. Experimental scenarios: Four experiments are performed with a same phantom to validate our method. At the beginning of each experiment, the needle is placed such that it is normal to the surface of the gelatin and its tip slightly touches it. The insertion point is shifted between the experiments in a way that the needle can not cross a previous insertion path. The needle is first inserted 7 mm in the gelatin to allow the manual initialization of the tracking algorithm in the images. Then the insertion procedure starts with an insertion speed of 1 mm.s -1 and is stopped when the target is no more in front of the needle tip. Open-loop insertion toward an aligned target: In the first experiment, a virtual target is defined before the beginning of the insertion such that it is aligned with the needle and placed at a distance of 8 cm from the tip. A straight insertion along the needle axis is then performed in openloop control. Fig. 4.7a shows the view of the front camera at the end of the experiment and Fig. 4.8a shows the 3D lateral distance between the needle tip axis and the target. Note that the measure presents a high level of noise at the beginning of the insertion. This is first due to the noisy estimation of the needle direction at the beginning of the insertion since the visible part of the needle is small. Second, the needle tip is far from the target at the beginning of the insertion, which amplifies the effect of the direction error on the lateral distance. We can see that the target is missed laterally by 8 mm at the end because of the natural deflection of the needle. This experiment justifies that needle steering is necessary to accurately reach a target even if it is correctly aligned with the needle axis at the beginning of the procedure. 
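For reference, the prioritized three-task fusion (4.74)-(4.76) used by the base-manipulation part of the hybrid controller in these experiments can be sketched as follows (Python/numpy as an illustrative choice; the Jacobians and desired task variations are assumed to be computed as described above):

```python
import numpy as np

def base_manipulation_control(J_vt, v_td, J_sigma, dsigma_d, J_dm, delta_m, lambda_dm=1.0):
    """Sketch of the singularity-robust prioritized fusion (4.74)-(4.76).

    J_vt    : (3, 6) tip translational velocity Jacobian (highest priority).
    J_sigma : (1, 6) bevel orientation Jacobian (medium priority).
    J_dm    : (1, 6) Jacobian of the mean tissue deformation (lowest priority).
    """
    pinv = np.linalg.pinv
    I6 = np.eye(6)

    # Projector onto the null space of the first task, eq. (4.75)
    P1 = I6 - pinv(J_vt) @ J_vt

    # Projector onto the null space of the first two tasks, eq. (4.76)
    J12 = np.vstack((J_vt, J_sigma))
    P2 = I6 - pinv(J12) @ J12   # equivalently P1 - pinv(J_sigma @ P1) @ (J_sigma @ P1)

    # Prioritized base velocity screw, eq. (4.74)
    v_b = (pinv(J_vt) @ v_td
           + P1 @ (pinv(J_sigma) @ np.atleast_1d(dsigma_d))
           - lambda_dm * P2 @ (pinv(J_dm) @ np.atleast_1d(delta_m)))
    return v_b
```

Projecting each lower-priority contribution onto the null space of the higher-priority tasks is what prevents it from disturbing them, at the cost of a possibly damped contribution when the tasks are nearly incompatible.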
Tip-based control with a misaligned target: In a second experiment, the target is shifted 1 cm away from the initial tip axis, such that a 135° rotation is necessary to align the bevel toward the target. The duty-cycling control is used alone for this experiment. Fig. 4.7b shows the view of the front camera at the end of the experiment and Fig. 4.8b shows the 3D lateral distance between the needle tip axis and the target.

Figure 4.8 (caption excerpt): (b) Duty-cycling control with a target shifted 1 cm away from the initial needle axis: the duty-cycling control is saturated and the target is missed due to insufficient tip deflection. The target can be reached in both cases using the hybrid control framework ((c) aligned target and (d) shifted target). In each graph, the purple sections marked "DC" correspond to duty-cycling control and the red sections marked "BM" correspond to base manipulation.

After the first rotation, the duty-cycling controller is saturated and only performs pure insertion phases. We can see that the lateral alignment error decreases during the insertion. However, the natural curvature of the needle is not sufficient to compensate for the initial error and the target is finally missed by 5 mm. This experiment shows that base manipulation is necessary to accurately reach a misaligned target with a standard needle, or that a needle offering a higher curvature needs to be used.

Hybrid control: Two other experiments were performed with the same initial target placements (one aligned target and one misaligned target) and using the hybrid controller with both base manipulation and duty-cycling. Figures 4.7c and 4.7d show the view of the front camera at the end of the experiments and Fig. 4.8c and 4.8d show the 3D lateral distance between the needle tip axis and the target. We can see that the controller allows reaching the target with a sub-millimeter accuracy in both cases. Table 4.1 summarizes the final lateral targeting error between the tip and the target.

Table 4.1: Final lateral position error between the needle tip and the target for different insertion scenarios.

Insertion scenario                      | Final lateral error (mm)
Straight insertion and aligned target   | 7.6
Duty-cycling and shifted target         | 4.9
Hybrid control and aligned target       | 0.6
Hybrid control and shifted target       | 0.1

The targeting error along the needle direction was under 0.25 mm in each experiment, which corresponds to the accuracy of the vision system. These experiments show that using base manipulation in addition to tip-based steering provides a larger reachable space compared to the sole use of tip-based control methods.

In addition, we can observe that the controller rarely switched to duty-cycling, as can be seen in Fig. 4.8c and Fig. 4.8d. This is due to the small natural curvature of the needle tip trajectory obtained for this combination of needle and phantom. In this case it may not be necessary to reduce the curvature of the needle tip trajectory: we could simply orient the bevel edge toward the target, as is done by the second task of the base manipulation controller, and avoid the high number of rotations required to perform duty-cycling control. However, duty-cycling control should still be used with a more flexible needle, especially if the tip trajectory is defined by a planning algorithm that allows non-natural curvatures. A second observation concerns the oscillations in the lateral error that appear during the duty-cycling control in Fig. 4.8c and 4.8d.
These oscillations are due to a small misalignment between the axis of rotation of the robot and the actual axis of the needle. This misalignment introduces some lateral motions of the needle during the rotation phases, which in turn modify the needle tip trajectory. From a design point of view, it shows that the accuracy of the realization of a needle steering mechanical system can have a direct effect on the accuracy of the needle steering. Furthermore, depending on the frame in which the observation is made, a lateral motion of the needle base can be seen as a motion of the phantom, so that this oscillation phenomenon confirms the fact that tissue motions is an important issue for an open loop insertion procedure. This effect is likely to have a greater importance when using relatively stiff needles, for which base motion have a significant effect on the tip motion. On the contrary it should have a lower impact when using more flexible needles, so that duty-cycling control is better suited for very flexible needles. Conclusion: We have seen that combining both base manipulation and tip-based control during a visual guided robotic insertion allows a good targeting accuracy in a large reachable space. This validate the fact that using additional degrees of freedom of the needle base can be necessary to ensure the accurate steering of the needle tip toward a target. We also observed that duty-cycling control was actually not really adapted to our experimental setup due to the low natural curvature of the tip trajectory during the insertion. Therefore in the following we do not use dutycycling control anymore but we only orient the cutting edge of the bevel toward the target, such that the full curvature of the needle tip trajectory is used. As a final note, we observed during the experiments that the third task used to reduce the deformations of the tissues had almost no influence on the final velocity applied to the needle base. This is due to the singularity robust formulation and the fact that the task is near incompatible with the first task with high priority. The contribution of the third task was then greatly reduced after the projection in the null space of the first task. Therefore we do not use this formulation in the followings and we use instead the classical formulations defined by (4.6) or (4.10). Safety task comparison We propose here to compare the performances obtained when using the different safety tasks that we defined in section 4.3.4.2. Experimental conditions (setup in France): In these experiments, the Angiotech biopsy needle is actuated by the Viper s650. The insertion is done in a gelatin phantom embedded in a stationary transparent plastic container. Visual feedback is obtained using the stereo cameras system and the whole needle shaft is tracked in real-time by the image processing algorithm described in section 3.4.1. A raisin is embedded in the gelatin 9 cm under the surface and used as a real target. A picture of the setup is shown in Fig. 4.9. We use the two-body model presented in section 2.4.2 with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments and the last segment measuring 0.6 mm. A soft phantom is used in these experiments, such that the stiffness per unit length of the model is set to 1000 N.m -2 . The length threshold to add a new segment to the tissue spline is set to L thres = 0.1 mm. 
The pose of the needle base of the model is updated using the odometry of the robot. Control: We use three tasks for the control of the needle manipulator and we fuse them using the classical formulation of the task function framework, as defined by (4.6) in section 4.3.1. The different tasks are defined as follows. • The first task controls the tip translation velocity v t , as defined by (4.33) and (4.35). We set the insertion velocity v tip to 5 mm.s -1 . • The second task controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed ω z,max is set to 60 • .s -1 and the gain λ σ is set to 10 (see (4.56)) such that the maximal rotation velocity is used when the bevel orientation error is higher than 6 • . • The third task is one of the three safety tasks defined in section 4.3.4.2: reduction of the tissue stretch δ at the surface ((4.61), (4.62) and (4.63)), reduction of the needle bending energy E N ((2.19), (4.64) and (4.65)) or reduction of the angle γ between the needle base axis and the insertion point ((4.66), (4.70) and (4.71)). The control gain for each of these tasks (λ δ , λ E N or λ γ ) is set to 1. We observed in the previous section that the singularity robust hierarchical formulation (see (4.16)) induces too much distortion of the low priority tasks. Therefore, we choose here to give the same priority level to each task, such that the control should give a trade-off between good targeting and safety of the procedure. The final velocity screw vector applied to the needle base v b is then computed according to v b =   J vt J σ J 3   +   v t,d σd ė3,d   , (4.80) where J 3 is the Jacobian matrix corresponding to the safety task (either J δ , J E N or J γ ) and ė3,d is the desired variation for the safety task (either δd , ĖN,d or γd ). Note that the total task vector is here of dimension 5 while we have 6 degrees of freedom available, such that all tasks should ideally be fulfilled. For the computation of the control law, the desired variations of the two first tasks are computed from the measures of the target and tip position using the visual feedback. On the contrary, the desired variations of the safety tasks are computed from the needle interaction model. Experimental scenarios: Five insertions are performed for each kind of safety task. The needle is placed perpendicular to the tissue surface before the beginning of the insertion. Initial insertion locations are chosen such that they are sufficiently far away from the previous insertions, leading to an initial misalignment with the target up to 1.7 cm. The needle is first inserted 1 cm into the phantom to manually initialize the tracking algorithm. The controller is stopped once the needle tip reaches the depth of the target. Pictures of the initial and final state of one experiment are shown in Fig. 4.10. In the following we compare the values taken by each of the three physical quantities defined for the third task during the experiments, namely the tissue stretch at the surface, the needle bending energy and the angle between the needle base axis and the insertion point. These values are recorded from the state of the model during the insertions. Targeting: Let us first look at the targeting performances of the method. The lateral distance between the needle tip axis and the target was measured during the insertions and is shown in Fig. 4.11. 
As stated previously this measure is noisy at the beginning of the insertion due to the distance between the needle tip and the target. The mean value of the final lateral targeting error across the five insertion procedures is summarized in Fig. 4.12. The target could be reached in all cases with an accuracy of less than 2 mm, which is sufficiently accurate for most clinical needle insertion applications. This demonstrates the good performances of our steering method. Similar targeting performances are obtained when reducing the surface tissue stretch or reducing the needle bending energy. Aligning the needle base with the insertion point further decreases the targeting error. However this result should be interpreted with caution and may be due to statistical variance, as the targeting error is indeed close to the diameter of the needle (0.7 mm) and the visual system accuracy (0.25 mm). Surface tissue stretch: Let us now consider the effect of the tasks on the tissue stretch δ at the surface. The value of δ for each experiment is shown in Fig. 4.13. The mean value of δ over time and across the five insertion procedures is summarized in Fig. 4.14 for each active task. As expected, actively reducing the surface stretch effectively reduces the surface stretch compared to the other safety tasks. On the other hand reducing the bending of the needle introduces a higher stress to the tissue surface. This can be explained by the fact that keeping the needle straight outside of the tissues requires that the internal shear force applied to the needle at the tissue surface is small. This is only possible if the integral of the force load applied to the needle in the tissues is near zero. Since the needle tip needs to be steered laterally to reach the target, some load is applied to the tissues near the tip. An opposing load is thus necessary near the tissue surface to drive the integral of the load to zero, leading to a deviation of the needle shaft from the initial position of the insertion point. An intermediate between these two behaviors seems to be obtained when aligning the needle with the initial position of the insertion point. This could be expected since orienting the needle base tends to move the needle body toward the same direction, i.e. toward the insertion point, hence reducing the surface stretch. However bending of the needle outside of the tissues is still possible due to the interaction with the tissues, creating a certain amount of stretch at the surface. Needle bending energy: Let us now look at the effect of the tasks on the bending energy E N stored in the needle. The value of E N for each experiment is shown in logarithm scale in Fig. 4.15. The mean value of E N over time and across the five insertion procedures is summarized in Fig. 4. 16 for each active task. As expected, actively reducing the bending energy effectively reduces the energy compared to the other safety tasks. On the other hand reducing the tissue stretch at the surface requires a higher needle bending. This can be explained by the fact that steering the needle tip laterally while keeping the needle near the insertion point results in a force load applied by the tissues only on one side of the needle. Needle bending outside of the tissues is thus necessary to be able to obtain this load. As seen previously for the surface tissue stretch, aligning the needle with the initial position of the insertion point seems to provide an intermediate between these two behaviors. 
This could be expected since orienting the needle base axis toward the insertion point tends to straighten the part of the needle that is outside of the tissues, hence reducing the overall bending energy. However the needle can still bend near the surface and inside the tissues, which is good to perform the targeting task. An additional observation can be made on the behavior of the needle bending reduction task. Once the needle has been inserted in the tissues and some natural deflection appeared, moving the needle base only provides a limited way of changing the shape of the needle inside the tissues. This creates a non-zero floor value under which the bending energy cannot be reduced without removing the needle from the tissues. From the task function point of view, when the floor value is reached the corresponding task Jacobian matrix becomes incompatible with the task controlling the insertion. A singularity occurs in this case, leading to some instabilities that increases the needle bending, as could be observed in some experiments (for example the blue curve in Fig. 4.15b). This behavior indicates that using this task is not suitable to increase the safety of the control. Base axis insertion point angle: Let us finally consider the effect of the tasks on the angle γ between the needle base axis and the initial position of the insertion point. The value of γ for each experiment is shown in Fig. 4.17. The mean value of γ over time and across the five insertion procedures is summarized in Fig. 4.18 for each active task. As expected, actively reducing the angle between the base axis and the insertion point effectively reduces this alignment error when compared to the other safety tasks. As discussed previously, reducing the tissue stretch at the surface requires bending the part of the needle that is outside the tissues to fulfill the targeting task. Since the needle is constrained to pass by the initial position of the insertion point, this bending can only be achieved by rotating the needle base to put it out of alignment, resulting in a higher value of γ. Similarly, we have seen that reducing the bending of the needle introduces a stretch of the tissues at the surface to achieve the targeting task. Since the needle body is aligned with the needle base axis due to the reduced bending, then the base can not be aligned with the insertion point. It can also be observed during all the experiments that the features associated to the safety tasks tend to increase near the end of the insertion, as visible in Fig. 4.13a and 4.17c. A small increase of the lateral distance near the end can also be observed in Fig 4 .11. Since the task functions are designed to regulate these features toward zero, this effect indicates an incompatibility between the safety and targeting tasks. The total Jacobian matrix defined in (4.80) is then close to singularity, such that the computation of the pseudo-inverse introduces some distortions. The hierarchical formulation (4.10) of the task function framework could be used instead of the classical formulation (4.6) to choose which task should have the priority in this case. This point will be explored later in section 4.4.2. Conclusion: Through these experiments we have confirmed that steering a flexible needle in soft tissues requires a certain amount of tissue deformations and needle bending. Trying to steer the needle while actively reducing the deformations at the surface of the tissues can only be achieved by bending the needle. 
Trying to reduce the amount of bending during the steering can only be achieved through deformations of the tissue surface. Keeping the needle base aligned with the initial position of the insertion point seems to allow needle steering while procuring a trade-off between tissue deformations near the surface and needle bending outside the tissues. In conclusion, the last method should be preferred in general to reduce both the needle and the tissues deformations. The task reducing the tissue stretch at the surface can be used if the needle is not too flexible, such that it does not bend too much outside of the tissues. On the contrary, the task reducing the needle bending should be avoided since it introduces some stability issues in addition to the deformations of the tissues. Robustness to modeling errors We now propose to evaluate the robustness of the base manipulation framework towards modeling errors and tissue motions. Experimental conditions (setup in France): In these experiments, the Angiotech biopsy needle is actuated by the Viper s650. The insertion is done in a gelatin phantom embedded in a transparent plastic container. The phantom is moved manually during the first half of the insertion. Visual feedback is obtained using the stereo camera system and the whole needle shaft is tracked in real-time by the image processing algorithm described in section 3.4.1. The setup is similar to the previous section and can be seen in Fig. 4.9. A virtual target is defined just before the beginning of the insertion such that it is 8 cm under the tissue surface and 4 mm away from the initial needle axis. This target is fixed in space and does not follow the motions applied to the phantom, hence simulating a moving target from the point of view of the needle which is embedded in the phantom. We use the two-body model presented in section 2.4.2 with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments and the last segment measuring 0.6 mm. The stiffness per unit length of the model is set to 3200 N.m -2 and the length threshold to add a new segment to the tissue spline is set to L thres = 0.1 mm. Control: We use two tasks for the control of the needle manipulator and we fuse them using the classical formulation of the task function framework, as defined by (4.6) in section 4.3.1. The tasks are defined as follows. • The first task controls the tip translation velocity v t , as defined by (4.33) and (4.35). We set the insertion velocity v tip to 2 mm.s -1 . • The second task controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed ω z,max is set to 60 • .s -1 and the gain λ σ is set to 10 (see (4.56)) such that the maximal rotation velocity is used when the bevel orientation error is higher than 6 • . The final velocity screw vector applied to the needle base v b is then computed according to v b = J vt J σ + v t,d σd . (4.81) The controller is stopped once the needle tip reaches the depth of the target. Experimental scenarios: We perform four insertions using the controller defined previously. For each experiment, the phantom is manually moved laterally with respect to the insertion direction with an amplitude of up to 1 cm. During two of the insertions, the interaction model is updated using only the pose of the needle manipulator. During the two other insertions, the model is also updated with the UKF-based update algorithm defined in section 3.5. 
We use the position feedback version of the algorithm by measuring the position of needle points separated by 5 mm along the needle shaft. The process noise covariance matrix is set with diagonal elements equal to 10 -8 m 2 and the noise covariance matrix with diagonal elements equal to (2.5 × 10 -4 ) 2 m 2 . Results: The lateral distance between the needle tip axis and the target is shown in Fig. 4.19, either measured using the needle tracking (Fig. 4.19a) or estimated from the needle model (Fig. 4.19b). An example of the final state of two models, one updated and one not updated, during a single insertion is shown in Fig. 4.20. We can see that when the position of the tissue model is not updated, the needle model does not fit to the real needle. However the target can be reached with sub-millimeter accuracy in all cases, despite the fact that an inaccurate model is used in some cases. This shows that an accurate modeling of the current state of the insertion is not necessary to obtain estimates of the Jacobian matrices which can maintain the convergence of the control law, as previously expressed by (4.22). The task controller proves to be robust to modeling uncertainties thanks to the closed-loop feedback compensating for the errors appearing in the Jacobian matrices. Nevertheless it can be noted that updating the model is necessary if this one must be used for prediction of the needle tip trajectory. Furthermore, the fact that the phantom is moving while the target is not moving introduces an apparent motion of the target with respect to the needle tip. The designed targeting tasks shows good targeting performances by compensating for this target motion thanks to the closed-loop nature of the control. Conclusions: From these results, we have good reasons to expect good targeting performances when using 3D ultrasound (US) volume as feedback, even if the probe pose is not accurately estimated and causes the model to Yellow and blue lines are, respectively, the needle and tissue spline curves of a model updated using only the pose feedback of the needle manipulator, such that the position of the tissue spline (blue) is not updated during the insertion. Red and green lines are, respectively, the needle and tissue spline curves of a model updated using the pose feedback of the needle manipulator and the visual feedback, such that the position of the tissue spline (green) is updated during the insertion. be updated from inaccurate measures. The close-loop control may be able to ensure good targeting as long as the desired values for the tasks are computed in the same image space, i.e. both needle and target are detected using the same US volume. In the following section we present additional experiments to see if this intuition can be confirmed. Insertion under US guidance In previous sections we tested our steering framework using cameras to track the needle in a translucent phantom. If cameras offer a good accuracy, in clinical practice the needle is inserted in opaque tissues, making cameras unusable for such procedure. In this section we propose to test if the framework can be used in practice using a clinically relevant imaging modality. We present experiments performed using 3D ultrasound (US) as the visual feedback to obtain the 3D position of the body of the needle. We mainly focus our study on the targeting accuracy and also consider the effect of setting different priority levels for the different tasks. 
Experimental conditions (setup in France): In these experiments, the Angiotech biopsy needle is actuated by the Viper s650. Two phantoms are used, one gelatin phantom and one phantom with a porcine liver embedded in gelatin. We use the 3D US probe and station from BK Ultrasound to grab online 3D US volumes. The US probe is fixed to the end effector of the Viper s850 and maintained in contact with the phantom. The needle is inserted from the top of the phantom while the probe is set to the side of the phantom, as illustrated in Fig. 4.21. A thin plastic film replaces one side of the plastic container, allowing a soft contact between the probe and the phantom such that the US waves can propagate through the phantom. This ensures a good visibility of the needle in the US volume by avoiding too much reflection of the US wave outside of the transducer. This orthogonal configuration can be observed in practice in several medical applications, such as kidney biopsy or prostate brachytherapy, where the needle is inserted perpendicularly to the US wave propagation direction. The whole needle shaft is tracked in each volume using the tracking algorithm described in section 3.4.2. A virtual target is manually defined before the beginning of the insertion. The acquisition parameters of the US probe are set to acquire 31 frames during a sweeping motion with an angle of 1.46° between successive frames. The acquisition depth is set to 15 cm, resulting in the acquisition of one volume every 900 ms. The needle is around 4 cm from the probe transducer for each experiment, which leads to a maximum resolution of 0.85 mm in the insertion direction and 0.3 mm × 1.72 mm in the other lateral directions. A focal length of 5 cm is set for the transducer to obtain a good effective resolution near the needle. We use the two-body model with polynomial needle segments of order r = 3. We fix the length of the needle segments to 1 cm, resulting in a total of n = 13 segments, with the last segment measuring 0.6 mm. The stiffness per unit length of the model is set to 1000 N.m⁻² and the length threshold to add a new segment to the tissue spline is set to L_thres = 0.1 mm.

Control: We use three tasks for the control of the needle manipulator and we fuse them using the hierarchical formulation of the task function framework, as defined by (4.10) in section 4.3.1. Each task is given a different priority level such that it does not disturb the tasks with higher priority. The tasks are defined as follows.

• The first task controls the tip translation velocity v_t, as defined by (4.33) and (4.35). We set the insertion velocity v_tip to 1 mm.s⁻¹.

• The second task controls the bevel orientation via the angle σ defined by (4.51).

• The third task is the safety task reducing the tissue stretch δ at the surface, as introduced in section 4.3.4.2.

Two sets of priorities are compared. In the first set, the targeting tasks have the highest priority; the final velocity screw vector v_b applied to the needle base is then computed according to

v_b = [J_vt ; J_σ]^+ [v_t,d ; σ̇_d] + P_1 (J_δ P_1)^+ (δ̇_d - J_δ [J_vt ; J_σ]^+ [v_t,d ; σ̇_d]), (4.82)

with

P_1 = I_6 - [J_vt ; J_σ]^+ [J_vt ; J_σ], (4.83)

where I_6 is the 6 by 6 identity matrix. In the second set, the safety task has the highest priority and the two targeting tasks have the same lower priority. The final velocity screw vector v_b is then computed according to

v_b = J_δ^+ δ̇_d + P_2 ([J_vt ; J_σ] P_2)^+ ([v_t,d ; σ̇_d] - [J_vt ; J_σ] J_δ^+ δ̇_d), (4.84)

with

P_2 = I_6 - J_δ^+ J_δ. (4.85)

Experimental scenario: Four insertions are performed in the gelatin phantom and four insertions in the porcine liver embedded in gelatin. For each type of phantom, two insertions are performed using a higher priority for the targeting tasks, as defined by (4.82), and two insertions are performed using a higher priority for the safety task, as defined by (4.84).
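For clarity, the two priority orderings (4.82) and (4.84) can be sketched with a single generic routine (Python/numpy as an illustrative choice; the Jacobians and desired task variations are assumed to be provided by the tracking and the interaction model as described above):

```python
import numpy as np

def hierarchical_two_levels(J_high, e_high_d, J_low, e_low_d):
    """Generic two-level hierarchical resolution used in (4.82) and (4.84).

    The low-priority task is realized only in the null space of the
    high-priority one, so it cannot disturb it.
    """
    pinv = np.linalg.pinv
    P = np.eye(6) - pinv(J_high) @ J_high               # null-space projector, (4.83)/(4.85)
    v_high = pinv(J_high) @ e_high_d
    v_low = P @ pinv(J_low @ P) @ (e_low_d - J_low @ v_high)
    return v_high + v_low

# Targeting tasks first, safety task last, eq. (4.82):
#   J_t = np.vstack((J_vt, J_sigma)); e_t = np.append(v_td, dsigma_d)
#   v_b = hierarchical_two_levels(J_t, e_t, J_delta, np.atleast_1d(ddelta_d))
# Safety task first, targeting tasks last, eq. (4.84):
#   v_b = hierarchical_two_levels(J_delta, np.atleast_1d(ddelta_d), J_t, e_t)
```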
For each experiment, the needle is first placed perpendicular to the surface of the phantom with its tip slightly touching the surface. This position allows the initialization of the needle model and the tissue surface model using the current pose of the needle holder. The needle is then inserted 1.5 cm in the tissues and a 3D US volume is acquired. The needle tracking algorithm is initialized by manually segmenting the insertion point and the needle tip in the volume. A virtual target point is manually chosen in the volume between 5 cm and 10 cm under the needle. The pose of the probe is initialized separately for each experiment using the registration method described in section 3.6.3. The needle tracking algorithm defined in section 3.4.2 is also initialized at the same time. Then the chosen control law is launched and stops when the tip of the tracked needle reaches the depth of the target. Results: We first discuss the targeting performances obtained for the different experiments and then we discuss the effect of the priority order on the realization of the safety task. We can see that the target can be reached in each case with a final lateral targeting error below 3 mm, which comes close to the maximal accuracy of the reconstructed US volumes. This accuracy may be sufficient for most clinical applications. However, for more demanding applications, using a better resolution for the US volume acquisition could be sufficient to achieve a better targeting accuracy. The priority order does not seem to have a significant impact on the final targeting error, although slightly larger errors could be observed for the insertions in gelatin when the safety task was set to the highest priority. FRAMEWORK VALIDATION During these experiments we choose to update the needle model using only the pose feedback of the needle manipulator; no update of the position of the tissue spline is used to compensate for the modeling errors introduced by the constant stiffness per unit length set in the model. This confirms that the targeting performances are quite robust to modeling approximations. The registration of the probe pose performed at the beginning of the insertion is also quite inaccurate, especially concerning the orientation of the probe. Indeed, it depends on the quality of the manual segmentation of the part of the needle that is initially visible in the US volume. Since this needle part is initially short and the resolution of the volume is limited, it is difficult to manually segment the correct orientation of the needle. Nevertheless, since the inputs of the targeting tasks are provided using directly the position of the target in the frame of the needle tip tracked in the volume, the target can still be accurately reached. This way these experiments have demonstrated that the exact pose of the probe is not required by the steering framework to achieve good targeting performances, thanks to the closed-loop nature of the control. Safety task performances: Let us now look at the safety task that was added to minimize the tissue deformations at the surface. The placement of the probe on the side of the phantom is such that the top surface of the tissues is visible in the US volumes. Hence we can measure the stretch at the surface of the tissues during the insertions. The initial position of the insertion point is recorded at the initialization of the needle tracking algorithm. 
The surface stretch is then measured as the distance between this initial position and the current position of the tracked needle at the surface. The measured surface stretch during the insertions is shown in Fig. 4.24 along with the corresponding value estimated from the model. Let us first remark that the measured and model estimated values seems to follow the same general tendencies, although they are not really fitting. The fitting error can easily be explained by two factors. First of all it can be the consequence of modeling errors, introduced by non-linearities of the phantom properties such as the natural non-linearity of the liver or some amount of tearing on the surface of the gelatin. It may also be due to the accuracy of the measure of the surface stretch, which is limited by the volume resolution and the fact that a lot of artifacts appear at the tissue surface, deteriorating the quality of the tracking algorithm around this zone. As expected, we observe that the surface stretch of the model is well regulated to zero when the safety task is set to the highest priority. The measured deformations are also reduced, even if this is less visible from the measures done with the biological tissues. On the other hand, when the targeting tasks have the priority, more surface stretch tends to be observed. This indicates that the safety task is not always compatible with the targeting tasks, such that its contribution to the control law is sometimes damped by the hierarchical formulation due to the projection on the null space of the targeting tasks (see (4.82)). It can be noted that the task compatibility only depends on the Jacobian matrices of the different tasks and is independent of their priority levels. Therefore the incompatibility should be observed as well when the priorities are inverted. However the good targeting performances do not seem to be affected by the priority of the safety task. This shows that the safety task is 4.5. CONCLUSION incompatible with only some components of the targeting tasks, corresponding to the lateral translations of the tip. Indeed, when the safety task has the lowest priority, it is simply damped whenever it becomes incompatible with the control of the lateral translations. This results in higher tissue deformation, while the targeting performances are not affected. On the other hand, when the safety task has the highest priority, only the components of the targeting tasks controlling the lateral translations of the needle tip is damped in case of incompatibility. The components corresponding to the tip-based control, i.e. the insertion and the rotation around the needle axis, are indeed always compatible with the safety task since they do not induce much lateral motion of the needle shaft. Hence these components are always available to ensure a certain amount of needle tip steering toward the target, leading to the good targeting performance. Conclusions: We have seen than our needle steering framework could be used to accurately reach a target in soft tissues using 3D US as visual feedback. A safety task can also be added using the hierarchical stack of tasks formulation in order to reduce the tissue deformations. Overall, we could see that setting the highest priority to the safety task provides a better control over the deformations of the tissues, while it does not really affect the good targeting performances of the controller. 
This could be used in clinical applications to perform accurate needle insertions, while also reducing the amount of tissue deformation that is necessary to reach the target. Conclusion In this chapter we presented a review of current flexible needle steering methods used to accurately reach a targeted region in soft tissues. We focused on the two main approaches using either lateral motions of the needle base to control the motions of the tip or using only the natural deflection generated by an asymmetric tip during the insertion. Then, we provided an overview of different strategies used to define the trajectory that must be followed by the needle tip. We proposed a contribution consisting of a steering framework that allows the control of the 6 degrees of freedom of the base of a flexible needle to achieve several tasks during the insertion. Two main tasks were considered: a targeting task to achieve a good steering of the tip toward a target and a safety task to reduce the efforts applied to the needle and the tissues. This framework is also generic enough to integrate both steering approaches using lateral base manipulation and tip-based control in the targeting task. We then evaluated the performances of the framework through several experiments using a closed-loop visual feedback on the needle provided either by cameras or by the 3D ultrasound modality. These experiments demonstrated the robustness of the control framework to modeling errors and showed that it could achieve a good targeting accuracy even after some motions of the tissues occurred. Several ways to fulfill the safety task were compared and it was found that aligning the needle base with the insertion point during the insertion could provide a trade-off between needle bending and deformations of the tissues. Overall the framework proved to enable the accurate steering of the tip of the needle toward a target while ensuring low deformations of the tissues. However we only considered virtual targets that were fixed in space and the reduction of the deformations of the tissues was only assessed in stationary tissues. In order to adress these two points, in chapter 5 we will consider the compensation of external motions of the tissues during the needle insertion. The control framework will be extended to cope with moving tissues and to integrate force feedback in the control law. Chapter 5 Needle insertion with tissue motion compensation In chapter 4 we proposed a framework to steer a beveled-tip flexible needle under visual guidance. This framework uses all 6 degrees of freedom of the needle base to provide increased targeting performances compared to only using the natural deflection of the beveled-tip. It also allows other tasks to be fulfilled at the same time, such as ensuring the safety of the insertion procedure for the patient. In particular the efforts exerted by the needle on the tissues should be reduced to the strict minimum to avoid further damage caused by the needle. However these efforts can not only be due to the manipulation of the needle but they may also be due to the motions of the tissues themselves. In this chapter we focus on the compensation of such motions of the tissues during the insertion of a needle. The effect of tissue motions on the performances of the needle tracking has already been covered in chapter 3 and we will focus on tracking the motions of a real target in this chapter. 
We also propose further adaptations of the steering framework designed in chapter 4 to decrease the risks of tearing the tissues due to the lateral tissue motions. We will consider the case of force feedback to perform the steering with motion compensation. The chapter is organized as follows. In section 5.1, we first present some possible causes of tissue motions and an overview of current available techniques that may be used for motion compensation. We then propose some extensions of our control framework in section 5.2 to handle motion compensation via visual feedback or force feedback. The tracking of a moving region of interest using the ultrasound (US) modality will be the focus of section 5.3. Finally in section 5.4 we report the results obtained using the proposed framework to perform needle insertions in moving ex-vivo tissues using 2D US together with electromagnetic tracking as position feedback as well as force feedback for motion compensation. The work presented in this chapter was published in an international journal article [CSB + 18]. Tissue motion during needle insertion Tissue motion is a typical issue that arises during needle insertion procedures. When the procedure is performed under local anesthesia it is possible that the patient moves in an unpredicted manner. In that case, general anesthesia can be needed to reduce unwanted motions [START_REF] Flaishon | An evaluation of general and spinal anesthesia techniques for prostate brachytherapy in a day surgery setting[END_REF]. Whatever the chosen anesthesia method, physiological motions of the patient still occur, mainly due to natural breathing. Motion magnitude greater than 1 cm can be observed in the case of insertions performed near the lungs, like lung or liver biopsies [HMB + 10]. A first consequence of tissue motions is that the targeted region is moving. This can be compensated for by using real-time visual feedback to track the moving target and a closed-loop control scheme to insert the needle toward the measured position of the target. Target tracking using visual feedback is further discussed in the next section 5.3. Another point of concern in current works on robotic assisted procedures is that the needle is fixedly held by a mechanical robotic system. In the case of base manipulation control, a long part of the needle is outside of the tissues at the early stage of the insertion. Tissue motions can then induce a bending of this part of the needle and modify the orientation of the needle tip. This can greatly influence the resulting tip trajectory, especially if the insertion was planned pre-operatively for an open-loop insertion. In the case of tip-based control, the robotic device is often maintained close to the tissue surface to avoid any bending and buckling of the flexible needle outside the tissues. Hence the needle cannot really bend or move laterally, inducing direct damage to the tissues if the lateral motions of the tissues are large. Motion compensation is thus necessary to limit the risks of tearing the tissues. Many compensation methods exist and have been applied in various cases. Predictive control: Predictive control can be used to compensate for periodic motions, like breathing. In this case, the motions of the tissues are first estimated using position feedback and then used to predict the future motions such that they can then be compensated for. Cameras and visual markers can be used to track the surface of the body as was done by Ginhoux et al. [GGdM + 05]. 
However this does not provide a full information on what is happening inside the body and anatomical imaging modalities can be used instead. For example, Yuen et al. [YPV + 10] used 3D ultrasound (US) for beating heart surgery to track and predict the 1D motions of the mitral annulus in the direction of a linear surgical tool. The main drawback of this kind of predictive control is that the motion is assumed to be periodic with a fix period. This can require placing the patient under artificial breathing, which is usually not the case for classical needle insertions. In the last example, motion compensation of the beating heart was actually performed using a force sensor located between the tissues and the tip of the surgical tool. The motion estimation provided by the visual feedback was only used as a feed-forward to a force controller. Force feedback: Force control is another method used to perform motion compensation. For needle insertion procedures, a separate force sensor was used by Moreira et al. [START_REF] Moreira | Towards physiological motion compensation for flexible needle interventions[END_REF] to estimate the tissue motions in the insertion direction. The estimated motion was used to apply a periodic velocity to the needle in addition to the velocity used for the insertion. Impedance or admittance controls are also often used to perform motion compensation since tissue damage can directly be avoided by reducing the force applied to the tissues. This usually requires to first model the dynamic behavior of the tissues. Many models have been proposed for this purpose [START_REF] Moreira | Viscoelastic model based force control for soft tissue interaction and its application in physiological motion compensation[END_REF]. Atashzar et al. [AKS + 13] attached a force sensor directly to a needle holder. The force sensor was maintained in contact with the surface of the tissues during the insertion, allowing the needle holder to follow the motions of the tissues. While axial tissue motions could be accurately compensated for, lateral tissue cutting may still occur in such configuration since the tissues can slip laterally with respect to the sensor. The force sensor can also be directly attached between the manipulator and the needle, as was done by Cho et al. [CSK + 15][KSKK16] . This way, lateral tissue motions could be compensated for. Motion compensation in the insertion direction is however difficult to perform in this case. Indeed, the insertion naturally requires a certain amount of force to overcome the friction, stiction or tissue cutting forces, such that it is difficult to separate the effect of tissue motions from the necessary insertion forces. Since cutting the tissues in the insertion direction is necessary during the needle insertion procedure, we choose to focus only on lateral tissue motions. These lateral motions are also likely to cause more damage due to a tearing of the tissues. In order to be able to adapt to any kind of lateral motions, such as unpredictable patient motions, in the following we do not consider the case of predictive control. Instead we propose to adapt the needle steering framework that we defined in chapter 4, such that it can incorporate force feedback in a reactive control to compensate for tissue motions. Motion compensation in our task framework In this section we present an extension of the needle steering framework that we proposed in section 4.3 in order to enable motion compensation. 
We only consider the case of lateral motion compensation to avoid tissue tearing. Compensation in the insertion direction is less critical in our case since it only has an effect at the tip of the needle, which is already controlled by the task designed in section 4.3.4.1 to reach the target. Motion compensation can easily be integrated in our needle steering framework by adding a task to the controller. In the following, we first discuss the use of the safety tasks that were designed in section 4.3.4.2 and then we propose a new task designed to use the force feedback provided by a force sensor. Geometric tasks: A lateral motion of the tissues is equivalent to moving the rest position of the path cut by the needle in the tissues (see section 2.4.2 for the definition of this path), which also modifies the initial position of the insertion point. The safety tasks designed previously to provide a safe behavior of the insertion in stationary tissues can thus directly be used to perform motion compensation. The task designed to minimize the distance between the insertion point and the needle shaft naturally compensates for the tissue motions since the needle shaft remains close to the insertion point. The task designed to align the needle base with the insertion point will also naturally follow the tissue motions. In this case it is possible that the needle base only rotates to align with the moving insertion point but does not translate. However, if the needle base does not translate while the tissues are moving laterally, then the needle tip deviates from its desired trajectory. In this case, motion compensation can be obtained by combining the safety task with a targeting task that controls the lateral motions of the tip, such as the tip translations or the alignment of the tip axis with the target. The task designed to minimize the bending of the needle will also be sensitive to tissue motions. Indeed, if the needle is in a state of minimal bending energy, then an external motion of the tissues introduces an additional bending of the needle that the task will compensate. However, this task should not be used for stability reasons, as was discussed in section 4.4.1.2. The main issue concerning the implementation of these safety tasks is that the initial rest position of the insertion point must be known. This rest position cannot be observed directly, even with external tissue tracking, since the observable position of the insertion point results from the lateral interaction with the needle. Therefore an estimation is required, for example using the model update method that we proposed in section 3.5.2, provided that it gives a correct estimation of the real state of the tissues. However, as was discussed in section 3.6, an exact estimation of the position of the tissues is difficult to obtain due to their non-linear properties and the modeling approximations. Therefore, we propose instead to use force feedback, which directly provides a measure of the interaction of the needle with the tissues and does not rely on a good estimation of the tissue position. Force feedback: The ultimate goal of the motion compensation is to reduce the lateral efforts exerted by the needle on the tissues in order to avoid tearing the tissues. A task can then directly be designed to minimize these efforts.
In practice it is hard to measure directly the forces applied to the tissues at each point of the needle, as this would require a complex design with force sensors all along the needle shaft. The forces could be retrieved indirectly from the full shape of the needle and its mechanical properties. This would require shape sensors integrated in the needle, like fiber Bragg grating (FBG) sensors [PED + 10], which is not always desirable due to the additional design complexity. An imaging modality providing a full view of the needle could also be used. However, viewing the whole needle is not possible with ultrasound (US) imaging since it is limited to the inside of the tissues. It could be possible to use 3D computerized tomography (CT) or magnetic resonance imaging (MRI); however, their acquisition time is too slow for real-time control. The ideal case would be to measure the forces at only one location of the needle, so that no special modification of the needle itself is required. It can be noted that in a static configuration the total force exerted at the base of the needle corresponds to the sum of the forces exerted along the needle shaft. Inertial effects can usually be ignored in practice because of the low mass of the needle, such that the static approximation is valid in most cases. Therefore minimizing the lateral force exerted at the base of the needle should also reduce the efforts exerted on the tissues. Hence, in the following we propose to design a task for our steering framework in order to minimize this lateral force. Lateral force reduction task: Let us define the lateral component $f_l \in \mathbb{R}^2$ of the force exerted on the needle base. As mentioned previously, we ignore the axial component since it is necessary for the insertion of the needle. The task Jacobian $J_f \in \mathbb{R}^{2\times 6}$ and the desired variations $\dot{f}_{l,d}$ of the task are defined such that
$$\dot{f}_l = J_f v_b, \qquad (5.1)$$
$$\dot{f}_{l,d} = -\lambda_f f_l, \qquad (5.2)$$
where $\lambda_f$ is a positive control gain that tunes the exponential decrease rate of $f_l$ and we recall that $v_b$ is the velocity screw vector of the needle base. The task Jacobian $J_f$ is computed from the interaction model using the finite difference method (4.27) presented in section 4.3.3. For this computation, the lateral force can directly be computed as the shear force applied at the level of the needle base. Using the two-body model defined in section 2.4.2, this can be expressed as
$$f_l = EI \left. \frac{\mathrm{d}^3 c_N(l)}{\mathrm{d}l^3} \right|_{l=0}, \qquad (5.3)$$
where we recall that $E$ is the Young's modulus of the needle, $I$ is its second moment of area and $c_N$ is the spline curve representing the needle. This task will be used in the following to perform motion compensation when a force sensor is available to provide a measure of the interaction force at the base of the needle. This measure is used as the input to the control law (5.2). However, motion compensation during a needle insertion procedure is not limited to the reduction of the damage done to the tissues. In order to obtain good targeting performances while the tissues are moving, the motions of the target should also be measured. Therefore, in the following we focus on the tracking of a moving target using an imaging modality.
Target tracking in ultrasound
In all previous experiments we only considered the case of virtual targets. However, in practice, the needle should be accurately steered toward a real target.
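Before detailing the tracking, let us come back briefly to the lateral-force reduction task defined above. The sketch below illustrates, under stated assumptions, how the lateral shear force of Eq. (5.3) can be evaluated numerically from the needle curve and how the desired variation of Eq. (5.2) is formed from the sensor measure. The material constants and the `needle_curve` callable are illustrative assumptions, not the actual implementation.

```python
import numpy as np

E = 200e9                           # Young's modulus of a steel needle [Pa] (assumed value)
I = np.pi * (0.5e-3) ** 4 / 4.0     # second moment of area of a 1 mm diameter shaft [m^4] (assumed)
LAMBDA_F = 2.5                      # positive gain tuning the exponential decrease of f_l

def lateral_base_force(needle_curve, h=1e-4):
    """Shear force at the needle base, Eq. (5.3): EI times the third derivative of the
    needle curve c_N at l = 0, keeping the two lateral components (the curve is assumed
    to be expressed in the base frame, with z along the needle axis)."""
    c = lambda l: np.asarray(needle_curve(l), dtype=float)
    # forward finite difference of the third derivative at l = 0 (assumed numerical evaluation)
    d3 = (c(3 * h) - 3 * c(2 * h) + 3 * c(h) - c(0.0)) / h ** 3
    return E * I * d3[:2]

def desired_force_variation(f_l_measured):
    """Desired variation of the lateral base force, Eq. (5.2), fed by the force-sensor measure."""
    return -LAMBDA_F * np.asarray(f_l_measured, dtype=float)
```

In the actual controller, the input $f_l$ of (5.2) comes from the force sensor rather than from the model; the model-based expression (5.3) is only needed to build the task Jacobian by finite differences.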
The target can be moving, either due to physiological motions of the patient or due to the effect of the insertion of the needle on the tissues. In this section we present a tracking algorithm that we developed to follow the motion of a moving spherical target in 2D ultrasound images.
Target tracking in 2D ultrasound
We use a custom tracking algorithm based on the Star algorithm [START_REF] Friedland | Automatic ventricular cavity boundary detection from sequential ultrasound images using simulated annealing[END_REF] to track the center of a circular target in 2D ultrasound (US) images. This kind of tracking has proved to yield good performances for vessel tracking [GSM + 07]. The process of the tracking algorithm is described in Alg. 1 and illustrated in Fig. 5.1, and we detail its functioning in the following.
Algorithm 1: Target tracking. Initialization is performed manually by selecting the target center p_center and radius r in the image, as well as the number N of rays for the Star algorithm. A square pixel patch I_patch centered around p_center is extracted for the template matching.
  I_patch, p_center, r, N ← INITIALIZE_TRACKING()
  while tracking do
    I ← ACQUIRE_IMAGE()
    p_center ← TEMPLATE_MATCHING(I, I_patch)
    E ← ∅
    for i ∈ [0, N−1] do                         // Star algorithm
      θ ← 2πi/N
      Ray ← TRACE_RAY(p_center, 2r, θ)
      p_edge ← EDGE_DETECTION(Ray)
      E ← E ∪ {p_edge}
    end
    p_center, r ← CIRCLE_FITTING(E)
    I_patch ← EXTRACT_REFERENCE_PATCH(I, p_center)
  end
Template matching: This kind of technique is widely used in image processing in general and consists in finding the patch of pixels in an image that best corresponds to a reference patch. Many similarity criteria can be used to assess the resemblance between two patches, such as the sum of squared differences, the sum of absolute differences or the normalized cross correlation, each having its pros and cons. The reference patch can also be defined in two main ways. The first one consists in extracting a patch in the previous image at the location of the object. This way the object can be tracked all along the image sequence even if its shape changes. However, the accumulation of errors can cause the tracking to drift. The second one is to capture the object of interest at the beginning of the process and keep this patch as a reference. This avoids drift, but the tracking can fail if the object shape is changing. In our case we first apply a template matching between two successive images to get a first estimation of the target motion. This is represented by the TEMPLATE_MATCHING function in Alg. 1. We chose here the sum of squared differences as similarity measure because it is fast to compute and usually yields good matching. The possible drift is canceled by the following step of the algorithm, which is the Star algorithm. Star algorithm: We use the Star algorithm to refine the tracking and to remove the drift introduced by successive template matching by exploiting the a priori shape of the target. The Star algorithm is initialized around the center of the target estimated by the template matching. Angularly equidistant rays are then projected from the target center (see Fig. 5.1). The length of each ray is chosen to be larger than the diameter of the target to ensure that each ray crosses a boundary of the target. An edge detector is run along each ray to find these boundaries.
Contrary to the boundaries of a vessel, which are almost anechoic, we consider here a hyperechoic target. Using a classical gradient-based edge detector, as was done for the needle tracking in camera images (section 3.4.1), false edge detections could arise due to noise and inhomogeneities inside the target. To reduce this effect, we find the boundary along each ray as the point which maximizes the difference between the mean intensities on the ray before and after this point. This is equivalent to finding the distance $l_{edge}$ along each ray such that
$$l_{edge} = \underset{L\in[0,2r]}{\arg\max}\; \left| \frac{1}{L}\int_0^L I(l)\,\mathrm{d}l - \frac{1}{2r-L}\int_L^{2r} I(l)\,\mathrm{d}l \right|, \qquad (5.4)$$
where $2r$ is the length of the ray and $I(l)$ is the pixel intensity along the ray at the distance $l$ from the initial center (see blue dot and lines in Fig. 5.1). Finally a circle fitting is performed on the detected boundaries to find the center of the target, as illustrated in Fig. 5.1.
Figure 5.1: Illustration of the Star algorithm used for the tracking of a circular target in 2D ultrasound. The blue dot is an initial guess of the target center from which rays are projected (blue lines). The estimation of the target center (green cross) is obtained using circle fitting (green circle) on the detected boundaries along each ray (yellow dots).
This new estimation of the target center is used to extract a new reference patch for template matching and the whole process is repeated for the next image. Both steps of the algorithm are complementary. Template matching can be used to find the target in the whole image if necessary; however its performances are degraded by noise and intensity variations, which cause a drift over time. On the contrary, the Star algorithm can find the real center of the target and adapt to noise, changes of intensity and, up to a certain extent, to changes of the shape of the target. However it requires that the initial guess of the center lies inside the real target in the image. Template matching is thus a good way to provide this first initialization. Overall this tracking algorithm is relatively robust and can be used to track a moving target in 2D US images, in spite of speckle noise or shape variations. It can also easily be adapted to track a 3D spherical target in 3D US volumes.
Target tracking validation in 2D ultrasound
In this section we provide the results of experiments performed to validate the performances of the tracking algorithm developed in the previous section. Experimental conditions (setup in the Netherlands): The UR5 robot is used to move a gelatin phantom with embedded play-dough spherical targets. We use the 3D wobbling probe and the ultrasound (US) station from Siemens to acquire the 2D US images. A cross section of the volume is selected to be displayed on the screen of the station such that it contains the target and is normal to the probe axis (US beam propagation direction). The screen of the US scanner is then transferred to the workstation using a frame grabber. The acquisition parameters of the US probe are set to acquire 42 frames during a sweeping motion with an angle of 1.08° between successive frames. The field of view of each frame is set to 70° and the acquisition depth is set to 10 cm, resulting in the acquisition of one volume every 110 ms. The targets are between 24 mm and 64 mm from the probe transducer in the experiments, which leads to a maximum resolution of the US image between 0.45 mm × 0.73 mm and 0.70 mm × 1.49 mm.
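Before describing the validation scenario, the sketch below illustrates the core of the refinement step described above: the boundary search of Eq. (5.4) along each ray and the circle fit on the detected edges. It is a minimal sketch operating on a grayscale numpy image; the nearest-neighbour sampling and the algebraic least-squares circle fit are standard choices assumed here, not necessarily those of the actual implementation.

```python
import numpy as np

def edge_along_ray(image, center, angle, length, n_samples=64):
    """Boundary search of Eq. (5.4): return the point on the ray that maximizes the
    difference between the mean intensities before and after it."""
    ts = np.linspace(0.0, length, n_samples)
    pts = np.asarray(center, dtype=float) + np.outer(ts, [np.cos(angle), np.sin(angle)])
    xi = np.clip(np.round(pts[:, 0]).astype(int), 0, image.shape[1] - 1)  # nearest-neighbour sampling
    yi = np.clip(np.round(pts[:, 1]).astype(int), 0, image.shape[0] - 1)
    profile = image[yi, xi].astype(float)
    best_k, best_diff = 1, -np.inf
    for k in range(1, n_samples - 1):
        diff = abs(profile[:k].mean() - profile[k:].mean())
        if diff > best_diff:
            best_k, best_diff = k, diff
    return pts[best_k]

def fit_circle(points):
    """Algebraic least-squares circle fit: solve x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    center = np.array([-a / 2.0, -b / 2.0])
    radius = np.sqrt(center @ center - c)
    return center, radius

def star_refine(image, center_guess, radius_guess, n_rays=16):
    """One Star-algorithm refinement: project rays, detect one boundary per ray, fit a circle."""
    angles = 2.0 * np.pi * np.arange(n_rays) / n_rays
    edges = np.array([edge_along_ray(image, center_guess, a, 2.0 * radius_guess) for a in angles])
    return fit_circle(edges)
```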
Experimental scenario: A 3D translational motion is applied to the phantom to mimic the displacement of the liver during breathing [HMB + 10]. The applied motion $m(t)$ has the following profile:
$$m(t) = a + b \cos^4\!\left(\frac{\pi}{T} t - \frac{\pi}{2}\right), \qquad (5.5)$$
where $a \in \mathbb{R}^3$ is the initial position of the target, $b \in \mathbb{R}^3$ is the magnitude of the motion and $T$ is the period of the motion. The magnitude of the motion is set to 7 mm and 15 mm respectively in the horizontal and vertical directions in the image, which corresponds to a typical amplitude of motion of the liver during breathing [HMB + 10]. No motion is set in the out-of-plane direction. The period of the motion is set to T = 5 s. After manual initialization, the tracking is performed for a duration of 30 s, corresponding to 6 periods of the motion. Results: The position of the tracked target is compared with the ground truth obtained from the odometry of the UR5 manipulator. An example of the evolution of the target position was recorded while the motion described by (5.5) is applied to the gelatin phantom with a period T = 5 s. The global mean tracking error is 3.6 mm for this experiment; however, it reduces to 0.6 mm after compensating for the delay of about 450 ms introduced by the data acquisition. This delay results from the successive steps of the acquisition pipeline: the sweeping motion of the probe, the conversion of the volume to Cartesian space, the extraction of the slice to display on the screen, the transfer of the image to the workstation and finally the tracking process. The sweeping takes around 110 ms and the mean tracking time is 300 µs, which indicates that the remaining latency of about 340 ms should mostly be due to the post-scan conversion and the frame grabbing. In order to assess the quality of the tracking algorithm, the actual positioning accuracy is measured by adding a delay to the ground truth signal. The mean tracking errors over time between the delayed ground truth and the measures are summarized in Table 5.1. Sub-millimeter accuracy is obtained, which is sufficient for most medical applications. We can also observe that the tracking accuracy decreases as the distance of the target from the probe increases. This is due to two factors mentioned in section 3.2. First, we use a convex wobbling probe, which means that the distance between the different US beams increases as they get further away from the transducer. Additionally, each beam also tends to widen during its propagation due to the diffusion phenomenon. Overall, the resolution of the 2D image extracted from the 3D volume naturally decreases when its distance from the probe increases. This confirms that the algorithm yields excellent tracking performance and is only limited by the resolution and the latency of the acquisition system. Hence we use this algorithm in the following to perform needle insertion toward real moving targets. Redefinition of control inputs and tasks: Using this setup, the control inputs consist of the velocity screw vector $v_{UR} \in \mathbb{R}^6$ of the end-effector of the UR3 plus the 2 velocities $v_{NID} \in \mathbb{R}^2$ of the NID. Hence, we define the control vector $v_r \in \mathbb{R}^8$ of the whole robotic system as
$$v_r = \begin{bmatrix} v_{UR} \\ v_{NID} \end{bmatrix}. \qquad (5.6)$$
In the following we assume that $v_{UR}$ is expressed as the velocity screw vector of the frame of the tip of the NID, corresponding to the frame {F_b} depicted in Fig. 5.4. In order to use our steering framework based on task functions with this system, the Jacobian matrices associated with the different tasks that we defined in sections 4.3.4 and 5.2 need to be modified to take into account the additional degrees of freedom (DOF) of the NID.
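The next paragraph formalizes this redefinition. As an illustration, a minimal sketch of how an 8-column task Jacobian could be obtained by finite differences over the augmented input vector $v_r$ is given below; the `model.copy()` and `model.apply_inputs()` helpers are hypothetical placeholders for the interaction model, not the actual implementation.

```python
import numpy as np

DT = 0.04    # control period [s] used for the one-step simulation (assumed)
EPS = 1e-3   # velocity perturbation applied to each input component (assumed)

def task_jacobian(model, task_value, n_inputs=8):
    """Finite-difference Jacobian of a task vector e with respect to the augmented input
    v_r = [v_UR (6 DOF), v_NID (2 DOF)]: each component is perturbed in turn, the
    interaction model is simulated over one control period and the induced task
    variation is divided by the applied perturbation."""
    e0 = np.asarray(task_value(model), dtype=float)
    J = np.zeros((e0.size, n_inputs))
    for i in range(n_inputs):
        v = np.zeros(n_inputs)
        v[i] = EPS
        perturbed = model.copy()          # hypothetical copy of the current model state
        perturbed.apply_inputs(v, DT)     # hypothetical: move the UR3 base / NID stage and re-solve
        J[:, i] = (np.asarray(task_value(perturbed), dtype=float) - e0) / (EPS * DT)
    return J
```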
The Jacobian matrix $J \in \mathbb{R}^{n\times 8}$ associated with a task vector $e \in \mathbb{R}^n$ of dimension $n$ is now defined such that
$$\dot{e} = J v_r. \qquad (5.7)$$
Note that we still use our needle model and the method defined in section 4.3.3 to compute the Jacobian matrices. However, the method is adapted to add the two additional DOF of the NID. For simplicity, in the following we will keep the same notations that we used in sections 4.3.4 and 5.2 for the Jacobian matrices of the different tasks. We will also refer to the equations presented in these sections for the definitions of the tasks. Insertion configurations and control laws: We compare three different insertion configurations, as depicted in Fig. 5.5.
Figure 5.5: Illustration of the three different configurations used to insert the needle. The needle insertion device (NID) is shown in black and the needle is the green line. For configurations 1 and 2, the needle is fully outside and does not slide any further in the NID. For configuration 3, it starts fully inside and can slide in the NID. No constraint is added on the external motion of the NID for configuration 1, while a remote center of motion (RCM) is applied around the insertion point for configurations 2 and 3. Additionally, no translation of the tip of the NID is allowed for the third configuration.
The first two configurations are used to simulate the case of a needle held by its base. The needle is fully outside of the NID and no control of the translation stage inside the NID is performed, which is equivalent to having a 10.8 cm long needle held by its base. A remote center of motion around the insertion point is added for configuration 2. For the third configuration, the tip of the NID (center of frame {F_b} in Fig. 5.4) is set in contact with the surface of the phantom and the needle is initially inside the NID. The insertion is then performed using the translation stage of the NID, resulting in a variable length of the part of the needle that is outside the NID. A remote center of motion is also added around the tip of the NID, which is equivalent to the insertion point in this case. We use several tasks to define the control associated with each configuration and we fuse them using the classical formulation of the task function framework, as defined by (4.6) in section 4.3.1. Four tasks are common to all configurations and are defined as follows. • The first task controls the insertion velocity $v_{t,z}$ of the needle tip along the needle axis, as defined by (4.36) and (4.37). We set the insertion velocity $v_{tip}$ to 3 mm/s. • The second task controls the bevel orientation via the angle $\sigma$, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed $\omega_{z,max}$ is set to 180°/s and the gain $\lambda_\sigma$ is set to 10 (see (4.56)) such that the maximal rotation velocity is used when the bevel orientation error is higher than 18°. • The third task controls the alignment angle $\theta$ between the needle tip and the target, as defined by (4.38), (4.42) and (4.43). The control gain $\lambda_\theta$ is set to 1. Due to the stiffness of the phantom and the high flexibility of the needle, this task can rapidly come close to singularity once the needle is deeply inserted. In order to avoid high control outputs, this task is deactivated once the needle tip has been inserted 2 cm. Given the insertion velocity and the gain set for this task, this gives enough time to globally align the needle with the target such that the tip-based control tasks (first and second tasks) are then sufficient to ensure a good targeting.
• The fourth task is used to remove the rotation velocity $\omega_{UR,z}$ of the UR3 around the needle axis. Indeed, we can observe that this rotation has the same effect on the needle as the rotation $\omega_{NID}$ of the needle inside the NID. However, using the UR3 for this rotation would result in unnecessary motions of the whole robotic arm and of the NID, which could pose safety issues for the surroundings. Therefore we add a task to set $\omega_{UR,z}$ to zero. The Jacobian matrix $J_{\omega_{UR,z}} \in \mathbb{R}^{1\times 8}$ and the desired value $\omega_{UR,z,d}$ associated to this task are then defined as
$$J_{\omega_{UR,z}} = [0\ 0\ 0\ 0\ 0\ 1\ 0\ 0], \qquad (5.8)$$
$$\omega_{UR,z,d} = 0. \qquad (5.9)$$
For the first two configurations, an additional task is added to remove the translation velocity $v_{NID}$ of the needle inside the NID. The Jacobian matrix $J_{v_{NID}} \in \mathbb{R}^{1\times 8}$ and the desired value $v_{NID,d}$ associated to this task are then defined as
$$J_{v_{NID}} = [0\ 0\ 0\ 0\ 0\ 0\ 1\ 0], \qquad (5.10)$$
$$v_{NID,d} = 0. \qquad (5.11)$$
The final control vector $v_{r,1} \in \mathbb{R}^8$ for the first configuration is then computed according to
$$v_{r,1} = \begin{bmatrix} J_{v_{t,z}} \\ J_\sigma \\ J_\theta \\ [0\ 0\ 0\ 0\ 0\ 1\ 0\ 0] \\ [0\ 0\ 0\ 0\ 0\ 0\ 1\ 0] \end{bmatrix}^{+} \begin{bmatrix} v_{t,z,d} \\ \dot\sigma_d \\ \dot\theta_d \\ 0 \\ 0 \end{bmatrix}. \qquad (5.12)$$
Note that the task space is here of dimension 5 while the input space is of dimension 8, such that all tasks should ideally be fulfilled. For the second configuration, a task is added to align the needle base with the initial position of the insertion point at the surface of the phantom, such that there is a remote center of motion. This task is defined via the angle $\gamma$ between the needle base axis and the insertion point, as defined by (4.66), (4.70) and (4.71). The final control vector $v_{r,2} \in \mathbb{R}^8$ for this configuration is then computed according to
$$v_{r,2} = \begin{bmatrix} J_{v_{t,z}} \\ J_\sigma \\ J_\theta \\ J_\gamma \\ [0\ 0\ 0\ 0\ 0\ 1\ 0\ 0] \\ [0\ 0\ 0\ 0\ 0\ 0\ 1\ 0] \end{bmatrix}^{+} \begin{bmatrix} v_{t,z,d} \\ \dot\sigma_d \\ \dot\theta_d \\ \dot\gamma_d \\ 0 \\ 0 \end{bmatrix}. \qquad (5.13)$$
Note that the task space is here of dimension 6 while the input space is of dimension 8, such that all tasks should ideally be fulfilled. Finally, a remote center of motion is applied at the insertion point for the third configuration. Since the tip of the NID is directly located at the insertion point, this is achieved by adding a task to remove the translation velocity $v_{UR} \in \mathbb{R}^3$. The Jacobian matrix $J_{v_{UR}} \in \mathbb{R}^{3\times 8}$ and the desired value $v_{UR,d}$ associated to this task are then defined as
$$J_{v_{UR}} = [I_3 \quad 0_{3\times 5}], \qquad (5.14)$$
$$v_{UR,d} = 0, \qquad (5.15)$$
where $I_3$ is the 3 by 3 identity matrix and $0_{3\times 5}$ is the 3 by 5 null matrix. The final control vector $v_{r,3} \in \mathbb{R}^8$ for this configuration is then computed according to
$$v_{r,3} = \begin{bmatrix} J_{v_{t,z}} \\ J_\sigma \\ J_\theta \\ [0\ 0\ 0\ 0\ 0\ 1\ 0\ 0] \\ [I_3 \quad 0_{3\times 5}] \end{bmatrix}^{+} \begin{bmatrix} v_{t,z,d} \\ \dot\sigma_d \\ \dot\theta_d \\ 0 \\ 0 \end{bmatrix}. \qquad (5.16)$$
Note that the task space is here of dimension 7 while the input space is of dimension 8, such that all tasks should ideally be fulfilled. Experimental scenario: The needle is first placed perpendicular to the surface of the tissues such that its tip barely touches the phantom surface. Then a straight insertion of 8 mm is performed, corresponding to the minimal length of the needle that remains outside the NID when the needle is fully retracted inside it. This way the tip of the NID is just at the level of the phantom surface for the third configuration. This is also done with the two other configurations such that the initial length of needle inside the phantom is the same for every experiment. Four insertions are performed for each configuration.
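Referring back to the control laws (5.12)–(5.16) above, the sketch below shows, for configuration 1, how the stacked Jacobian and desired task variations could be assembled and resolved through the Moore–Penrose pseudo-inverse of the classical task-function formulation (4.6). The individual Jacobians and desired values are assumed to be provided by the model and the task definitions; the function and argument names are illustrative, not those of the actual implementation.

```python
import numpy as np

def control_configuration_1(J_vtz, J_sigma, J_theta, v_tz_d, sigma_dot_d, theta_dot_d):
    """Assemble and solve Eq. (5.12): stack the three targeting tasks with the rows locking
    the UR3 rotation about the needle axis and the NID translation (5 task rows, 8 inputs)."""
    row_wz_ur = np.array([[0, 0, 0, 0, 0, 1, 0, 0]], dtype=float)   # omega_UR,z = 0
    row_v_nid = np.array([[0, 0, 0, 0, 0, 0, 1, 0]], dtype=float)   # v_NID = 0
    J = np.vstack([np.atleast_2d(J_vtz), np.atleast_2d(J_sigma),
                   np.atleast_2d(J_theta), row_wz_ur, row_v_nid])
    e_dot_d = np.hstack([np.atleast_1d(v_tz_d), np.atleast_1d(sigma_dot_d),
                         np.atleast_1d(theta_dot_d), [0.0], [0.0]])
    return np.linalg.pinv(J) @ e_dot_d    # v_r,1 in R^8
```

Configurations 2 and 3 only differ by the rows that are appended: the base alignment task $J_\gamma$ for (5.13), or the three rows $[I_3\ 0_{3\times 5}]$ locking the NID tip translations for (5.16).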
The virtual target is initialized such that it is 7 cm in the insertion direction and 1 cm in a lateral direction with respect to the needle axis. A different lateral direction is used for each experiment, such that the different targets are rotated by about 90° around the needle axis between each experiment. The controller is then started and stopped once the needle tip reaches the depth of the target. Results: We measure the interaction force exerted at the base of the needle, i.e. in the frame {F_b} depicted in Fig. 5.4, during each experiment. The mean value of the absolute lateral force is summarized for each insertion configuration in Fig. 5.6. We can see that when the NID is near the surface of the tissues, it induces an increase in the amount of force exerted at the needle base compared to the case where the needle base is far from the tissues. This could be expected since the lateral motion of the needle shaft near the needle base is directly applied to the tissues when the whole needle is inserted. In the opposite case the lateral motion can be absorbed by some amount of bending of the needle body, resulting in less force applied to the tissue. While it is better to reduce the amount of force applied to the tissue, it is also important for the motion compensation to have a good sensitivity of the force measurements to the needle and tissue motions. Using the first two configurations would result in a small and noisy measured force at the base of the needle, due to the damping of the lateral motions of the tissue by the compliance of the needle. Such measures would neither be useful for the model update algorithm nor for the compensation of the tissue motions. These configurations also have the drawback that they require a motion of the whole robot only to insert the needle, which increases the risk of collision with the surroundings. On the contrary, this is not required with the third configuration since the insertion can be performed using the internal translation stage of the NID, which would be better in order to avoid collisions in a medical context. Additionally, this last configuration offers a great sensitivity to the tissue motions, since a small displacement is sufficient to induce a significant measure of force at the needle base. This would be beneficial for the model update as well as for the motion compensation. Therefore, in the following we choose to insert the needle using the third configuration.
Needle insertion with motion compensation
We present here the results of the experiments performed to test the performances of our framework during an insertion with lateral tissue motions. Experimental conditions (setup in the Netherlands): The setup used to hold and insert the needle is the same as in section 5.4.1. The ATI force torque sensor is still used to measure the force applied to the base of the needle and the Aurora electromagnetic (EM) tracker is used to measure the position and direction of the tip of the biopsy needle. The UR5 robot is used to apply a known motion to a phantom. Two phantoms are used, one with porcine gelatin and one with a bovine liver embedded in the gelatin. Artificial targets made of play-dough are placed in the gelatin phantom. We use the two-body model presented in section 2.4.2 with polynomial needle segments of order r = 3 to represent the part of the needle that is outside of the NID, from the frame {F_b} depicted in Fig. 5.7 to the needle tip.
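As a minimal illustration of this piecewise-polynomial representation (hypothetical structure and names, not the actual implementation), a 3D needle curve made of cubic segments can be stored and evaluated as follows; the third derivative of the first segment is the quantity that enters the shear-force expression (5.3).

```python
import numpy as np

class PiecewiseCubicCurve:
    """Minimal 3D curve c_N(l) made of cubic polynomial segments of equal length."""
    def __init__(self, coeffs, seg_len):
        # coeffs[k] is a (4, 3) array: on segment k, position = sum_i coeffs[k][i] * s**i, s in [0, seg_len]
        self.coeffs = [np.asarray(c, dtype=float) for c in coeffs]
        self.seg_len = float(seg_len)

    def _locate(self, l):
        k = min(int(l // self.seg_len), len(self.coeffs) - 1)
        return k, l - k * self.seg_len

    def point(self, l):
        k, s = self._locate(l)
        return sum(self.coeffs[k][i] * s ** i for i in range(4))

    def third_derivative(self, l):
        k, _ = self._locate(l)
        # the third derivative of a cubic segment is constant: 6 * a_3
        return 6.0 * self.coeffs[k][3]
```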
We fix the length of the needle segments to 1 cm, resulting in one segment of 8 mm when the needle is retracted to the maximum inside the NID and 11 segments, with the last one measuring 8 mm, when the needle is fully outside. We use a rather hard phantom, such that we set the stiffness per unit length of the model to 35000 N.m^-2. The length threshold to add a new segment to the tissue spline is set to L_thres = 0.1 mm. The length and the pose of the base of the needle model are updated using the odometry feedback from the UR3 robot and the NID. The position of the tissue spline in the model is also updated using the force feedback and the EM feedback as inputs for the update algorithm that we defined in section 3.5.2. The performances of the update algorithm during these experiments have already been described in section 3.6.1. Figure 5.8 summarizes the whole setup and algorithms used for these experiments.
Figure 5.8: Block diagram representing the experimental setup and control framework used to perform needle insertions in a moving phantom. The UR5 robot applies a motion to a phantom. The position of the target is tracked in ultrasound images. Measures from the force torque sensor and electromagnetic (EM) tracker are used to update the needle-tissue interaction model. The model and all measures are used by the task controller to control the UR3 and the needle insertion device in order to steer the needle tip towards the target while compensating for tissue motions.
Control: As explained in section 5.4.1, we consider here the input velocity vector $v_r$ of the whole robotic system defined by (5.6). We use three targeting tasks, one motion compensation task and two additional tasks for the control of the system and we fuse them using the classical formulation of the task function framework, as defined by (4.6) in section 4.3.1. The different tasks are defined as follows. • The first task controls the insertion velocity $v_{t,z}$ of the needle tip along the needle axis, as defined by (4.36) and (4.37). We set the insertion velocity $v_{tip}$ to 3 mm/s. • The second task controls the bevel orientation via the angle $\sigma$, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed $\omega_{z,max}$ is set to 180°/s and the gain $\lambda_\sigma$ is set to 10 (see (4.56)) such that the maximal rotation velocity is used when the bevel orientation error is higher than 18°. • The third task controls the alignment angle $\theta$ between the needle tip and the target, as defined by (4.38), (4.42) and (4.43). The control gain $\lambda_\theta$ is set to 1. Due to the stiffness of the phantoms and the high flexibility of the needle, this task can rapidly come close to singularity once the needle is deeply inserted. In order to avoid high control outputs, this task is deactivated once the needle tip has been inserted 2 cm. Given the insertion velocity and the gain set for the task, this gives enough time to globally align the needle with the target such that the tip-based control tasks (first and second tasks) are then sufficient to ensure good targeting.
• The sixth task is used to remove the translation velocity v U R ,z of the UR3 along the needle axis. For similar reasons as the previous task, we can observe that this translation is redundant with the insertion of the needle by the NID. However, translating the UR3 in this direction could drive the NID into the tissues, which should be avoided for safety reasons. Therefore we add a task to set v U R ,z to zero. The Jacobian matrix J v U R ,z ∈ R 1×8 and the desired value v U R ,z,d associated to this task are then defined as J v U R ,z = [0 0 1 0 0 0 0 0], (5.17) v U R ,z,d = 0. (5.18) COMPENSATION The final control vector v r at the beginning of the insertion is then computed according to v r =         J vt,z J σ J θ J f [0 0 0 0 0 1 0 0] [0 0 1 0 0 0 0 0]         +         v t,z,d σd θd ḟ l,d 0 0         . (5.19) Once the needle tip reaches 2 cm under the initial tissue surface, v r is then computed according to v r =       J vt,z J σ J f [0 0 0 0 0 1 0 0] [0 0 1 0 0 0 0 0]       +       v t,z,d σd ḟ l,d 0 0       . (5.20) The inputs of the targeting tasks are computed using the target position measured from the tracking in US images and the pose of the needle tip measured from the EM tracker. The input of the safety task is computed from the lateral force applied at the needle base that is measured from the force sensor. Note that the task space is here of dimension 6 or 7 while the input space is of dimension 8, such that all tasks should ideally be fulfilled. Registration: The position of the EM tracking system is registered in the frame of the UR3 robot before the experiments using the method that was presented in section 3.6.1. The force torque sensor is also calibrated beforehand to remove the sensor biases and the effect of the weight of the NID in order to reconstruct the interaction forces applied to the base of the needle. The details of the sensor calibration and the force reconstruction can be found in Appendix A. In order to accurately reach the target, its position must be known in the frame of the needle tip, such that the inputs of the targeting tasks can be computed. An initial registration step is thus required to find the correspondence between a pixel in a 2D US image and its real location in a common frame with the EM tracker, which is the UR3 robot frame in our case. The pose of the 3D US probe is first registered beforehand in the UR3 robot frame using the following method. The needle is inserted at different locations near the surface of the phantom and two sets of the positions of the needle tip are recorded, one using the needle manipulator odometry and needle model, and the other one using a manual segmentation of the needle tip in acquired 3D US volumes. Point cloud matching between the two sets is then used to find the pose of the US probe in the frame of the UR3 robot. The drawback of this method is that it is not clinically relevant, since many insertions are required before starting the real insertion procedure. However, for a clinical integration the pose of the probe could be measured using an external positioning system, such as an EM tracker fixed on the probe or through the tracking of visual markers with an optical localization system. Before the beginning of each needle insertion, the acquisition of 3D US volumes is launched and a plane normal to the probe axis is selected to be displayed on the US station screen. 
This plane is chosen such that it contains the desired target for the insertion. The position of the image in space is then calculated from the probe pose and the distance between the probe and the image, available using the US station. Each image can finally be scaled to real Cartesian space by using the size of a pixel, which is also known using the measurement tool available in the US station. Experimental scenario: Five insertions are performed in each phantom. At the beginning of each experiment the image plane displayed on the US station is selected to contain the desired target. The tracking algorithm is manually initialized by selecting the center of the target and its diameter in the image. The needle is initially inside the NID. An initialization of the insertion is performed by moving the tip of the NID to the surface of the tissues, such that the 8 mm of the needle that remain outside the NID are inserted into the tissues. The update algorithm, needle insertion and tissue motions are then started and are stopped once the needle tip has reached the depth of the target. The motion applied to the phantom is similar to the motion used for the target tracking validation, defined by (5.5) in section 5.3.2. The amplitude of the motion is set to 15 mm and 7 mm in the x and z direction of the world frame {F_w} depicted in Fig. 5.7, respectively. Several values of the period of the motion are used, between 10 s and 20 s, as recapped in Table 5.2.
Table 5.2: Summary of the conditions and results of the insertions performed in a gelatin phantom and a bovine liver embedded in gelatin. Different periods T are used for the motion of the phantom. The target location in the initial tip frame is indicated for each experiment. The error is calculated as the absolute lateral distance between the needle tip axis and the center of the target at the end of the insertion. The mean and standard deviation of the error for each kind of phantom are presented separately.
Results: An example of the position of the needle tip measured with the EM tracker and the position of the tracked target during an insertion in the liver phantom can be seen in Fig. 5.9. We can see in Fig. 5.9b that both the needle and the target follow the motions of the tissues shown in Fig. 5.9a. The needle tip is steered toward the target and reaches it at the end of the insertion, as can be seen in Fig. 5.9. An example of a slice extracted from a final volume in gelatin is depicted in Fig. 5.12, showing that the center of the target can be accurately reached. Table 5.2 gives a recap of the initial position of the targets in the initial frame of the needle tip and the absolute lateral errors obtained at the end of each experiment. We can see that the target can be reached with an accuracy under 4 mm in all cases. This can be sufficient in clinical applications to reach medium-sized tumors near moving structures. We can note that we choose not to compensate for the latency introduced by the acquisition system (see section 5.3.1). Indeed, this latency will mostly be unknown during a real operation and can also vary depending on the chosen acquisition parameters, such as the field of view and the resolution of the scanning. This point could be addressed if higher accuracy is required for a specific application. The mean targeting error is also higher in biological tissues than in gelatin. Several factors may explain this observation. The main reason is certainly that the tracking of the target is more challenging in this case.
The target is less visible and the level of noise is increased in the image due to the texture of the tissues, as can be seen when comparing Fig. 5.10 and 5.11. This also limits the accuracy of the detection of the target center in the final 3D US volume. The evaluation of the final targeting accuracy is thus subject to more variability. The inhomogeneity of the tissues can also be a cause of deviation of the needle tip from its expected trajectory. However the tip-based targeting task orienting the bevel edge toward the target should alleviate this effect. Motion compensation: Finally, let us consider the motion compensation aspect during the insertion. Due to the proximity of the NID with the surface of the phantom, the lateral forces measured at the needle base are very similar to the forces applied to the tissues, such that we can use these measures to estimate the stress applied to the tissues. Table 5.3 gives a recap of the maximum and mean lateral forces applied to the needle base during the insertion. We can observed that these forces are maintained under 1.0 N during all the experiments, which is sufficiently low to avoid significant tearing of the tissues. In clinical practice, this is also lower than the typical forces that can be necessary to puncture the tissues and insert the needle. The framework is thus well adapted to perform motion compensation and to avoid tissue damage. Note that we did not perform full insertions without the motion compensation to compare the forces obtained in this case. However, we could observe during preliminary experiments that applying a lateral motion to the phantom while the needle is inserted in the tissues and fixed by the NID results in a large cut in the gelatin and also damages the needle. A small amount of tearing could still be observed at the surface of the gelatin when the motion compensation was performed, essentially due to the natural weakness of the gelatin and the cutting effect when the needle is inserted while applying a lateral force. Nevertheless, it can be expected that real biological tissues are more resistant and would not be damaged in this case. Figure 5.13 shows a representative example of the motions of the tissues and the lateral forces measured during an insertion in the bovine liver. Two phases can clearly be distinguished on the force profile. During the first 6 seconds, some fast variations of the force can be observed. This corresponds to the phase where all tasks are active using (5.19). The robotic system is thus controlled to explicitly align the needle tip axis with the target while reducing the applied force. Since the target is initially misaligned, a lateral rotation is necessary, which naturally introduces an interaction with the phantom. Fast motion of the system are thus observed, resulting from the interaction between the alignment and motion compensation tasks. After the tip alignment task has been deactivated, all remaining tasks in (5.20) are relatively independent, since inserting and rotating the needle does not introduce significant lateral forces. This results in a globally smoother motion, where the needle is simply inserted while the lateral motion of the NID naturally follows the motion of the phantom. Therefore motion compensation is clearly achieved in this case. We can see that the lateral force is well driven towards zero when the tissues are moving slowly. 
However, the amount of lateral force also tends to increase when the velocity of the tissues increases, which is due to the proportional nature of the control law (5.2). This could be improved by modifying the control law in order to reduce the motion following error, for example by adding some integral term in addition to the proportional gain. Conclusions: These experiments confirm that, when the needle has a great level of flexibility, lateral motion of the needle should only be performed at the beginning of the insertion to align the needle with the target. It allows a fast and efficient way to modify the needle trajectory without having to insert the needle, which could not be possible by exploiting only the natural deflection of the tip. However once the needle is inserted deeper in the tissues, the motion of the base has only a low effect on the tip trajectory compared to its effect on the force applied to the surface of the tissues. Alternating between several tasks depending on the state of the insertion has thus proved to be a good way to exploit the different advantages of each task while reducing their undesirable effects. These experiments also demonstrate that motion compensation can be performed at the same time as the accurate steering of the needle tip toward a target. Here we use only a first order control law for the force reduction task, which proves to be sufficient to yield good motion compensation. A more advanced impedance control could be used to obtain even better results and to reduce even further the applied lateral force [START_REF] Moreira | Viscoelastic model based force control for soft tissue interaction and its application in physiological motion compensation[END_REF]. Overall this set of experiments also demonstrates the great flexibility of our control framework. It shows that the framework can be used in a new configuration, where the needle is not held by its base but is instead inserted progressively in the tissues using a dedicated device. It has also proven to be able to maintain the same general formulation and adapt to 2D US, EM sensing and force feedback. Conclusion In this chapter we provided an overview of motion compensation techniques that can specifically be used to follow the motions of soft tissues. We then showed that the framework we proposed in chapter 4 could be easily adapted to provide motion compensation capabilities in addition to the needle steering. We then proposed a tracking algorithm that can follow a moving target in 2D ultrasound images. The performances of this algorithm were validated through experiments showing that it could achieve good tracking up to the resolution provided by the image. Finally we demonstrated the great flexibility of our global framework at handling multiple kinds of feedback modalities and robotic systems through experiments performed in a multi-sensor context and using a system dedicated to needle insertion. We showed that it allows fusing the steering of the tip of a needle with the compensation of lateral motions of the tissues. Results obtained during experimental insertions in moving ex-vivo biological tissues demonstrated that performances compatible with the clinical context can be obtained for both tasks performed at the same time, which constitute a great contribution toward safe and accurate robotic needle insertion procedures. Conclusion Conclusions In this thesis we covered several aspects of robotic needle insertion under visual guidance. 
In Chapter 1 we first presented the clinical and scientific context of this work and the challenges associated to this context. In Chapter 2 we provided a review on the modeling of the interaction between a needle and soft tissues. We then proposed two 3D models of the insertion of a beveled tip needle in soft tissues and compared their performances. In Chapter 3 we addressed the issue of needle localization in 3D ultrasound (US) volumes. We first provided an introduction to US imaging and an overview of needle tracking algorithms in 2D and 3D US images. We proposed a 3D tracking algorithm that takes into account the natural artifacts that can be observed around the needle location. We used our needle model to improve the robustness of the needle tracking and proposed a method to update the model and take tissue motions into account. In Chapter 4 we first presented a review of techniques used to control the trajectory of a needle during its insertion and to define the trajectory that must be followed by the needle tip. We then proposed a needle steering framework that is based on the task function framework used to perform visual servoing. The framework allows a great flexibility and can be adapted to different steering strategies to control several kinds of needles with symmetric or asymmetric tips. We then validated our framework through several experimental scenarios using the visual guidance provided by cameras or a 3D US probe. In Chapter 5 we considered the case of patient motions during the needle insertion and provided an overview of methods that can be used to compensate for these motions. We then extended our steering framework to handle motion compensation using force feedback. We finally demonstrated the flexibility of our framework by performing needle steering in moving soft tissues using a dedicated needle insertion robot in a multi-sensor context. In the following we draw some conclusions concerning needle modeling, ultrasound visual feedback, needle steering under ultrasound guidance and compensation of tissue motions during needle insertion. Finally we present some perspectives for future developments of our work. Needle modeling We first focused on the modeling of the behavior of a flexible needle being inserted in soft tissues. Fast and accurate modeling of the interaction phenomena occurring during the insertion of a needle is a necessity for the control of a robotic system aiming at the assistance of the insertion procedure. We have seen that models based on kinematics are efficient and allow fast and reactive control. However their efficiency often comes at the cost of a limitation of the phenomena that they can take into account. On the contrary, almost every aspect of an insertion can be described using finite element modeling, from the deformations of the needle to the complex modifications that it can create on the tissues. Nevertheless this complexity can only be obtained through heavy parameterization that requires accurate knowledge of boundary conditions and time consuming computations. A trade-off should thus be found to take into account the main interaction phenomena and to keep a low level of complexity to stay efficient. Mechanics-based models offer such compromise by reducing their computation requirement while still being a realistic representation of the reality. 
In this thesis we proposed a 3D flexible needle model that is simple enough to yield real-time performance, while still being able to take into account the deformations of the needle body due to its interaction with the moving tissues at its tip and all along its shaft.

Ultrasound visual feedback

A second requirement for the development of a robotic system usable in clinical practice is its ability to monitor the state of the environment on which it is acting. For needle procedure assistance, this means knowing the state of the needle and of the tissues. Visual feedback is a great way to deal with this issue by means of the dense information it can provide on the environment. Additional conditions should also be fulfilled regarding the nature of the provided images. High image quality is only useful if the images can be obtained frequently. Conversely, images provided at a fast rate can only be used if they contain exploitable information. To this end, the ultrasound (US) modality is one of the best choices since it can provide relevant 2D or 3D data on a needle and its surrounding tissues in real time. Extracting this information in a reliable way is, however, a great challenge due to the inherent properties of this modality. We presented a review of the different artifacts that can appear in US images as well as current techniques used to localize a needle in these images. In order to be used in the context of a closed-loop control of a robotic system, needle detection algorithms should be fast and accurate. We contributed to this field by proposing a needle tracking method that directly exploits the artifacts created by the needle to find the location of its whole body in 3D US volumes. Ensuring the consistency of the tracking between successive volumes can be achieved by modeling the expected behavior of the needle. The modeling of the current state of the needle interaction with the tissues can also be improved by exploiting the measures available on the needle. Therefore, we fused our contributions by proposing a method to update our interaction model from the measures provided by several sensors and to use the model to improve the quality of the needle tracking.

Needle steering under ultrasound guidance

Once a good model of the insertion process is available along with a reliable measure of the needle location, the closed-loop robotic control of the insertion can be performed. We first reviewed the current techniques available to steer a needle in soft tissues and the different strategies used to define the trajectory of the tip. In order to remain as close as possible to a potential clinical application, we chose to focus on the steering of standard needles and did not consider the case of special designs. Two main methods are usually used in the literature to steer the needle: either manipulating the needle by its base, as is usually done by clinicians, or exploiting an asymmetry of the tip to deflect the needle from a straight path during the insertion. In order to stay generic, we designed a control framework to manipulate the needle by its base and also to exploit the asymmetry of the tip. This framework also directly uses our previous contribution on needle modeling by using the model in real time to compute the motions to apply to the base of the needle that yield specific motions of the needle tip. We performed several validation experiments to assess the performances of our framework in needle insertions performed under visual feedback.
In particular, we showed that the approach is robust to modeling errors and can adapt to tissue or target motions.

Tissue motion compensation during needle insertion

We addressed the compensation of patient motions during a needle insertion procedure. The body of a patient may be moving during a medical procedure for several reasons, the most common being physiological motions, which can hardly be avoided. Tissue motions can have several impacts on the results of a needle insertion procedure. The first one is that the targeted region inside the body is moving. This point should be taken into account by tracking the target and adapting the control of the needle trajectory accordingly. The steering framework that we proposed was already capable of handling this first issue and we complemented it with a target tracking algorithm in ultrasound images. The second important issue concerns the safety of the insertion, which can be compromised if the robotic system is not designed to follow the motions of the patient. In order to address this issue, we proposed an adaptation of our steering framework to be able to perform motion compensation using either visual feedback or force feedback. We demonstrated the performances of our method by performing the insertion of a flexible needle in moving ex-vivo biological tissues, while compensating for the tissue motions. Overall, the results of our experiments confirmed that our global steering framework can be adapted to several kinds of robotic systems and can also integrate the feedback provided by several kinds of sensing modalities in addition to the visual feedback, such as force measurements or the needle tip pose provided by an electromagnetic tracker.

Perspectives

We discuss here several extensions and further developments that could be made to complement the current work in terms of technical improvements and clinical acceptance. Both aspects are linked, since driving this work toward the operating room requires first identifying the specific needs and constraints of the medical staff, and then translating them into theoretical and technical constraints. In the following, we address the limitations that were already mentioned in this manuscript as well as new challenges arising from specific application cases.

Needle tracking

Speed of needle tracking: We presented a review of needle tracking techniques in ultrasound (US) images and volumes. In order to be usable, automatic tracking of the needle should be fast, such that it gives a good measure of the current state of the insertion. Hence, efficient tracking algorithms are a first necessity to achieve this goal, which is why we proposed a fast tracking algorithm. However, independently of the chosen tracking algorithm, current works mostly use 3D volumes reconstructed in Cartesian space. Since the reconstruction of a post-scan volume from the acquired pre-scan data requires some time, it introduces a delay in the acquisition. This is a first obstacle to fast control of the needle trajectory, since the needle has to be inserted slowly to ensure that the visual feedback is not completely out of date once available. The conversion time can be reduced using specific hardware optimizations; however, a simpler solution would be to directly use the pre-scan data to track the needle. While conversion of the whole post-scan volume remains desirable to provide easy visualization for the clinician, it is not necessary for a tracking algorithm.
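To illustrate why this is attractive, converting only the few points produced by a tracker is much cheaper than resampling an entire post-scan volume. The sketch below shows such a point-wise conversion from pre-scan coordinates (scan line, sample, frame) to Cartesian coordinates for a convex transducer mounted on a wobbling 3D probe; all geometric parameters (radii, angular pitches, sample spacing, element counts) are illustrative placeholders and not the parameters of the probe used in this work.

```python
import numpy as np

def prescan_point_to_cartesian(line_i, sample_s, frame_k,
                               n_lines=128, pitch_angle=np.deg2rad(0.6),
                               r_probe=0.04, sample_step=0.0003,
                               n_frames=31, frame_angle=np.deg2rad(1.5),
                               r_motor=0.027):
    """Convert one pre-scan coordinate (scan line, sample, frame) into 3D
    Cartesian coordinates for a convex transducer on a wobbling 3D probe.

    All geometric parameters are illustrative defaults; a real conversion
    must use the calibrated geometry of the probe.
    """
    # Angle of the scan line within its 2D frame (0 at the central line).
    theta = (line_i - (n_lines - 1) / 2.0) * pitch_angle
    # Radial distance of the sample from the transducer center of curvature.
    rho = r_probe + sample_s * sample_step
    # 2D point in the frame of the transducer (y along the central line).
    x2d = rho * np.sin(theta)
    y2d = rho * np.cos(theta) - r_probe
    # Sweep angle of the wobbling motor for this frame.
    phi = (frame_k - (n_frames - 1) / 2.0) * frame_angle
    # Rotate the 2D frame about the motor axis located r_motor behind
    # the transducer surface.
    x = x2d
    y = (y2d + r_motor) * np.cos(phi) - r_motor
    z = (y2d + r_motor) * np.sin(phi)
    return np.array([x, y, z])

# Example: convert a handful of tracked needle points instead of a volume.
points = [prescan_point_to_cartesian(64, s, 15) for s in range(0, 400, 100)]
print(np.round(points, 4))
```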
The needle could thus be tracked in the pre-scan space and only the tracked points converted to Cartesian space if needed. Taking a step further, it can be noted that acquiring the pre-scan volume also takes time. Tracking of the needle could be done frame by frame during the wobbling process to provide 2D feedback on the needle cross-section as soon as each frame becomes available.

Reliability of needle tracking: In order to be usable for a control application in a clinical context, the tracking of the needle should be reliable. Reliability in the case of tracking in 3D US is challenging because of the overall low quality and the artifacts present in the image, which is the reason why we proposed a method to account for needle artifacts. In general, even for the human eye, it can be difficult to find the location of the needle in a given US volume when other bright linear structures are present. Temporal filtering is usually applied to reduce the size of the region in which to search for the needle. In our case, we used a mechanical model of the needle to predict the position of the needle in the next volume and we updated the model to take tissue motions into account. However, large tissue motions or an external motion of the US probe can cause a large apparent motion of the needle in the volume that is not taken into account by the temporal filtering, resulting in a failure of the tracking. Following the motion of the other structures around the needle could be a solution to ensure the spatial consistency of the tracked needle position. Tracking the whole tissue region could be one approach, for example using methods based on optical flow [TSJ + 13]. Deep learning techniques could also be explored since they show ever-improving results for the analysis of medical images [LKB + 17].

Active tracking: Accurate tracking of the needle location in the US volume is very important for the smooth progress of the procedure. Servoing the position of the US probe could be a good addition to increase the image quality and ease the tracking process. A global optimization of the US quality could be performed [START_REF] Chatelain | Confidence-driven control of an ultrasound probe: Target-specific acoustic window optimization[END_REF]. A control scheme could also be designed to take into account the needle-specific US artifacts, such as intensity dropout due to the incidence angle, and to optimize the quality of the image specifically around the needle [START_REF] Chatelain | Real-time needle detection and tracking using a visually servoed 3d ultrasound probe[END_REF].

Needle steering

Framework improvement: The needle insertion framework based on task functions that we proposed can be extended in many ways. First, we did not directly consider the case of obstacle avoidance. This could easily be added by designing a specific avoidance task or by using trajectory planning to take into account sensitive regions [XDA + 09]. A higher-level control over the task priorities should also be added to adapt the set of tasks to the many different situations that can be encountered [START_REF] Mansard | Task sequencing for highlevel sensor-based control[END_REF]. For example, targeting tasks could be deactivated when a failure of the needle tracking is detected. A specific task could be designed to take priority in this case and to move the needle in such a way that it can easily be found by the tracking algorithm.
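One common way to implement such priority handling between tasks is the classical recursive null-space projection used in redundancy-based control, sketched below. The task Jacobians, errors and dimensions are random placeholders here; the sketch only illustrates how a high-priority task (for instance a "make the needle visible again" recovery task) can be served while lower-priority targeting tasks act in its null space.

```python
import numpy as np

def prioritized_velocity(tasks, n_dof, damping=1e-6):
    """Fuse a list of prioritized tasks into one control velocity.

    tasks : list of (J, e_dot_des) ordered from highest to lowest priority,
            with J of shape (k, n_dof) and desired task-space velocity
            e_dot_des of shape (k,).
    Uses damped pseudo-inverses and projects each task into the null space
    of all higher-priority ones (classical hierarchical scheme).
    """
    v = np.zeros(n_dof)
    P = np.eye(n_dof)                      # null-space projector so far
    for J, e_dot_des in tasks:
        JP = J @ P
        # Damped pseudo-inverse to stay well behaved near singularities.
        JP_pinv = JP.T @ np.linalg.inv(JP @ JP.T + damping * np.eye(J.shape[0]))
        v = v + P @ JP_pinv @ (e_dot_des - J @ v)
        P = P @ (np.eye(n_dof) - JP_pinv @ JP)
    return v

# Placeholder tasks: a 1D "recovery/visibility" task with top priority
# and a 3D targeting task below it, acting on a 6-DOF base velocity.
rng = np.random.default_rng(0)
J_recover, e_recover = rng.normal(size=(1, 6)), np.array([0.01])
J_target,  e_target  = rng.normal(size=(3, 6)), np.array([0.02, -0.01, 0.0])
v_base = prioritized_velocity([(J_recover, e_recover), (J_target, e_target)], 6)
print(v_base)
```

Damping the pseudo-inverses keeps the behavior well conditioned near task singularities, at the cost of a small tracking error on the lower-priority tasks.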
Active model learning: In our control framework we used a mechanics-based model of the needle to estimate the local effect of the control inputs on the needle shape. We proposed a method to update the local model of the tissues according to the available measures. A first improvement would be to explore the update of other parameters as well, such as the tissue stiffness. However, it is possible that the model does not always accurately reflect the real interaction between the needle and the tissues, mainly due to the complex nature of biological tissues. A model-less online learning of the interaction could be explored, using the correlation between the control motions applied to the needle and their real measured effect on the needle and tissue motions. The steering strategy could also be modified to optimize this learning process, for example by stopping the insertion and slightly moving the needle base in all directions to observe the resulting motions of the needle tip and target.

Clinical integration

Before integrating the proposed framework into a real clinical workflow, many points need to be taken into account and discussed directly with the clinical staff.

System registration: A first requirement for a good integration into the clinical workflow is that the system should be easy to use out-of-the-box, without requiring time-consuming registration before each operation. In the case where the insertion is done using 3D ultrasound (US) to detect both the needle and the target in the same frame, we have proposed a simple registration method to estimate the pose of the US probe in the frame of the needle manipulator. The method only requires two clicks from the operator through a GUI and is necessary anyway to initialize the tracking of the needle. We showed that this was sufficient to achieve good targeting performances thanks to the robustness of the control method; however, the estimation of the motions of the tissues proved to be more dependent on an accurate registration. An online estimation of the probe pose could be used to refine the initial registration. This may require additional sensors, such as fiber Bragg grating sensors integrated into the needle [PED + 10], to be able to differentiate between motion due to the tissues and motion due to the probe. In clinical practice, the US probe is also unlikely to stay immobile during the whole procedure. This can be, for example, because it is manually held by the clinician or because the field of view of the probe is too narrow and it has to be moved to follow the needle and the target. Online estimation would also be an advantage in those cases, and sensors such as electromagnetic trackers or external cameras could provide direct feedback on the probe pose. A specific mechanical design could also be used to mechanically link the probe to the needle manipulator [YPZ + 07]. In this case the probe pose is known by design and a registration step is unnecessary.

Tele-operation: The method we have proposed so far was aimed at performing a fully automated needle insertion. Nowadays, this still remains a major factor of rejection among the medical community. However, the clinician can easily be integrated into the framework. A first possibility is to consider the robot as an assistant which can perform some predefined automated tasks, such as a standby mode, during which only tissue motion compensation is performed, and an insertion mode, during which the needle is automatically driven toward the target.
The clinician would only have to select the set of tasks currently being performed. This way, the global flow of the operation would still be controlled by a human operator while the complex low-level fusion of the tasks would be handled by the system. However, this only leaves the clinician with partial control over the actual insertion procedure. A second possibility is to give the clinician full control over one of the tasks and leave the others to the system. For example, the clinician can control the trajectory of the needle tip while the robot transparently handles the orientation of the bevel and motion compensation. A haptic interface could be used to provide guidance on the optimal trajectory to follow to avoid some predefined obstacles and reach the target. Other kinds of haptic feedback could also be explored, such as feedback on the state of the automatic tasks performed by the system or on the compatibility of the clinician's task with the other tasks. Visual feedback could also be provided to the clinician such that the control of the tip trajectory could be defined directly in the frame of a screen instead of the frame of the real needle.

Clinical validation: We have shown that our steering framework could be adapted to several robotic systems. In order to move toward clinical integration, repeatability studies have to be conducted in biological tissues to assess the robustness of the method with a specific set of hardware components. These studies should be repeated for each envisioned set of hardware and the performances should be evaluated in accordance with the exact application that is considered. Performance requirements can indeed differ for each application, for example between moving lung biopsies and prostate brachytherapy.

Long-term vision: Finally, one can believe that fully autonomous surgeon robots will one day become reality. Contrary to a human surgeon, robotic systems are not limited to two hands and two eyes. They can have several dexterous arms that perform manipulations with more accuracy than a human. They can also integrate many feedback modalities at once, allowing a good perception of many different aspects of their environment. This is currently not enough to provide them with a good understanding of what is truly happening in front of them and of the best action that should be performed. However, with the ever-improving performance of artificial intelligence, it may be possible in the future that robotic systems have a better comprehension of, and adaptability to, their environment. They could then be able to choose and perform with great efficiency the task, for example a medical act, that is best adapted to the current situation considered as a whole. Before reaching this state, systems and techniques should first be developed that can autonomously perform narrow tasks with the best efficiency, such as a needle insertion. These could then be connected together to form a generic expert system.

Dans cette thèse nous nous concentrons sur le contrôle automatique de la trajectoire d'une aiguille flexible à pointe biseautée en utilisant la modalité échographique comme retour visuel. Nous proposons un modèle 3D de l'interaction entre l'aiguille et les tissus ainsi qu'une méthode de suivi de l'aiguille dans une séquence de volumes échographiques 3D qui exploite les artefacts visibles autour de l'aiguille.
Ces deux éléments sont combinés afin d'obtenir de bonnes performances de suivi et de modélisation de l'aiguille même lorsque des mouvements des tissus sont observés. Nous développons également une approche de contrôle par asservissement visuel pouvant être adaptée au guidage de differents types d'outils longilignes. Cette approche permet d'obtenir un contrôle précis de la trajectoire de l'aiguille vers une cible tout en s'adaptant aux mouvements physiologiques du patient. Les résultats de nombreux scénarios expérimentaux sont présentés et démontrent les performances des différentes méthodes proposées. Abstract The robotic guidance of a needle has been the subject of a lot of research works these past years to provide an assistance to clinicians during medical needle insertion procedures. However, the accurate and robust control of a needle insertion robotic system remains a great challenge due to the complex interaction between a flexible needle and soft tissues as well as the difficulty to localize the needle in medical images. In this thesis we focus on the ultrasound-guided robotic control of the trajectory of a flexible needle with a beveled-tip. We propose a 3D model of the interaction between the needle and the tissues as well as a needle tracking method in a sequence of 3D ultrasound volumes that uses the artifacts appearing around the needle. Both are combined in order to obtain good performances for the tracking and the modeling of the needle even when motions of the tissues can be observed. We also develop a control framework based on visual servoing which can be adapted to the steering of several kinds of needle-shaped tools. This framework allows an accurate placement of the needle tip and the compensation of the physiological motions of the patient. Experimental results are provided and demonstrate the performances of the different methods that we propose. Figure 1 . 1 : 11 Figure 1.1: Example of (a) continuum robot (taken from [CMC + 08]) and (b) concentric tubes (taken from [WRC09]). Figure 1 . 2 : 12 Figure 1.2: Illustration of several kinds of needle tip. Fig. 1.3. They are usually designed for a specific kind of intervention, such as prostate interventions under ultrasound (US) imaging [YPZ + 07] [HBLT12]. Special robots have also been designed to be compatible with the limitations imposed by computerized tomography (CT) scanners [MGB + 04], magnetic resonance imaging (MRI) scanners [MvdSK + 17] or both [ZBF + 08]. Figure 1 . 4 : 14 Figure 1.4: Example of special designs of the needle tip: (a) multi-segment needle (taken from [KFRyB11]), (b) one degree of freedom active pre-bent tip (taken from [AGL + 16]) and (c) two degrees of freedom active prebent tip (taken from [RvdBvdDM15]). Figure 1.5: Pictures of the robotic systems used for the experiments: (a) system used in France and (b) system used in the Netherlands. Figure 1 . 6 :Figure 1 161 Figure 1.6: Picture of the stereo camera system and one gelatin phantom used for the experiments. ( a ) a Figure 1.8: Picture of the needle used for the experiments in France. Figure 2 . 1 : 21 Figure 2.1: Illustration of the reaction forces applied to the needle tip by the tissues depending on the tip geometry. A symmetric tip, on the right, induces symmetric reaction forces. An asymmetric tip, on the left, induces asymmetric reaction forces which can modify the tip trajectory. Figure 2. 3 : 3 Figure 2.3: Illustration of finite element modeling (taken from (a) [ODGRyB13] and (b) [CAR + 09]). 
Figure 2.4: Illustration of mechanics-based models of needle-tissue interaction using either virtual springs or a continuous load. Figure 2 2 Figure 2.6: Illustration of the reaction forces applied on each side of the bevel (depicted on the right). Point O corresponds to the end of the 3D curve representing the needle. The current velocity of the needle tip v t defines the angle β in which the cutting occurs in the tissues. Figure 2 2 Figure 2.8: Picture of the setup used to acquire the trajectory of the tip of a real needle inserted in a gelatin phantom. The frame attached to the needle base is denoted by {F b }. {F b }, depicted in Fig. 2.8, and is one of the following • No motion (straight insertion) • Translation of 2 mm or -2 mm along x axis • Translation of 2 mm or -2 mm along y axis • Rotation of 3 • or -3 • around x axis • Rotation of 3 • or -3 • around y axis • Rotation of 90 • , -90 • or 180 • around z axis CHAPTER 2 .Figure 2 Figure 2 . 10 : 22210 Figure 2.9: Tip position obtained when a translation is applied along the x axis of the base frame between two insertion steps along the z axis. Measures are shown with solid lines, virtual springs model with long-dashed lines and two-body model with short-dashed lines. 2. 5 . 5 VALIDATION OF THE PROPOSED MODELS Rotation around x axis in the needle base frame {F b } Figure 2 . 11 : 211 Figure 2.11: Tip position obtained when a rotation is applied around the x axis of the base frame between two insertion steps along the z axis. Measures are shown with solid lines, virtual springs model with long-dashed lines and two-body model with short-dashed lines. Rotation around z axis in the needle base frame {F b } FinalFigure 2 . 14 : 214 Figure 2.14: Absolute final error between the position of the simulated needle tip and the measured position of the real needle tip. For each type of base motions, the mean and standard deviation are calculated using the final position error across 3 different insertions performed in the phantom. Figure 2 . 15 : 215 Figure 2.15: Average absolute error during the whole insertion between the position of the simulated needle tip and the measured position of the real needle tip. For each type of base motions, the mean and standard deviation are calculated over the whole length of the insertion and across 3 different insertions performed in the phantom. Figure 2 . 16 : 216 Figure 2.16: Computation time needed to get the shape of the needle from the base pose and position of the tissue model (virtual springs or spline segments). Figure 3 . 1 : 31 Figure 3.1: Illustration of the reconstruction of an ultrasound (US) scan line from the delay of propagation of the US wave. A first interface is encountered after a time t1 and a second one after time t 2 > t 1 . A part of the US wave reaches back the transducer after a total time of 2t 1 and a second part after 2t 2 . The distance of the first and second interfaces from the transducer are then computed as ct 1 and ct 2 , respectively. Figure 3 . 2 : 32 Figure 3.2: Illustration of the configuration of the piezoelectric elements (red) on linear and convex ultrasound transducers. Figure 3 . 3 : 33 Figure 3.3: Illustration of the effect of US beam width on the lateral and out of plane resolution of an US probe. Piezoelectric elements are represented in red and only a linear transducer is depicted here. . 8 )Figure 3 . 4 : 834 Figure 3.4: Illustration of 2D post-scan conversion for linear (top) and convex (bottom) transducers. Figure 3 . 
5 : 35 Figure 3.5: Illustration of 3D post-scan conversion for a convex transducer wobbling probe. Figure 3 3 Figure 3.6: Illustration of several phenomena leading to the appearance of needle artifacts in ultrasound images: needle reverberation artifact, side lobes (blue arrows) artifact and reflection out of the transducer. Figure 3 . 7 : 37 Figure 3.7: Two orthogonal cross sections of a 3D ultrasound (US) volume showing the artifacts present around a needle. The needle is in the plane of the picture on the left and the right picture shows a cross section of the needle. In both images the US wave is coming from the left. Figure3.9: Illustration of the gradient-based algorithm used to track the needle in the images acquired by two stereo cameras. Steps of the algorithm: 1) initialization of control points from the previous needle location, 2) detection of the minimum and maximum gradient in normal directions, 3) update of the control points and polynomial fitting, 4) tip detection using gradient along the needle shaft direction. Figure 3.10: Illustration of the sub-cost functions used for the local tracking algorithm. The voxel intensities in the dark blue box should be low, so they are subtracted from sub-cost J 1 (see eq.(3.39)), while the ones in the orange box should be high and are added to J 1 . Similarly, voxels in the green boxes are added to the sub-cost J 2 (see eq.(3.41)). Once the needle has been tracked laterally by maximizing the total cost function J (see eq.(3.38)), a research of the tip is performed along the tangent at the extremity of the needle to maximize the function J 4 . The voxel intensities in the light blue box are subtracted from J 4 (see eq.(3.44)) while the ones in the yellow box are added. Figure 3 . 11 : 311 Figure 3.11: Picture of the setup used to acquire volume sequences of needle insertions in a gelatin phantom. The Viper s650 robot holds the needle on the left and the Viper s850 robot holds the probe on the right. Figure 3 . 14 : 314 Figure 3.14: Illustration of the principle of Bayesian filtering Figure 3 . 16 : 316 Figure 3.16: Illustration of the unscented Kalman filter Figure 3 . 3 Figure 3.18: Illustration of the two-body model and definition of the state x considered for the unscented Kalman filter (UKF). Figure 3 . 3 Figure 3.19: Experimental setup used to validate the performances of the tissue motion estimation algorithm when using force feedback at the base of the needle and position feedback at the needle tip. Figure 3 Figure 3 33 Figure 3.20: Example of measured and estimated tip position as well as the corresponding absolute position and orientation estimation errors. Three different combinations of feedback are used for the update algorithm: force and torque feedback (FT, red), tip position and orientation feedback (PO, green) and feedback from all sources (FT+PO, blue). Figure 3 3 Figure3.22: Example of torques measured and estimated using three different combinations of feedback for the update algorithm: force and torque feedback (FT, red), tip position and orientation feedback (PO, green) and feedback from all sources (FT+PO, blue). Figure 3 3 Figure 3.23: Example of measured and estimated tissue position as well as the corresponding absolute position estimation error. Three different combinations of feedback are used for the update algorithm: force and torque feedback (FT, red), tip position and orientation feedback (PO, green) and feedback from all sources (FT+PO, blue). 
Figure 3 3 Figure 3.25: Mean over time and across five experiments of the absolute error between the real and modeled tip position obtained for the different update methods and two different update rates. Figure 3.26: Example of measured and simulated positions of the needle tip during an insertion in gelatin while lateral motions are applied to the phantom. Five different models and update methods are used for the needle tip simulations. The measured tissue motions are shown in (a), the different tip positions in (b) and the absolute error between the measured and simulated tip positions in (c). Figure 3 . 3 Figure3.27: Two orthogonal views of a sequence acquired during a needle insertion in gelatin. Different models and update methods are used for the needle tip simulations. Method 1: rigid needle; method 2: flexible needle; method 3: flexible needle with extremity of the tissue spline updated from the measured tip position; method 4: flexible needle with tissue spline position updated with lateral tissue motion estimation; method 5: flexible needle with tissue spline updated with lateral tissue motion estimation and extremity from the measured tip position. The tissue spline of the different models are overlaid on the images as colored lines. Method 1 does not have any cut path and methods 2, 3, 4 and 5 are depicted in green, blue, red and yellow, respectively. The real needle can be seen in black, although it is mostly recovered by the tissue splines associated with methods 4 and 5. Overall only the tissue splines of method 4 and 5 can follow the real shape of the path cut in the gelatin. Figure 3 3 Figure3.28: Example of absolute error between the measured and simulated positions of the needle tip when using an update rate of 1 Hz for the estimation of the tissue motions. Figure 3 3 Figure 3.29: Example of tissue motions measured and estimated using the update method 4 with the position feedback obtained from cameras. Two update rates are compared: (a) fast update rate corresponding to the acquisition with cameras, (b) slow update rate simulating the acquisition with 3D ultrasound. Overall the estimations follow the real motions of the tissues. Figure 3 3 Figure3.31: Position of the tip in the 3D ultrasound volume obtained by manual segmentation and using different needle tracking methods. The insertion is performed along the y axis of the probe while the lateral motion of the tissues is applied along the z axis. One tracking method is initialized without model of the needle (blue), one is initialized from a model of the needle that does not take into account the motions of the tissues (green) and one is initialized from a model updated using the tissue motion estimation algorithm presented in section 3.5.2 (red). Figure 3 3 Figure 3.32: Illustration of the needle tracking in two orthogonal cross sections of a 3D ultrasound volume acquired near the end of the insertion. The result of the tracking initialized without model is represented by the blue curve, the tracking initialized from a non updated model by the green curve and the tracking initialized from an updated model by the red curve. Without information on the needle motion (blue curve), the tracking fails and get stuck on an artifact occurring at the surface of the tissues (blue arrow). Updating only the needle base leads to a initialization of the tracking around another bright structure that is tracked instead of the real needle (green curve). 
Taking into account the tissue motions allows a better initialization between two acquisitions, such that the tracking can find the needle (red curve). Figure 4.1: Illustration of the different kinds of flexible needle steering methods: (a) tip-based steering of a needle with asymmetric tip (b) base manipulation of a needle with symmetric tip (c) base manipulation of a needle with asymmetric tip. (a) Classical hierarchical formulation (b) Singularity robust formulation Figure 4 . 2 : 42 Figure4.2: Illustration of the task function framework in the case of two near incompatible tasks (n = 2) and a control vector with two components v x and v y (m = 2). Each E i is the set of control inputs for which the task i is fulfilled. Each S i is the input vector obtained using a single task i in the classical formulation (4.6). C is the input vector obtained using both tasks in the classical formulation (4.6). The same input vector C is obtained using the hierarchical formulation (4.13). Each R i is the input vector obtained using the singularity robust formulation (4.16) when the task i is given the highest priority. The contributions due to tasks 1 and 2 are shown with blue and red arrows, respectively, when the task 1 is given the highest priority and with green and yellow arrows, respectively, when the task 2 is given the highest priority. Figure 4 4 Figure 4.3: Illustration of the different needle base poses used to estimate the Jacobian matrices associated to the different features. Figure 4.3 shows an illustration of the frames corresponding to each r i . Using the first order of the Taylor expansion we then have s(r i ) s(r) + δtJ s v i . (4.25) Figure 4 . 4 : 44 Figure 4.4: Illustration of different geometric features that can be used to define task functions for the general targeting task. Figure 4 . 6 : 46 Figure 4.6: Picture of the setup used to test the hybrid base manipulation and duty-cycling controller. Figure 4 . 7 : 47 Figure 4.7: Final views of the front camera at the end of 4 insertions with different controls. The crosses represent the target. (a) Straight insertion with an initially aligned target: the target is missed due to tip deflection. (b) Duty-cycling control with a target shifted 1 cm away from the initial needle axis: duty-cycling control is saturated and the target is missed due to insufficient tip deflection. The target can be reached in both cases using the hybrid control framework ((c) aligned target and (d) shifted target). Figure 4 . 8 : 48 Figure 4.8: Measure of the lateral distance between the needle tip axis and the target during 4 insertions with different controls. (a) Straight insertion with an initially aligned target: the target is missed due to tip deflection.(b) Duty-cycling control with a target shifted 1 cm away from the initial needle axis: duty-cycling control is saturated and the target is missed due to insufficient tip deflection. The target can be reached in both cases using the hybrid control framework ((c) aligned target and (d) shifted target). In each graph, the purple sections marked "DC" correspond to duty-cycling control and the red sections marked "BM" correspond to base manipulation. Figure 4 4 Figure 4.9: Picture of the setup used to test the performances of the different safety tasks. (a) Initial state, Front camera (b) Initial state, side camera (c) Final state, front camera (d) Final state, side camera Figure 4 . 10 : 410 Figure 4.10: Views of the front and side cameras at the beginning and end of one experiment. 
The green line represents the needle segmentation and the target set for the controller is represented by the red cross. Figure 4 . 11 : 411 Figure 4.11: Measure of the lateral distance between the needle tip axis and the target during the insertions. Each graph shows a set of five insertions performed using one specific kind of safety task. Measures are noisy at the beginning of the insertion due to the distance between the needle tip and the target. Bending energy reduction Base axis / insertion point angle reduction Figure 4 . 4 Figure 4.12: Mean value of the final distance between the needle tip axis and the target. The mean is taken across the five experiments for each kind of safety task. Figure 4 . 13 : 413 Figure 4.13: Value of the distance between the needle and the initial position of the insertion point at the tissue surface during the insertions. Each graph shows a set of five insertions performed using one specific kind of safety task. Base axis / insertion point angle reduction Figure 4 . 14 : 414 Figure 4.14: Mean value of the distance between the needle and the initial position of the insertion point at the tissue surface. The mean is taken over time and across the five experiments for each kind of safety task. Figure 4 . 15 : 415 Figure 4.15: Value of the energy of bending stored in the needle during the insertions. Each graph shows a set of five insertions performed using one specific kind of safety task. Figure 4 . 16 : 416 Figure 4.16: Mean value of the energy of bending stored in the needle. The mean is taken over time and across the five experiments for each kind of safety task. Figure 4 . 17 : 417 Figure 4.17: Value of the angle between the needle base axis and the initial position of the insertion point at the tissue surface during the insertions. Each graph shows a set of five insertions performed using one specific kind of safety task. Figure 4 . 4 Figure 4.18: Mean value of the angle between the needle base axis and the initial position of the insertion point at the tissue surface. The mean is taken over time and across the five experiments for each kind of safety task. Estimated lateral distance using needle model Figure 4 . 4 Figure 4.19: Lateral distance between the needle tip axis and the target during a controlled insertion of the needle while lateral motions are applied to the phantom. (a) distance measured using the tracking of the needle, (b) distance estimated using the needle model. Two insertions are performed without update of the model to account for tissue motions (blue and green lines) and two insertions are performed while the model is fully updated (red and black lines). Figure 4 . 4 Figure 4.20: State of two needle models overlaid on the camera views at the end of a needle insertion. The blue cross represents the virtual target.Yellow and blue lines are, respectively, the needle and tissue spline curves of a model updated using only the pose feedback of the needle manipulator, such that the position of the tissue spline (blue) is not updated during the insertion. Red and green lines are, respectively, the needle and tissue spline curves of a model updated using the pose feedback of the needle manipulator and the visual feedback, such that the position of the tissue spline (green) is updated during the insertion. Targeting performances: The lateral distance between the axis of the measured needle tip and the target during the insertions are shown in Fig.4.22. Two cross sections of the US volume acquired at the end of the Figure 4 . 
4 Figure 4.22: Measure of the lateral distance between the needle tip axis and the target during the insertions. Insertions are performed either in gelatin phantom or in porcine liver embedded in gelatin. Highest task priority is given to either the targeting or the safety tasks. (a) Gelatin, targeting tasks with highest priority (b) Gelatin, safety task with highest priority (c) Porcine liver, targeting tasks with highest priority (d) Porcine liver, safety task with highest priority Figure 4 . 4 Figure 4.23: Cross sections of an ultrasound volume at the end of the insertion for different experimental conditions. The result of the needle tracking is overlaid as a red curve and the interaction model is projected back in the two cross sections with the needle spline in blue and the tissue spline in yellow. The target is shown as a red cross. The green dashed lines indicates the surface of the tissues. Figure 4 . 4 Figure 4.24: Distance between the needle shaft and the initial position of the insertion point at the surface during the insertions. The graphs show the value of the distance measured in the acquired ultrasound volume or estimated from the model. Insertions are performed either in gelatin phantom or in porcine liver embedded in gelatin. Highest task priority is given to either the targeting or the safety tasks. Figure 5 5 Figure 5.2: Illustration of the performance of the target tracking algorithm.The motion described by (5.5) is applied to the gelatin phantom with a period T = 5s. The global mean tracking error is 3.6 mm for this experiment. However it reduces to 0.6 mm after compensating for the delay of about 450 ms introduced by the data acquisition. Figure 5 . 4 : 54 Figure 5.4: Picture of the setup used to compare the force exerted at the needle base using different configurations for the insertion. Figure 5.6: Mean value of the absolute lateral force exerted at the base of the needle. The mean is taken over time and across the four experiments for each configuration. Figure 5 5 Figure 5.7: Picture of the setup used to perform needle insertions toward a target embedded in an ex-vivo bovine liver while compensating for lateral motions of the phantom. Figure 5.9: Measures during an insertion in a bovine liver embedded in gelatin: (a) Measure of the tissue motions from the UR5 odometry, (b) measure of the needle tip position from the electromagnetic tracker and measure of the target position from the tracking in 2D ultrasound, (c) measure of the lateral distance between the target and the needle tip. Overall the target can be reached with good accuracy even if the tissues are moving. Figure 5 . 10 : 510 Figure 5.10: Target tracking in ultrasound images during a needle insertion in a bovine liver embedded in gelatin. The boundaries of the target are not always clearly visible. The needle being inserted can slightly be seen coming from the right. Figure 5 . 11 : 511 Figure 5.11: Target tracking in ultrasound images during a needle insertion in a gelatin phantom. The needle can be seen coming from the right. Figure 5 .Figure 5 . 13 : 5513 Figure5.12: Slice extracted from a high resolution 3D ultrasound volume acquired at the end of an insertion in the gelatin phantom. The needle is coming from the right and the needle tip is shown with a red cross. The line on the left corresponds to a wooden stick used to maintain the spherical target during the conception of the phantom. 
d Position of the base of the needle p 0,i Rest position of a virtual spring p N,i Needle point associated to a virtual spring T tip Normal torque at the needle tip x, y, z Generic axes of a frame χ i Characteristic function of a curve defining its domain of definition φ Angle between the wheels of a bicycle model Π j Projector onto a virtual spring plane P i v t Needle tip translation velocity v ins Needle insertion velocity θ Orientation of the wheel of a kinematic model {F b } Frame of the needle base {F t } Frame of the needle tip a Length of the bevel along the needle axis b Length of the face of a bevel d in Inner needle diameter d out Outer needle diameter E Needle Young's modulus E N Bending energy stored in the needle E T Deformation energy stored in the tissues I Second moment of area of the needle section Stiffness per unit length of the interaction between the needle and the tissues K nat Natural curvature of the trajectory of an asymmetric needle tip u k Control input vector at time index k w Process noise vector for Bayesian filtering W k Process noise matrix of a linearized system for Kalman filtering w k Process noise vector at time index k x State vector for Bayesian filtering x, y, z Generic axes of a frame x a Augmented state for unscented Kalman filtering x k State vector at time index k y Measure vector for Bayesian filtering y k Measure vector at time index k δ Dirac delta function δφ Angular displacement of the ultrasound transducer of a wobbling probe between the beginning of two frame acquisitions Binary variable indicating the direction of sweeping of the ultrasound transducer of a wobbling 3D probe d Estimated unit vector tangent to a point along a needle f b Estimated lateral force exerted at the base of a needle pj Estimated position of a point along a needle tb Estimated lateral torque exerted at the base of a needle xk State estimate after the update step for Bayesian filtering xk|k-1 State estimate after the prediction step for Bayesian filtering ŷk Measure estimate after the prediction step for Bayesian filtering λ Wavelength of an ultrasound wave . 
Floor operator X i Particle for a particle filtering or sigma point for unscented Kalman filtering Y i Measure vector associated to a sigma point for unscented Kalman filtering 250 LIST OF SYMBOLS φ Angle between the center and current orientation of the ultrasound transducer of a 3D wobbling probe φ Phase of the tissue motion for the breathing motion profile ρ Mass density of a medium θ Angle of a rotation associated to the angle-axis rotation vector θu ỹk Innovation vector for Bayesian filtering × Cross product operator between two vectors {F b } Frame of the needle base {F t } Frame of the needle tip {F w } Fixed reference frame associated to a robot atan2(y, x) Multi-valued inverse tangent operator b Amplitude of the 1D tissue motion for the breathing motion profile c Speed of sound in soft tissues 1540 m.s -1 Distance between an interface in the tissues and the ultrasound transducer d o Acquisition depth of an ultrasound probe f Frequency of an ultrasound wave f s Sampling frequency of the radio-frequency signal g Generic probability density function I post Post-scan image I pre Pre-scan image J Cost function used for the needle tracking J 1 , J 2 , J 3 , J 4 Sub-cost functions used for the needle tracking K Bulk modulus of a medium k Time index for Bayesian filtering L Length of a polynomial curve l d Curvilinear coordinate of point along the needle L d , L n , L t Lateral integration distances for the needle tracking sub-costs 251 LIST OF SYMBOLS l j Curvilinear coordinate of point along the needle L p Distance between two piezoelectric elements along an ultrasound transducer L s Distance between samples along a scan line M Number of points along the needle taken as measures for unscented Kalman filtering m 1D breathing motion profile applied to the tissues N Number of control points defining the polynomial curve for the needle tracking n Coefficient tuning the shape of the motion for the breathing motion profile n Number of segments in a spline of the two-body model N ν Dimension of the measurement noise vector for Bayesian filtering N f Number of frames acquired during a sweeping motion of the ultrasound transducer of a 3D wobbling probe N l Number of scan lines N p Number of particles of a particle filter n p Number of piezoelectric elements of an ultrasound transducer N s Number of samples acquired along a scan line N u Dimension of the control input vector for Bayesian filtering N w Dimension of the process noise vector for Bayesian filtering N x Dimension of the state vector for Bayesian filtering N y Dimension of the measure vector for Bayesian filtering p Generic probability density function R Radius of curvature of a convex transducer r Polynomial order of spline segments R m Radius of the circular trajectory described by the ultrasound transducer of a 3D wobbling probe r N Radius of the needle expressed in voxels in an ultrasound volume λ δ Positive control gain for the task associated to the distance between the rest position of the insertion point and the needle point at the surface of the tissues λ γ Positive control gain for the task associated to the angle between the needle base axis and the rest position of the insertion point λ σ Positive control gain for the task associated to the angle between the bevel cutting edge and a target λ θ Positive control gain for the task associated to the angle between the needle tip axis and a target λ d Positive control gain for the task associated to the distance between the needle tip axis and a target λ δ Positive control gain 
for the task associated to the vector between the rest position of the insertion point and the needle point at the surface of the tissues λ δm Positive control gain for the task associated to the mean deformation of the tissues along the needle shaft λ E N Positive control gain for the task associated to the bending energy stored in the needle ω z,max Maximal rotation velocity around the needle axis σ Angle between the bevel cutting edge and a target σ i Singular value of a matrix τ i Singular value of the pseudo-inverse of a matrix v t Translation velocity of the needle tip v t,z Translation velocity of the needle tip along its axis v tip Scalar insertion velocity of the needle tip θ Angle between the needle tip axis and a target θ DC Angle of rotation of the tip during one cycle of duty-cycling control J Estimation of the Jacobian matrix J {F b } Frame of the needle base {F t } Frame of the needle tip {F w } Fixed reference frame associated to a robot 256 LIST OF SYMBOLS atan2(y, x) Multi-valued inverse tangent operator d Distance between the needle tip axis and a target DC Duty cycle in duty-cycling control E N Bending energy stored in the needle i Level of priority of a task K ef f Effective curvature of the trajectory of an asymmetric needle tip during duty-cycling control K nat Natural curvature of the trajectory of an asymmetric needle tip L N Length of the spline curve representing the needle model L DC Insertion length of a cycle during duty-cycling control L f ree Length of the needle that is outside the tissues L ins Length of the insertion phase in duty-cycling control L rot Length of the rotation phase in duty-cycling control L thres Threshold length before the addition of a tissue spline segment in the two-body model L thres Threshold length between the addition of two successive virtual springs m Dimension of the control input vector n Dimension of the task vector n Number of segments in a spline of the two-body model r Polynomial order of spline segments t Generic time x 0 , y 0 , z 0 Components of the rest position of the insertion point in the frame of the needle base x t , y t , z t Components of the position of a target in the frame of the needle tip Chapter 5 . 
d Subscript used to indicate the desired value of a quantity 0 3×5 3 by 5 null matrix a Initial position of the tissues for the breathing motion profile b Amplitude of the tissue motion for the breathing motion profile c N Spline curve representing the needle e Task vector f l Lateral force exerted at the base of the needle I 3 3 by 3 identity matrix J Generic Jacobian matrix relating the variation of the task vector with respect to the control inputs J γ Jacobian matrix associated to the angle between the needle base axis and the rest position of the insertion point J σ Jacobian matrix associated to the angle between the bevel cutting edge and a target J f Jacobian matrix associated to the lateral force exerted at the base of the needle J ω U R ,z Jacobian matrix associated to the rotation velocity of the robot around the needle axis J v U R ,z Jacobian matrix associated to the translation velocity of the tip of the needle insertion device along the needle axis J v U R Jacobian matrix associated to the translation velocity of the tip of the needle insertion device J vt,z Jacobian matrix associated to the translation velocity of the needle tip along its axis J v N ID Jacobian matrix associated to the translation velocity of the translation stage of the needle insertion device J θ Jacobian matrix associated to the angle between the needle tip axis and a target m Breathing motion profile applied to the tissues v r Control inputs vector of the robotic system consisting of the UR3 robot and the needle insertion device v N ID Control inputs vector of the needle insertion device v U R Control inputs vector of the UR3 robot 258 LIST OF SYMBOLS x, y, z Generic axes of a frame γ Angle between the needle base axis and the rest position of the insertion point λ γ Positive control gain for the task associated to the angle between the needle base axis and the rest position of the insertion point λ σ Positive control gain for the task associated to the angle between the bevel cutting edge and a target λ θ Positive control gain for the task associated to the angle between the needle tip axis and a target λ f Positive control gain for the task associated to the lateral force exerted at the base of the needle ω z,max Maximal rotation velocity around the needle axis ω N ID Rotation velocity of the rotation stage of the needle insertion device ω U R ,z Rotation velocity of the robot around the needle axis σ Angle between the bevel cutting edge and a target v U R ,z Translation velocity of the tip of the needle insertion device along the needle axis v U R Translation velocity of the tip of the needle insertion device v t,z Translation velocity of the needle tip along its axis v tip Scalar insertion velocity of the needle tip v N ID Translation velocity of the translation stage of the needle insertion device θ Angle between the needle tip axis and a target {F b } Frame of the needle base {F t } Frame of the needle tip {F w } Fixed reference frame associated to a robot E Needle Young's modulus I Second moment of area of the needle section L thres Threshold length before the addition of a tissue spline segment in the two-body model Résumé Le guidage robotisé d'une aiguille a été le sujet de nombreuses recherches ces dernières années afin de fournir une assistance aux cliniciens lors des procédures médicales d'insertion d'aiguille. 
Cependant le contrôle précis et robuste d'un système robotique pour l'insertion d'aiguille reste un grand défi à cause de l'interaction complexe entre une aiguille flexible et des tissus ainsi qu'à cause de la difficulté à localiser l'aiguille dans les images médicales. Table 1 . 1 Needle type Chiba biopsy needle Chiba biopsy stylet Reference Angiotech MCN2208 Aurora Needle 610062 Young's modulus 200 GPa 200 GPa Outer diameter 22G (0.7 mm) 23.5G (0.55 mm) Inner diameter 0.48 mm 0.5 mm Length (cm) 12.6 from 0.8 to 10.8 Tip type Chiba Chiba Tip angle 25 • 25 • 1: Characteristics of the needles used in the experiments. The lengths are calculated from the base of the needle holder to the needle tip. Table 3 . 3 1: Mean over time and across five experiments of the absolute error between the real and modeled tip position obtained for the different update methods and two different update rates. Absolute position error (mm) Update rate 30 Hz 1 Hz Method 1 5.9±3.9 5.9±3.9 Method 2 6.1±3.0 6.1±3.0 Method 3 2.1±1.6 1.9±1.5 Method 4 0.6±0.3 0.9±0.5 Method 5 0.4±0.2 0.7±0.5 Table 5 . 5 3: Summary of the lateral force measured at the base of the needle during the insertions performed in a gelatin phantom and a bovine liver embedded in gelatin. The mean and standard deviation of the lateral force are calculated over time. Phantom Max Force (mN) Mean Global mean Gelatin Liver 630 870 189 ± 119 154 ± 92 494 162 ± 88 753 217 ± 135 773 268 ± 176 1022 137 ± 127 538 61 ± 72 601 41 ± 64 626 64 ± 61 474 136 ± 127 198 ± 133 88 ± 103 3D US probe • The second task controls the bevel orientation via the angle σ, as defined by (4.51), (4.54) and (4.56). The maximal rotation speed ω z,max is set to 60 • .s -1 and the gain λ σ is set to 10 (see (4.56)) such that the maximal rotation velocity is used when the bevel orientation error is higher than 6 • . • The third task is the safety task used to reduce the tissue stretch at the surface δ, as defined by (4.61), (4.62) and (4.63). The control gain λ δ is set to 1. Note that in a general clinical context it is not always possible to see the insertion point at the tissue surface due to the configuration of the probe with respect to the insertion site. So we choose here to use the estimation of δ using the needle model instead of the real measure as an input of the safety task. Two sets of priority levels are tested. In the first set, the two targeting tasks (first and second tasks) have the same priority and the safety task (third task) has a lower priority. The final velocity screw vector v b applied Motion compensation using force feedback In previous sections we have defined a way to use force feedback in our needle steering framework as well as a method to track a moving target in 2D ultrasound (US) images. Therefore, in this section we present the results of experiments that we conducted to test our control framework in the case of a needle insertion performed under tissue motions. Force sensitivity to tissue motions We first propose to compare the sensitivity of the force measurements depending on the configuration of the needle. Two configurations are mostly used to perform robotic needle insertions. The first one is mainly used to performed base manipulation and consists in holding the needle by its base, leaving a part of the body of the needle outside the tissues during the insertion. The second configuration is mainly used to performed tip-based steering. 
The needle is then usually maintained in an insertion device such that only the part outside of the device can bend. The device is placed near the surface such that the needle is directly inserted inside the tissues, with no intermediate length left free to bend between the device and the tissues. In the following we perform needle insertions using different configurations and compare the interaction forces measured at the base of the needle. Experimental conditions (setup in the Netherlands): We use the needle insertion device (NID) attached to the UR3 robot arm. The biopsy needle with the embedded electromagnetic (EM) tracker is placed inside the NID and is inserted in a gelatin phantom. The ATI force torque sensor is used to measure the interaction efforts exerted at the base of the needle. A picture of the setup is shown in Fig. 5.4. The position of the EM tracking system is registered in the frame of the UR3 robot before the experiments using the method that was presented in section 3.6.1. The force torque sensor is also calibrated beforehand to remove the sensor biases and the effect of the weight of the NID in order to reconstruct the interaction forces applied to the base of the needle (see Appendix A). A fixed virtual target is defined just before the beginning of the insertion such that it is at a fixed position in the initial frame of needle tip. We use the two-body model presented in section 2.4.2 with polynomial needle segments of order r = 3 to represent the part of the needle that is outside of the NID, from the frame {F b } depicted in Fig. 5.4 to the needle tip. We fix the length of the needle segments to 1 cm, resulting in n = 1 segment of 8 mm when the needle is retracted to the maximum inside the NID and n = 11 segments with the last one measuring 8 mm when the needle is fully outside of the NID. We use a rather hard phantom, such that we set the List of Publications Appendix A Force sensor calibration This appendix presents the registration process and the computation method, used in the experiments of chapters 3 and 5, to retrieve the interaction forces and torques applied at the base of the needle without the gravity component due to the mass of the needle insertion device. The force f ∈ R 3 measured by the sensor can be expressed according to where m d is the mass of the needle insertion device (NID), g ∈ R 3 is the gravity vector, b f ∈ R 3 is the sensor force bias and f ext ∈ R 3 is the rest of the forces applied to the sensor, with each vector defined in the sensor frame. The torque t ∈ R 3 measured by the sensor can be expressed similarly according to where × denotes the cross product operator, c d ∈ R 3 is the position of the center of mass of the NID, b t ∈ R 3 is the sensor torque bias and t ext ∈ R 3 is the rest of the torques applied to the sensor, with again each vector defined in the sensor frame. Note that f ext and t ext correspond to the contribution of the interaction forces and torques that we want to measure. Let us define g w the gravity vector expressed in the world reference frame and w R f ∈ SO(3) the rotation from the world frame to the force sensor frame such that During the insertion procedure, the contribution of the gravity and the biases can be removed depending on the pose of the NID to isolate the interaction forces. 
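With the quantities defined above, the measurement model described in this appendix presumably takes the form (the exact sign and frame conventions depend on the sensor mounting and are assumed here):

\[
\mathbf{f} = m_d\,\mathbf{g} + \mathbf{b}_f + \mathbf{f}_{ext},
\qquad
\mathbf{t} = \mathbf{c}_d \times (m_d\,\mathbf{g}) + \mathbf{b}_t + \mathbf{t}_{ext},
\qquad
\mathbf{g} = {}^{w}\mathbf{R}_f^{\top}\,\mathbf{g}_w ,
\]

where \(\mathbf{g}\) is the gravity vector expressed in the sensor frame; whether the transpose appears in the last relation depends on the convention chosen for \({}^{w}\mathbf{R}_f\).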
The interaction forces f_b ∈ R³ and torques t_b ∈ R³ applied to the base of the needle can then be expressed in the needle base frame, using fR_b ∈ SO(3) and fT_b ∈ R³, respectively the rotation and translation from the sensor frame to the needle base frame. In practice, only the orientation wR_e ∈ SO(3) of the end effector of the UR3 is known thanks to the robot odometry, so that wR_f is deduced from wR_e and the known relative orientation between the end effector and the sensor. Noting g_i the gravity vector associated with the i-th orientation of the UR3 end effector, b_f and m_d can first be computed by minimizing a cost function J_f defined over the set of calibration measurements, which leads after calculation to a closed-form solution. Then b_t and c_d can be computed by minimizing a second cost function J_t, whose closed-form solution involves the components g_i,x, g_i,y and g_i,z of g_i.
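The following sketch shows one possible implementation of this calibration step. It assumes that J_f and J_t are sums of squared residuals of the additive force and torque models over N static poses, which yields two small linear least-squares problems; the exact cost functions and closed-form expressions of the thesis are not reproduced here, so this formulation is only an assumed, illustrative one.

```python
import numpy as np

def skew(v):
    """Cross-product matrix such that skew(v) @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def calibrate_force(f_meas, g_sensor):
    """Estimate the NID mass m_d and the force bias b_f.

    f_meas   : (N, 3) raw force readings, one per static robot pose
    g_sensor : (N, 3) gravity vectors g_i expressed in the sensor frame
    Assumed model: f_i = m_d * g_i + b_f, solved in least squares for (m_d, b_f).
    """
    N = f_meas.shape[0]
    A = np.zeros((3 * N, 4))
    A[:, 0] = g_sensor.reshape(-1)            # column multiplying m_d
    A[:, 1:] = np.tile(np.eye(3), (N, 1))     # columns multiplying b_f
    x, *_ = np.linalg.lstsq(A, f_meas.reshape(-1), rcond=None)
    return x[0], x[1:]

def calibrate_torque(t_meas, g_sensor, m_d):
    """Estimate the NID center of mass c_d and the torque bias b_t.

    Assumed model: t_i = c_d x (m_d * g_i) + b_t = -skew(m_d * g_i) c_d + b_t.
    """
    N = t_meas.shape[0]
    A = np.zeros((3 * N, 6))
    for i in range(N):
        A[3 * i:3 * i + 3, :3] = -skew(m_d * g_sensor[i])
        A[3 * i:3 * i + 3, 3:] = np.eye(3)
    x, *_ = np.linalg.lstsq(A, t_meas.reshape(-1), rcond=None)
    return x[:3], x[3:]
```

With a dozen well-spread end-effector orientations, these two fits recover m_d, c_d, b_f and b_t, which can then be fed to the run-time compensation shown in the previous sketch.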
477,821
[ "1247400" ]
[ "490899", "491336" ]
00175403
en
[ "phys" ]
2024/03/05 22:32:10
2007
https://hal.science/hal-00175403/file/Ramos_JA073412H-31-44_Revised.pdf
Yuxia Luan Laurence Ramos email: ramos@lcvn.univ-montp2.fr Real-time observation of polyelectrolyte-induced binding of charged bilayers We present real-time observations by confocal microscopy of the dynamic behavior of multilamellar vesicles (MLVs), composed of charged synthetic lipids, when put in contact with oppositely charged polyelectrolyte (PE) molecules. We find that the MLVs exhibit astonishing morphological transitions, which result from the discrete and progressive binding of the charged bilayers induced by a high PE concentration gradient. Our physical picture is confirmed by quantitative measurements of the fluorescence intensity as the bilayers bind to each other. The shape transitions lead eventually to the spontaneous formation of hollow capsules, whose thick walls are composed of lipid multilayers condensed with PE molecules. This class of objects may have some (bio)technological applications. Introduction Liposomes are often studied as simplified models of biological membranes 1,2 and are extensively used in the industrial area ranging from pharmacology to bioengineering. [START_REF] Lasic | Handbook of Biological Physics, 1a ,Structure and Dynamics of Membranes[END_REF] The biomimetic properties of the membrane also make liposomes attractive as vessels for model systems in cellular biology. [START_REF] Karlsson | [END_REF]5 Composite systems of lipid bilayer and polymers have received special attention due to their similarity with living systems such as plasma membrane and various organelle membrane, that mainly consist of complex polymers and lipids. 6 Experimental investigations of vesicle/polymer mixed systems also aim at improving the stability and at controlling the permeability of liposomes for drug delivery or targeting, or for gene therapy. [7][8][9][10][11][12] For example, stabilization is usually obtained by loose hydrophobic anchoring of water soluble chains that do not significantly perturb the bilayer organization, such as alkyl-modified Dextran or Pullulan with a low degree of substitution, 13,14 long poly(ethylene glycol) capped with one or two lipid anchors per macromolecule, or poloxamers. 15,16 It was shown recently 7,[17][18][19][20] that water-soluble polymers, upon binding to vesicles, can markedly affect the shape, curvature, stiffness, or stability of the bilayer. However, the mechanisms of these polymer-induced reorganizations of membranes remain sometimes conjectural, although it is clear that the hydrophobicity of the polymer plays an important role. On the other hand, interactions between surfactants and polymers in bulk solution are extensively investigated, due to their numerous applications from the daily life to the various industries (e.g. pharmaceutical, biomedical application, detergency, enhanced oil recovery, paints, food and mineral processing). [START_REF] Goddard | Interactions of Surfactants with Polymer and Proteins[END_REF][START_REF] Zintchenko | [END_REF][23][24][25] Charged amphiphilic molecules, like lipids or surfactants, and oppositely charged polyelectrolytes (PE) spontaneously form stable complexes, which are very promising objects, because of their great variability in structures and properties. [26][27] In this context, interactions of charged bilayers with oppositely charged PE are particularly regarding. For instance, in bioengineering, the interactions between lipids bilayers and DNA molecules are crucial for gene therapy. 
[28][29][30][31][32] When charged bilayers interact with polyelectrolyte of opposite charge, it is generally accepted that electrostatic interactions induce the bridging of the lipid bilayers by the PE molecules. [33][34][35] The resulting structure of the PE/lipid complexes is a condensed lamellar phase, with PE strands intercalated between the lipids bilayers. However, although most studies provide a general picture for the PE/lipid structure, very few addressed the question of the mechanism of the formation of the complexes or the associated issue of dynamics and intermediate steps for the assemblage process. In addition to our previous work 36 , two noticeable exceptions include the work of Kennedy et al. who found that the order of addition of DNA to cationic lipid or vice versa could affect the size and size distribution of the complexes 31 and that of Boffi et al. who showed that two distinct types of DNA/lipid complexes can be formed depending on the sample preparation procedure. 32 Nevertheless, the determinants for the assembly and dynamics of complex formation remain poorly understood. Under certain conditions, lipids can self-assemble into giant vesicles, the size of living cells. These are very elegant objects that allow manipulation and real time observation with a light microscope [37][38][39][40] and that have opened the way to a wealth of theoretical and experimental investigations. 41 However, unlike experimental work on giant unilamellar vesicles, experimental reports on real-time observation of the effect of a chemical species on the stability and shape changes of a multilamellar vesicles are extremely scarce. [42][43][44][45][46] Nevertheless, as it is demonstrated in this paper, when multilamellar vesicles are used, richer behaviours can be expected, since cooperative effects due to the dense packing of bilayers may play an important role. In the present study, we employ a real-time approach to study the dynamics of the interactions between charged membranes and oppositely charged PE molecules, and monitored by light and confocal microscopy the behavior of multilamellar vesicles (MLVs) made of a synthetic lipid in a concentration gradient of PE. When the gradient is strong enough, the MLV undergoes spectacular morphological transitions, which enable us to visualize the progressive binding of charged bilayers induced by oppositely charged PE molecules. Specifically, these shape transitions lead eventually to the spontaneous formation of a hollow capsule with thick walls that are presumably composed of lipid multilayers condensed with PE molecules. This class of objects may have some potential (bio)technological applications 47 and this contribution could have some significance in mimicking bioprocess. We first present our experimental observations, then describe the mechanisms at play and provide quantitative measurements, based on the fluorescence intensity, which support our physical picture. Finally, we briefly conclude. Experimental Results We use vesicles made of didodecylammonium bromide (DDAB) as synthetic lipid, and an alternating copolymer of styrene and maleic acid in its sodium salt form, as anionic polyelectrolyte (PE). The DDAB bilayers are labeled with a fluorescent surfactant for confocal and fluorescence imaging. We follow by light and confocal microscopy the behavior of the DDAB vesicles when they are submitted to a PE concentration gradient. The Materials and Methods are described in the Supporting Information. 
Interaction between Giant Unilamellar Vesicles and Polyelectrolyte The time-dependent morphological changes of a giant unilamellar vesicle (GUV) are investigated when the GUV is exposed to a concentrated PE solution (30% W/W). The GUV is floppy and fluctuating before interacting with PE. Upon contact with the polyelectrolyte solution, the bilayer becomes tense and the vesicle immediately turns to perfectly spherical and taut. Some patches, that appear very intense in fluorescence, gradually formed on the surface of the GUV. The patches thicken with time (Figure 1). Concomitantly, the size of the GUV decreases. These processes lead ultimately to the collapse of the GUV, resulting in a single small lump made of a compact DDAB/PE complex. The duration of the whole process is of the order of several minutes. Analogous observations have been recently reported for the interaction of GUVs with small unilamellar vesicles 48 , with the matrix protein of a virus 49 , and with a flavonoid of green tea extracts 50 . Interaction between Multilamellar Vesicles and Polyelectrolyte Phase-diagram In sharp contrast to the case of GUVs, the interactions of charged multilamellar vesicles (MLVs) with polyelectrolyte molecules of opposite charge lead to unexpectedly rich phenomena. We interestingly notice that, depending on C PE , the PE concentration, completely different morphological transitions are observed. The "phase" diagram shown in Figure 2 summarizes our experimental findings for MLVs put into contact with different concentrations of PE. Successive peeling events are found when C PE < ~ 2 % as shown in Figure 2A, while concentrated PE (C PE >~10%) induces the appearance of spectacular morphological changes of the MLV (Figure 2B and2C). We confirmed by differential interference contrast and phase contrast microscopy that non-fluorescent MLVs exhibit identical morphological changes, and that all dynamical processes reported below are preserved. C PE Peeling ~ 2 % Layer by layer binding ~ 10 % A) B) C) C PE Peeling ~ 2 % Layer by layer binding Weak Polyelectrolyte Gradient When a MLV is exposed to a diluted PE concentration, the size of the MLV gradually decreases and concomitantly small aggregates formed in the vicinity of the MLV. The MLV is peeled progressively, layer after layer, one DDAB/PE complex being formed for each peeling event, while the interior of the MLV remains always intact. Peeling events proceed until the MLV is completely used up. The final state of the MLV is a pile of small aggregates of size ranging from 2 to 10 μm. The whole consumption of a MLV through the peeling mechanism is a slow process that lasts more than 10 minutes, each peeling events lasting about tens of seconds (Figure 3). We note that the effect of a weak polyelectrolyte gradient on a MLV has been reported previously. 45 However, the novel confocal microscopy pictures given in Figure 3 show unambiguously a single event, which provides a compelling evidence for a peeling mechanism. Strong Polyelectrolyte Gradient In sharp contrast with our observations for a dilute polyelectrolyte solution, for C PE > ~10%, the morphological transitions of a MLV lead to a finite-size cellular object, with water encapsulated in the cells, and whose walls are very likely made of DDAB/PE complexes (Figure 4E). The angles between the thick walls measured in 2-dimentional picture are about 120°, similarly to the angle at which film meet in a three-dimensional dry foam. 
[START_REF] Weaire | The Physics of Foams[END_REF] When the size of the initial MLV is sufficiently small, hollow capsules are eventually obtained (Figures 4B-D), whose size is approximately equal to that of the initial MLV. Although the large-scale structure depends dramatically on the initial PE concentration, the microscopic structure in all cases is a condensed lamellar phase (Figure 4G), as checked by small-angle X-ray scattering (Figure 4F), whose periodicity is of the order of 3.0 nm, hence only slightly larger than the bilayer thickness (2.4 nm). The typical whole sequence of morphological transformation of a MLV when it is exposed to a concentrated PE solution is shown in the time series pictures of Figure 5. Before the MLV starts to deform significantly, the fluorescence intensity inside the vesicle becomes heterogeneous, the higher intensity being localized in the region with higher PE concentration. The surprising buds (Figure 2B-C), composed of well-separated sets of bilayers, form subsequently. Interestingly, we note that the first striated buds form systematically where the PE concentration is lower. The interaction dynamics then speeds up and the MLV is found to experience rapid fluctuations, with the formation of protrusions and buds, while dynamical events can also be distinguished in the core of the vesicle. The initially "full" MLV finally appears essentially devoid of DDAB bilayers: the inside of the resulting object is essentially black with some thick fluorescent strands. This cellular soft object therefore forms a peculiar kind of biliquid foam [53][54][55]. As opposed to our observations for a weak polyelectrolyte gradient, the dynamics is here very fast: the whole sequence lasts less than 1 minute. We finally note that we have performed some additional tests. First, we have done experiments in salted water (with NaBr) instead of pure water. The main observations described above are preserved with a salt concentration of 10⁻³ M. With a NaBr concentration of 10⁻² M, our experimental observations in pure water cannot be reproduced, due to the lack of stability of the MLVs. 56 Secondly, we have also investigated the interaction of MLVs with other polymers, both neutral and charged (as listed in the Supporting Information), and found results similar to those described here only with the polystyrenesulfonate polyanions, thus confirming that attractive electrostatic interactions between the DDAB bilayers and the polymer molecules are a key ingredient for our observations.

Discussion

Due primarily to the strong electrostatic interactions between charged bilayers and oppositely charged polyelectrolyte molecules, DDAB/PE complexes form, whose structure is a condensed lamellar phase. 36,45 The confocal pictures of a GUV interacting with a PE solution (Figure 1) provide a dynamic observation of the formation of these complexes. In this part, we discuss the experimental findings on the formation of DDAB/PE complexes when polyelectrolyte molecules interact with a MLV. As we showed in the experimental section, depending on the PE concentration, the polyelectrolyte molecules interact with a unique bilayer (when the PE gradient is weak) or with the entire stack of bilayers (when the PE gradient is strong).

Interaction between PE and a unique bilayer

Upon contact with a weak PE gradient, a MLV is peeled off gradually.
Each peeling event implies firstly the formation of a pore, which expands until failure of the entire bilayer. We have previously visualized the expansion of a pore by light microscopy. 45 Pore formation in unilamellar vesicles has been observed under different experimental conditions, including application of an electric field [57][58] interaction with proteins 37,50 or with a water-soluble polymer with hydrophobic pendent groups, 59 or attractive interactions with a patterned surface. 60 In our case, pores form because of the adsorption of PE onto the DDAB bilayers due to a strong electrostatic attraction between the two species. In fact, because of these interactions, part of the surface area of the external bilayer may be used up to form PE/lipid complexes. This creates a tension in the bilayer which ruptures above a critical tension, leading to the formation of a pore. The peeling mechanism was previously discussed in details. 45 Interaction between PE and a stack of bilayers PE-induced binding of two bilayers as elementary mechanism We argue that the astonishing structures exhibited upon contact of a multilamellar vesicle with a strong gradient of PE concentration are due to a discrete and progressive binding of the bilayers induced by the PE molecules as they diffuse within the multilayer material. The elementary initial event can be imaged in real-time and is shown in Figure 6. It consists of the budding starting from the outmost bilayer. This structure results from the expulsion of the water that is located between the outmost and the secondary outer bilayers as they rapidly bind to each other due to the bridging of the oppositely charged PE between them. The binding front can be followed by confocal imaging: with time, the binding quickly spreads and the water between the bilayers is driven into a small and spherical water pool. Such events typically last a few seconds, and are faster when the PE concentration gradient is higher. A scheme of the microscopic process is shown in Figure 6E. We note that a temperature-induced binding of bilayers has been observed by light microscopy, but the dynamics could not be followed. 61 The succession of such events, i.e. binding of the secondary outer with the ternary outer bilayers, then binding of the ternary outer with the quaternary outer bilayers, … leads to the striated structures shown Figures 2B,C and 7. These structures originate from the successive formation of water pools, while the core of the MLV remains intact. The further interaction with PE leads to the binding of bilayers in the core of the MLV: the initially homogeneous contrast inside the MLV (Figures 2A,4A) becomes progressively extremely heterogeneous as bilayers bind to each other and leave large portions free of bilayers. This is simultaneously accompanied by more important and erratic shape transformation, which leads ultimately to the formation of a cellular biliquid foam or hollow capsule (Figures 4 and5). More quantitatively, the volumes measured by image analysis can be compared with the volumes evaluated from the simple model (scheme, Figure 6E). We take for the water thickness between DDAB bilayers, 80 nm, the maximum swelling of the lamellar phase (prior to interaction with PE) [START_REF] Dubois | [END_REF] and calculate, for the MLV of Figure 6 (radius 10.7 μm), the volume of the water pool after binding of the outmost and secondary outer bilayer, V c . We find V c =130 μm 3 . 
We compare V c to the volume V m , for the water pockets evaluated from Figures 6C andD. We find V m,6C = 400 μm 3 ≈ 3V c and V m,6D = 530 μm 3 ≈ 4V c , respectively, as expected since 3 and 4 elementary events have occurred respectively in C and D (as clearly distinguished in a movie of the process, movie S1 in SI). Similarly, we measure that the total water volume for 10 bilayers (white circle, Figure 2C) is about 4300 μm 3 , while we calculate that the volume resulting from the binding of two bilayers is about 470 μm 3 , hence roughly 10 times smaller, as expected. The very good agreement between the numerical values confirms the mechanism we propose, and suggests that there is no water release during this process. Quantification of the discrete binding of the bilayers We follow the binding of individual surfactant bilayers into the thick bundles with confocal microscopy (Figure 7), and analyze the fluorescence intensity distribution with Image J. In Figure 8A, we show that the intensity profile, perpendicular to a bilayer, is homogeneous along the bilayer. We define I, the integrated intensity, as the surface area of the peak of the intensity profile. We found that I is constant for all individual bilayers (labeled a to h). The empty symbols in Figure 8C show I along the thick bundle P1-P2 (marked by the crosses). We measure that the intensity increases along the thick bundle from P1 to P2, which precisely reveals a discrete and continuous increase of I as more and more bilayers bind. To quantify this, we add to the intensity (along P1-P2) between bilayers n and n+1 the intensities of all bilayers with labels ranging from n+1 and h (h is the last bilayer). The calculated values are reported as full symbols in Figure 8C. Interestingly, we find that, at any step, these calculated intensities are a very good evaluation of the intensity (full hexagons) for the thickest part of the bundle (closed to P2). In fact, all full symbols Figures 8C are located on a same horizontal line. This provides a further and conclusive evidence of a discrete and progressive binding of bilayers. Furthermore, our calculations demonstrate that the number of individual bilayers that compose a bundle can be evaluated from the fluorescent intensity of the bundle. For instance, by comparison of the intensity of a single bilayer to that of the thickest wall, we evaluate that for the cellular composite material (Figure 4E) the thickest external wall contains ~ 20 bilayers. Kinetics is a key parameter for our observations Importantly, we have noted that the events are polarized, the "budding" always occurring in the point diametrically opposed to the point where PE concentration is higher. This indicates that the binding always starts where PE concentration is higher and that the process is sensitive to the PE concentration gradient. In addition, the formation of buds indicates that the binding kinetics is faster than the diffusion of water across the compact bilayers. These experimental observations are consistent with the fact that the key parameter for this novel observation is to expose MLV to a strong gradient of PE. In addition, the hollow capsules formed have more or less the same size as the initial MLV when the initial MLV is not too large. This supports the fact that the water release, if any, is weak during the whole process, which is in full agreement with a binding kinetics faster than the diffusion of water across the bilayers. 
In addition, our observations imply that the PE molecules penetrate inside the MLV. Very generally, the entry of the PE molecules into a MLV is driven by three forces: electrostatic interactions (between the surfactant headgroups and the maleic acid units of the PE), hydrophobic interactions (between the surfactant tails and the styrene units of the PE) and osmotic pressure (due to the high concentration of PE outside the MLV). Microscopically, the PE molecules may deform the bilayer, weaken the cohesion among the organized DDAB molecules, and create defects; hence membrane subunits may temporarily be separated, allowing the passage of PE molecules (as observed with lipid vesicles in the presence of surfactant) [62][63][64]. We finally note that the penetration of a polymer across a lipid bilayer has recently been observed experimentally, [65][66] in agreement with our experimental findings.

Conclusions

In summary, we have provided experimental data on the kinetics of formation of synthetic charged lipid/polyelectrolyte complexes. By using multilamellar vesicles and a high polyelectrolyte concentration gradient, we were able to visualize by confocal imaging the progressive binding of the charged bilayers as they interact with oppositely charged polyelectrolyte. Although PE/lipid interactions have previously been visualized on the nanometer scale by atomic force microscopy [67][68][69], our experiments constitute, to the best of our knowledge, one of the first observations on the micrometer scale. We have described the microscopic mechanisms at play and have provided quantitative measurements, which support our physical picture. The key parameter for this novel observation is to expose the MLV to a strong gradient of PE. We have indeed demonstrated that a weak gradient induces radically different morphological transitions. Our description of a gradual binding process of charged bilayers induced by oppositely charged polyelectrolyte may shed some light on the more complicated cell membrane behaviors induced by different kinds of charged proteins. Finally, we have also shown that a strong gradient eventually induces the spontaneous evolution of a MLV towards a hollow capsule. Our simple approach may be useful in designing a class of soft composite polyelectrolyte/lipid shells for applications in drug delivery or controlled drug release.

Figure 1. Evolution of the morphology of a giant unilamellar vesicle (GUV) upon contact with a concentrated PE solution (30% W/W). Timing is indicated in white text. The scale is the same for all pictures. Scale bar = 5 μm.
Figure 2. "Phase" diagram of MLV in contact with different PE concentrations, viewed by confocal imaging. Scale bars = 10 μm.
Figure 3. One peeling event of a MLV induced by a diluted PE (0.5% W/W). Timing is indicated in white text. The scale is the same for all pictures. Scale bar = 10 μm.
Figure 4. (A, B) Differential Interference Contrast, (C) fluorescence and (D, E) confocal imaging of (A) a MLV.
Figure 5. Time series showing the shape transformation of a MLV upon contact with a concentrated PE solution.
Figure 6. (A-D) Series of the morphological transformation of a MLV as it interacts with PE.
Figure 7. Pictures showing individual bilayers binding into a thick bundle. Scale bar = 10 μm.

Acknowledgment.
We acknowledge financial support from the CNRS-CEA-DFG-MPIKG Network "Complex fluids: from 3 to 2 dimensions" and from the European Network of Excellence "SoftComp" (NMP3-CT-2004-502235). We thank G. Porte for fruitful discussions. Supporting Information Available: Materials and Methods; Movie showing the initial process of budding formation as a MLV interacts with a concentrated PE solution (30%W/W).
24,970
[ "842989", "975578" ]
[ "737", "737" ]
01754054
en
[ "stat", "qfin" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01754054/file/DeepLearningContSirignano2018.pdf
Justin Sirignano Rama Cont Universal features of price formation in financial markets: perspectives from Deep Learning Using a large-scale Deep Learning approach applied to a high-frequency database containing billions of electronic market quotes and transactions for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model exhibits a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks which are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific. The universal model -trained on data from all stocks -outperforms, in terms of out-of-sample prediction accuracy, asset-specific linear and nonlinear models trained on time series of any given stock, showing that the universal nature of price formation weighs in favour of pooling together financial data from various stocks, rather than designing asset-or sector-specific models as commonly done. Standard data normalizations based on volatility, price level or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks, do not improve training results. On the other hand, inclusion of price and order flow history over many past observations improves forecasting performance, showing evidence of path-dependence in price dynamics. 1 Price formation: how market prices react to supply and demand The computerization of financial markets and the availability of detailed electronic records of order flow and price dynamics in financial markets over the last decade has unleashed TeraBytes of high frequency data on transactions, order flow and order book dynamics in listed markets, which provide us with a detailed view of the high-frequency dynamics of supply, demand and price in these markets [START_REF] Cont | Statistical modeling of high frequency financial data: Facts, models and challenges[END_REF]. This data may be put to use to explore the nature of the price formation mechanism which describes how market prices react to fluctuations in supply and demand. At a high level, a 'price formation mechanism' is a map which represents the relationship between the market price and variables such as price history and order flow: Price(t + ∆t) = F Price history(0...t), Order Flow(0...t), Other Information = F (X t , t ), where X t is a set of state variables (e.g., lagged values of price, volatility, and order flow), endowed with some dynamics, and t is a random 'noise' or innovation term representing the arrival of new information and other effects not captured entirely by the state variables. Empirical and theoretical market microstructure models, stochastic models and machine learning price prediction models can all be viewed as different ways of representing this map F , at various time resolutions ∆t. One question, which has been implicit in the literature, is the degree to which this map F is universal (i.e., independent of the specific asset being considered). 
The generic, as opposed to asset-specific, formulation of market microstructure models seems to implicitly assume such a universality. Empirical evidence on the universality of certain stylized facts [START_REF] Cont | Empirical properties of asset returns: stylized facts and statistical issues[END_REF] and scaling relations [START_REF] Benzaquen | Unravelling the trading invariance hypothesis[END_REF][START_REF] Andersen | Intraday Trading Invariance in the E-Mini S&P 500 Futures Market[END_REF][START_REF] Kyle | Market microstructure invariance: Empirical hypotheses[END_REF][START_REF] Mandelbrot | The multifractal model of asset returns[END_REF] seems to support the universality hypothesis. Yet, the practice of statistical modeling of financial time series has remained asset specific: when building a model for the returns of a given asset, market practitioners and econometricians only use data from the same asset. For example, a model for Microsoft would only be estimated using Microsoft data, and would not use data from other stocks. Furthermore, the data used for estimation is often limited to a recent time window, reflecting the belief that financial data can be 'non-stationary' and prone to regime changes which may render older data less relevant for prediction. Due to such considerations, models considered in financial econometrics, trading and risk management applications are asset-specific and their parameters are (re)estimated over time using a time window of recent data. That is, for asset i at time t the model assumes the form Price i (t + ∆t) = F X i 0:t , t | θ i (t) , where the model parameter θ i (t) is periodically updated using recent data on price and other state variables related to asset i. As a result, data sets are fragmented across assets and time and, even in the high frequency realm, the size of data sets used for model estimation and training are orders of magnitude smaller than those encountered in other fields where Big Data analytics have been successfully applied. This is one of the reasons why, except in a few instances [START_REF] Buhler | Deep hedging[END_REF][START_REF] Dixon | Sequence Classification of the Limit Order Book using Recurrent Neural Networks[END_REF][START_REF] Kolanovic | Big Data and AI Strategies: Machine Learning and Alternative Data Approach to Investing[END_REF][START_REF] Sirignano | Stochastic Gradient Descent in Continuous Time[END_REF][START_REF] Sirignano | Deep Learning for Limit Order Books[END_REF][START_REF] Sirignano | Deep Learning for Mortgage Risk[END_REF], large-scale learning methods such as Deep Learning [START_REF] Goodfellow | Deep Learning[END_REF] have not been deployed for quantitative modeling in finance. In particular, the non-stationarity argument is sometimes invoked to warn against their use. On the other hand, if the relation between these variables were universal and stationary, i.e. if the parameter θ i (t) varies neither with the asset i nor with time t, then one could potentially pool data across different assets and time periods and use a much richer data set to estimate/ train the model. For instance, data on a flash crash episode in one asset market could provide insights into how the price of another asset would react to severe imbalances in order flow, whether or not such an episode has occurred in its history. 
In this work, we provide evidence for the existence of such a universal, stationary relation between order flow and market price fluctuations, using a nonparametric approach based on Deep Learning. Deep learning can estimate nonlinear relations between variables using 'deep' multilayer neural networks which are trained on large data sets using 'supervised learning' methods [START_REF] Goodfellow | Deep Learning[END_REF]. Using a deep neural network architecture trained on a high-frequency database containing billions of electronic market transactions and quotes for US equities, we uncover nonparametric evidence for the existence of a universal and stationary price formation mechanism relating the dynamics of supply and demand for a stock, as revealed through the order book, to subsequent variations in its market price. We assess the model by testing its out-of-sample predictions for the direction of price moves given the history of price and order flow, across a wide range of stocks and time periods. The universal price formation model exhibits a remarkably stable out-of-sample prediction accuracy across time, for a wide range of stocks from different sectors. Interestingly, these results also hold for stocks which are not part of the training sample, showing that the relations captured by the model are universal and not asset-specific. We observe that the neural network thus trained outperforms linear models, pointing to the presence of nonlinear relationships between order flow and price changes. Our paper provides quantitative evidence for the existence of a universal price formation mechanism in financial markets. The universal nature of the price formation mechanism is reflected by the fact that a model trained on data from all stocks outperforms, in terms of out-of-sample prediction accuracy, stock-specific linear and nonlinear models trained on time series of any given stock. This shows that the universal nature of price formation weighs in favour of pooling together financial data from various stocks, rather than designing stock-or sector-specific models as commonly done. Also, we observe that standard data transformations such as normalizations based on volatility or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks, do not improve training results. On the other hand, inclusion of price and order flow history over many past observations improves forecasting performance, showing evidence of path-dependence in price dynamics. Remarkably, the universal model is able to extrapolate, or generalize, to stocks not within the training set. The universal model is able to perform well on completely new stocks whose historical data the model was never trained on. This implies that the universal model captures features of the price formation mechanism which are robust across stocks and sectors. This feature alone is quite interesting for applications in finance where missing data problems and newly issued securities often complicate model estimation. Outline Section 2 describes the dataset and the supervised learning approach used to extract information about the price formation mechanism. Section 3 provides evidence for the existence of a universal and stationary relationship linking order flow and price history to price variations. Section 4 summarizes our main findings and discusses some implications. 
A data-driven model of price formation via Deep Learning

Applications such as image, text, and speech recognition have been revolutionized by the advent of 'Deep Learning' -the use of multilayer ('deep') neural networks trained on large data sets to uncover complex nonlinear relations between high-dimensional inputs ('features') and outputs [START_REF] Goodfellow | Deep Learning[END_REF]. At an abstract level, a deep neural network represents a functional relation y = f(x) between a high-dimensional input vector x and an output y through iterations ('layers') consisting of weighted sums followed by the application of nonlinear 'activation' functions. Each iteration corresponds to a 'hidden layer' and a deep neural network can have many hidden layers. Neural networks can be used as 'universal approximators' for complex nonlinear relationships [START_REF] Hornik | Multilayer Feedforward Networks are Universal Approximators[END_REF], by appropriately choosing the weights in each layer. In supervised learning approaches, network weights are estimated by optimizing a regularized cost function reflecting the in-sample discrepancy between the network output and the desired outputs. In a deep neural network, this represents a high-dimensional optimization over hundreds of thousands (or even millions) of parameters. This optimization is computationally intensive due to the large number of parameters and the large amount of data. Stochastic gradient descent algorithms (e.g., RMSprop or ADAM) are used for training neural networks, and training is parallelized on Graphics Processing Units (GPUs). We apply this approach to learn the relation between supply and demand on an electronic exchange -captured in the history of the order book for each stock -and the subsequent variation of the market price. Our data set is a high-frequency record of all orders, transactions and order cancellations for approximately 1000 stocks traded on the NASDAQ.

Figure 1: The limit order book represents a snapshot of the supply and demand for a stock on an electronic exchange. The 'ask' side represents sell orders and the 'bid' side, buy orders. The size represents the number of shares available for sale/purchase at a given price. The difference between the lowest sell price (ask) and the highest buy price (bid) is the 'spread' (in this example, 1 ¢).

Electronic buy and sell orders are continuously submitted, cancelled and executed through the exchange's order book. A 'limit order' is a buy or sell order for a stock at a certain price; it appears in the order book at that price and remains there until cancelled or executed. The 'limit order book' is a snapshot of all outstanding limit orders and thus represents the visible supply and demand for the stock (see Figure 1). In US stock markets, orders can be submitted at prices occurring at multiples of 1 cent. The 'best ask price' is the lowest sell order and the 'best bid price' is the highest bid price. The best ask price and best bid price are the prices at which the stock can be immediately bought or sold. The 'mid-price' is the average of the best ask price and best bid price. The order book evolves over time as new orders are submitted, existing orders are cancelled, and trades are executed. In electronic markets such as the NASDAQ, new orders may arrive at high frequency -sometimes every microsecond -and order books of certain stocks can update millions of times per day.
This leads to TeraBytes of data, which we put to use to build a data-driven model of the price formation process. When the input data is a time series, causality constraints require that the relation between input and output respects the ordering in time. Only the past may affect the present. A network architecture which reflects this constraint is a recurrent network (see an example in Figure 2) based on Long Short-Term Memory (LSTM) units [START_REF] Gers | Learning to Forget: Continual Prediction with LSTM[END_REF]. Each LSTM unit has an internal state which maintains a nonlinear representation of all past data. This internal state is updated as new data arrives. Our network has 3 layers of LSTM units followed by a final feed-forward layer of rectified linear units (ReLUs). A probability distribution for the next price move is produced by applying a softmax activation function. LSTM units are specially designed to efficiently encode the temporal sequence of data [START_REF] Gers | Learning to Forget: Continual Prediction with LSTM[END_REF][START_REF] Goodfellow | Deep Learning[END_REF].

Figure 2: Recurrent (LSTM) network unrolled over time: at each step, the order book state X_t and the hidden states h_t of the stacked layers are combined to produce the output Y_t.

We train the network to forecast the next price move from a vector of state variables, which encode the history of the order book over many observation lags. The index t represents the number of price changes. At a high level, the LSTM network is of the form

(Y_t, h_t) = f(X_t, h_{t-1}; θ),     (2.1)

where Y_t is the prediction for the next price move, X_t is the state of the order book at time t, h_t is the internal state of the deep learning model, representing information extracted from the history of X up to t, and θ designates the model parameters, which correspond to the weights in the neural network. At each time point t, the model uses the current value of the state variables X_t (i.e. the current order book) and the nonlinear representation of all previous data h_{t-1}, which summarizes relevant features of the history of order flow, to predict the next price move. In principle, this allows for arbitrary history-dependence: the history of the state variables (X_s, s ≤ t) may affect the evolution of the system, in particular price dynamics, at all future times T ≥ t in a nonlinear way. Alternative modeling approaches typically do not allow the flexibility of blending nonlinearity and history-dependence in this manner. A supervised learning approach is used to learn the value of the (high-dimensional) parameter θ by minimizing a regularized negative log-likelihood objective function using a stochastic gradient descent algorithm [START_REF] Goodfellow | Deep Learning[END_REF]. The parameter θ is assumed to be constant across time, so it affects the output at all times in a recursive manner. A stochastic gradient descent step at time t requires calculating the sensitivity of the output to θ, via a chain rule, back through the previous times t-1, t-2, ..., t-T (commonly referred to as 'backpropagation through time'). In theory, backpropagation should occur back to time 0 (i.e., T = t). However, this is computationally impractical and we truncate the backpropagation at some lag T. In Section 3.4, we discuss the impact of the past history of the order book and the 'long memory' of the market. The resulting LSTM network involves up to hundreds of thousands of parameters.
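To make the architecture and the truncated backpropagation concrete, the sketch below shows one possible PyTorch implementation. It is a minimal illustration rather than the authors' code: the number of input features, the layer width, the truncation length and the optimizer settings are placeholder assumptions, and the paper's exact regularization is not reproduced.

```python
import torch
import torch.nn as nn

class OrderBookLSTM(nn.Module):
    """Three stacked LSTM layers, a ReLU feed-forward layer and a softmax output
    over the direction (up/down) of the next price move."""
    def __init__(self, n_features, hidden=50, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=3, batch_first=True)
        self.ff = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x, state=None):
        # x: (batch, sequence length, n_features) order book snapshots at price-change times
        h, state = self.lstm(x, state)
        h = torch.relu(self.ff(h))
        return self.out(h), state          # logits; the softmax is applied inside the loss

model = OrderBookLSTM(n_features=40)       # e.g. 10 levels x (bid/ask price and size): placeholder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_on_sequence(x_seq, y_seq, bptt=100):
    """Truncated backpropagation through time: the hidden state is carried across
    chunks of length `bptt` but detached, so gradients flow back at most `bptt` steps.
    y_seq holds integer class labels (0 = down move, 1 = up move)."""
    state = None
    for start in range(0, x_seq.size(1), bptt):
        x = x_seq[:, start:start + bptt]
        y = y_seq[:, start:start + bptt]
        optimizer.zero_grad()
        logits, state = model(x, state)
        loss = loss_fn(logits.reshape(-1, 2), y.reshape(-1))
        loss.backward()
        optimizer.step()
        state = tuple(s.detach() for s in state)
```

Even this modest configuration has several tens of thousands of trainable weights; with 150 units per layer, as used for the universal model discussed below, the count reaches several hundred thousand.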
A parameter count of this order is relatively small compared to the networks used, for instance, in image or speech recognition, but it is huge compared to the econometric models traditionally used in finance. Previous literature has been almost entirely devoted to linear models or stochastic models with a very small number of parameters. It is commonly believed that financial data is far too noisy to build such large models without overfitting; our results show that this is not necessarily true. Given the size of the data and the large number of network parameters to be learned, significant computational resources are required both for pre-processing the data and for training the network. Training of deep neural networks can be highly parallelized on GPUs. Each GPU has thousands of cores, and training is typically 10× faster on a GPU than on a standard CPU. The NASDAQ data was filtered to create training and test sets. This data processing is parallelized over approximately 500 compute nodes. Training of asset-specific models was also parallelized, with each stock assigned to a single GPU node. Approximately 500 GPU nodes are used to train the stock-specific models. These asset-specific models, trained on the data related to a single stock, were then compared to a 'universal model' trained on the combined data from all the stocks in the dataset. Data from various stocks were pooled together for this purpose without any specific normalization. Due to the large amount of data, we distributed the training of the universal model across 25 GPU nodes using asynchronous stochastic gradient descent (Figure 3). Each node loads a batch of data (selected at random from all stocks in the dataset), computes gradients of the model on the GPU, and then updates the model. Updates occur asynchronously, meaning that node j updates the model without waiting for nodes i ≠ j to finish their computations.

Results

We split the universe of stocks into two groups of roughly 500 stocks; training is done on transactions and quotes for stocks from the first group. We distinguish:
• stock-specific models, trained using data on all transactions and quotes for a specific stock;
• the 'universal model', trained using data on all transactions and quotes for all stocks in the training set.
All models are trained to predict the direction of the next price move. Specifically, if τ_1, τ_2, ... are the times at which the mid-price P_t changes, we estimate P[P_{τ_{k+1}} − P_{τ_k} > 0 | X_{τ_{0:k}}]
That is, the accuracy is evaluated on time periods outside of the training set. Model accuracy is reported via the cross-sectional distribution of the accuracy score A i across stocks in the testing sample, and models are compared by comparing their accuracy scores. In addition, we evaluate the accuracy of the universal model for stocks outside the training set. Importantly, this means we assess forecast accuracy for stock i using a model which is trained without any data on stock i. This tests whether the universal model can generalize to completely new stocks. Typically, the out-of-sample dataset is a 3-month time period. In the context of highfrequency data, 3 months corresponds to millions of observations and therefore provides a lot of scope for testing model performance and estimating model accuracy. In a data set with no stationary trend (as in the case at such high frequencies), a random forecast ('coin-flip') would yield an expected score of 50%. Given the large size of the data set, even a small deviation (i.e. 1%) from this 50% benchmark is statistically significant. The main findings of our data-driven approach may be summarized as follows: • Nonlinearity: Data-driven models trained using deep learning substantially outperform linear models in terms of forecasting accuracy (Section 3.1). • Universality: The model uncovers universal features that are common across all stocks (Section 3.2). These features generalize well: they are also observed to hold for stocks which are not part of the training sample. • Stationarity: model performance in terms of price forecasting accuracy is remarkably stable across time, even a year out of sample. This shows evidence for the existence of a stationary relationship between order flow and price changes (Section 3.3), which is stable over long time periods. • Path-dependence and long-range dependence: inclusion of price and order flow history is shown to substantially increase the forecast accuracy. This provides evidence that price dynamics depend not only on the current or recent state of the limit order book but on its history, possibly over long time scales (Section 3.4). Our results show that there is far more common structure across data from different financial instruments than previously thought. Providing a suitably flexible model is used which allows for nonlinearity and history-dependence, data from various assets may be pooled together to yield a data set large enough for deep learning. Deep Learning versus Linear Models Linear state space models, such as Vector Autoregressive (VAR) models, have been widely used in the modeling of high frequency data and in empirical market microstructure research [START_REF] Hasbrouck | Empirical Market Microstructure: The Institutions[END_REF] and provide a natural benchmark for evaluating the performance of a forecast. Linear models are easy to estimate and capture in a simple way the trends, linear correlations and autocorrelations in the state variables. The results in Figure 4 show that the deep learning models substantially outperform linear models. Given the large sample size, an increase of 1% in accuracy is considered significant in the context of high-frequency modeling. The linear (VAR) model may be formulated as follows: at each observation we update a vector of linear features h t and then use a probit model for the conditional probability of an upward price move given the state variables: h t = Ah t-1 + BX t , Y t = P(∆P t > 0|X t , h t ) = G(CX t + Dh t ). 
(3.-1) where G depends on the distributional assumptions on the innovations in the linear model. For example, if we use a logistic distribution for the innovations in the linear model, then the probability distribution of the next price move is given by softmax (logistic) function applied to a linear function of the current order book and linear features: P(∆P t > 0|X t , h t ) = Softmax(CX t + Dh t ). We compare the neural network against a linear model for approximately 500 stocks. To compare models we report the difference in accuracy scores across the same test data set. Let • L i be the accuracy of the stock-specific linear model g θ i for asset i estimated on data only from stock i, • Âi be the accuracy of the stock-specific deep learning model f θ i trained on data only from stock i, and • A i be the accuracy for asset i of the universal deep learning model f θ trained on a pooled data set of all quotes and transactions for all stocks. The left plot in Figure 4 reports the cross-sectional distribution for the increase in accuracy Âi -L i when moving from the stock-specific linear model to the stock-specific deep learning model. We observe a substantial increase in accuracy, between 5% to 10% for most stocks, when incorporating nonlinear effects using the neural networks. The right plot in Figure 4 displays histograms of A i (red) and L i (blue). We clearly observe that moving from a stock-specific linear model to the universal nonlinear model trained on all stocks substantially improves the forecasting accuracy by around 10%. The deep neural network outperforms the linear model since it is able to estimate nonlinear relationships between the price dynamics and the order book, which represents the visible supply and demand for the stock. This is consistent with an abundant empirical and econometric literature documenting nonlinear effects in financial time series, but the large amplitude of this improvement can be attributed to the flexibility of the neural network in representing nonlinearities. More specifically, sensitivity analysis of our data-driven model uncovers stable nonlinear relations between state variables and price moves, i.e. nonlinear features which are useful for forecasting. Figure 5 presents an examples of such a feature: the relation between the depth on the bid and ask sides of the order book and the probability of a price decrease. Such relations have been studied in queueing models of limit order book dynamics [START_REF] Cont | A stochastic model for order book dynamics[END_REF][START_REF] Cont | Price dynamics in a Markovian limit order market[END_REF]. In particular, it was shown in [START_REF] Cont | Price dynamics in a Markovian limit order market[END_REF] that when the order flow is symmetric then there exists a 'universal' relation -not dependent on model parameters -between bid depth, ask depth and the probability of a price decrease at the next price move. However, the derivations in these models hinge on many statistical assumptions which may or may not hold, and the universality of such relations remained to be empirically verified. Our analysis shows that there is indeed evidence for such a universal relation, across a wide range of assets and time periods. Figure 5 (left) displays the probability of a price decrease as a function of the depth (the number of shares) at the best bid/ask price. The larger the best ask size, the more likely the next price prove will be downwards. 
The probability is approximately constant along the center diagonal where the bid/ask imbalance is zero. However, as observed in queueing models [START_REF] Cont | A stochastic model for order book dynamics[END_REF][START_REF] Cont | Price dynamics in a Markovian limit order market[END_REF][START_REF] Figueroa-Lopez | One-level limit order book model with memory and variable spread[END_REF], even under simplifying assumptions, the relation between this probability and various measures of the bid/ask imbalance is not linear. Furthermore, such queueing models typically focus on the influence of depth at the top of the order book and it is more difficult to extract information from deeper levels of the order book. The right contour plot in Figure 5 displays the influence of limit orders deeper in the order book (here: total size aggregated across levels 5 to 10) on the probability of a price decrease. We see that the influence is less than the depth at the top of the book, as illustrated by the tighter range of predicted probabilities, but still significant. Universality across assets A striking aspect of our results is the stability across stocks of the features uncovered by the deep learning model, and its ability to extrapolate ('generalize') to stocks which it was not trained on. This may be illustrated by comparing forecasting accuracy of stock-specific models, trained only on data of a given stock, to a universal model trained on a pooled data set of 500 stocks, a much larger but extremely heterogeneous data set. As shown in Figure 6, which plots A i -Âi , the universal model consistently outperforms the stock-specific models. This indicates there are common features, relevant to forecasting, across all stocks. Features extracted from data on stock A may be relevant to forecasting of price moves for stock B. Given the heterogeneity of the data, one might imagine that time series from different stocks should be first normalized (by average daily volume, average price or volatility etc.) before pooling them. Surprisingly, this appears not to be the case: we have observed that standard data transformations such as normalizations based on average volume, volatility or average spread, or partitioning the training data into sectors or categories such as large/small tick stocks do not improve training results. For example, a deep learning model trained on small tick stocks does not outperform the universal model in terms of forecasting price moves for small tick stocks. It appears that the model arrives at its own data-driven normalization of inputs based on statistical features of the data rather than ad hoc criteria. The source of the universal model's outperformance is well-demonstrated by Figure 7. The universal model most strongly outperforms the stock-specific models on stocks with less data. The stock-specific model is more exposed to overfitting due to the smaller dataset while the universal model is able to generalize by interpolating across the rich scenario space of the pooled data set and therefore is less exposed to overfitting. So, the existence of these common features seems to argue for pooling the data from different stocks, notwithstanding their heterogeneity, leading to a much richer and larger set of training scenarios. Using 1 year of the pooled data set is roughly equivalent to using 500 years (!) of data for training a single-stock model and the richness of the scenario space is actually enhanced by the diversity and heterogeneity of behavior across stocks. 
Due to the large amount of data, very large universal models can be estimated without overfitting. Figure 8 shows the increase in accuracy for a universal model with 150 units per layer (which amounts to several hundred thousand parameters) versus a universal model with 50 units per layer. Remarkably, the universal model is even able to generalize to stocks which were not part of the training sample: if the model is only trained on data from stocks {1, . . . , N }, its forecast accuracy is similar for stock N +1. This implies that the universal model is capturing features in the relation between order flow and price variastions which are common to all stocks. Table 1 illustrates the forecast accuracy of a universal model trained only on stocks 1-464 (for January 2014-May 2015), and tested on stocks 465-489 (for June-August 2015). This universal model outperforms stock-specific models for stocks 465 -489, even though the universal model has never seen data from these stocks in the training set. The universal model trained only on stocks 1 -464 performs roughly the same for stocks 465 -489 as the universal model trained on the entire dataset of stocks 1 -489. Results are reported in Table 1. Figure 10 displays the accuracy of the universal model for 500 completely new stocks, which are not part of the training sample. The universal model achieves a high accuracy on these new stocks, demonstrating that it is able to generalize to assets that are not included in the training data. This is especially relevant for applications, where missing data issues, stock splits, new listings and corporate events constantly modify the universe of stocks. Stationarity The relationships uncovered by the deep learning model are not only stable across stocks but also stationary in time. This is illustrated by examining how forecast accuracy behaves when the training period and test period are separated in time. Figure 10 shows the accuracy of the universal model on 500 stocks which were not part of the training sample. The left histogram displays the accuracy in June-August, 2015, shortly after the training period (January 2014-May 2015), while the right plot displays the cross-sectional distribution of accuracy for the same model in January-March, 2017, 18 months after the training period. Interestingly, even one year after the training period, the forecasting accuracy is stable, without any adjustments. Such stability contrasts with the common practice of 'recalibrating' models based on a moving window of recent data due to perceived non-stationarity. If the data were nonstationary, accuracy would decrease with the time span separating the training set and the prediction period and it would be better to train models only on recent periods immediately before the test set. However, we observe that this is not the case: Table 2 reports forecast results for models trained over periods extending up to 1, 3, 6, and 19 months before the test set. Model accuracy consistently increases as the length of the training set is increased. The message is simple: use all available data, rather than an arbitrarily chosen time window. Note that these results are not incompatible with the data itself being non-stationary. The stability we refer to is the stability of the relation between the inputs (order flow and price history) and outputs (forecasts). If the inputs themselves are non-stationary, the output will be non-stationary but that does not contradict our point in any way. 
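The experiment behind Table 2 can be organised as a simple loop over training windows of increasing length that all end immediately before a fixed, later test period. The sketch below is illustrative only: the `fit` and `accuracy` callables, the 'label' column name and the use of pandas time slicing are assumptions, not part of the original study.

```python
# Illustrative sketch of the training-window-length experiment (cf. Table 2).
import pandas as pd

def window_experiment(df, train_end, test_start, test_end, months_list,
                      fit, accuracy):
    """df: DataFrame indexed by timestamp, with feature columns and a 'label'
    column (direction of the next price move).
    fit(train_df) -> model;  accuracy(model, test_df) -> float in [0, 1]."""
    test = df.loc[test_start:test_end]
    results = {}
    for months in months_list:                     # e.g. [1, 3, 6, 19]
        start = pd.Timestamp(train_end) - pd.DateOffset(months=months)
        model = fit(df.loc[start:train_end])       # window ends just before the test set
        results[months] = accuracy(model, test)
    return results                                 # out-of-sample accuracy per window length
```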
Path-dependence Statistical modeling of financial time series has been dominated by Markovian models which, for reasons of analytical tractability, assume that the evolution of the price and other state variables only depends on their current value and there is no added value to including their history beyond one lag. There is a trove of empirical evidence going against this hypothesis, and pointing to long-range dependence in financial time series [START_REF] Bacry | Continuous cascade models for asset returns[END_REF][START_REF] Lillo | The long memory of the efficient market[END_REF][START_REF] Mandelbrot | The multifractal model of asset returns[END_REF]. Our results are consistent with these findings: we find that the history of the limit order book contains significant additional information beyond that contained in its current state. Figure 11 shows the increase in accuracy when using an LSTM network, which is a function of the history of the order book, as compared with a feedforward neural network, which is only a function of the most recent observation (a Markovian model). The LSTM network, which incorporates temporal dependence, significantly outperforms the Markovian model. The accuracy of the forecast also increases when the network is provided with a longer history as input. Figure 12 displays the accuracy of the LSTM network on a 5, 000-step sequence minus the accuracy of the LSTM network on a 100-step sequence. Recall that a step ∆ k = τ k+1 -τ k is on average 1.7 seconds in the dataset so 5000 lags corresponds to 2 hours on average. There is a significant increase in accuracy, indicating that the deep learning model is able to find relationships between order flow and price change events over long time periods. Our results show that there is significant gain in model performance from including many lagged values of the observations in the input of the neural network, a signature of significant -and exploitable -temporal dependence in order book dynamics. Discussion Using a Deep Learning approach applied to a large dataset of billions of orders and transactions for 1000 US stocks, we have uncovered evidence of a universal price formation mechanism relating history of the order book for a stock to the (next) price variation for that stock. More importantly, we are able to learn this mechanism through supervised training of a deep neural network on a high frequency time series of the limit order book. The resulting model displays several interesting features: Figure 11: Comparison of out-of-sample forecast accuracy of a LSTM network with a feedforward neural network trained to forecast the direction of next price move based on the current state of the limit order book. Cross-sectional results for 500 stocks for test period June-August, 2015. Figure 12: Out-of-sample increase in accuracy when using a 5000-step sequence versus a 100-step sequence, across 1, 000 stocks. Test period : June-August 2015. • Universality: the model is stable across stocks and sectors, and the model trained on all stocks outperforms stock-specific models, even for stocks not in the training sample, showing that features captured are not stock-specific. • Stationarity: model performance is stable across time, even a year out of sample. • Evidence of 'long memory' in price formation: including order flow history as input, even up to several hours, improves prediction performance. • Generalization: the model extrapolates well to stocks not included in the training sample. 
This is especially useful since it demonstrates its applicability to recently listed instruments or those with incomplete or short data histories. Our results illustrate the applicability and usefulness of Deep Learning methods for modeling of intraday behavior of financial markets. In addition to the fundamental insights they provide on the nature of price formation in financial markets, these findings have practical implications for model estimation and design. Training a single universal model is orders of magnitude less complex and costly than training or estimating thousands of single-asset models. Since the universal model can generalize to new stocks (without training on their historical data), it can also be applied to newly issued stocks or stocks with shorter data histories.

Figure 2: Architecture of a recurrent neural network.

Figure 4: Comparison of a deep neural network with linear models. Models are trained to predict the direction {-1, +1} of next mid-price move. Comparison for approximately 500 stocks and out-of-sample results reported for June-August, 2015. Left-hand figure: increase in accuracy of stock-specific deep neural networks versus stock-specific linear models. Right-hand figure: accuracy of a universal deep neural network (red) compared to stock-specific linear models (blue).

Figure 5: Left: relation between depth at the bid, depth at the ask and the probability of a price decrease. The x-axis and y-axis display the quantile level corresponding to the observed bid and ask depth. Right: Contour plot displaying the influence of levels deeper in the order book (5 to 10) on the probability of a price decrease.

Figure 6: Out-of-sample forecasting accuracy of the universal model compared with stock-specific models. Both are deep neural networks with 3 LSTM layers followed by a ReLU layer. All layers have 50 units. Models are trained to predict the direction of the next move. Comparison across 489 stocks, June-August, 2015.

Figure 7: Increase in out-of-sample forecast accuracy (in %) of the universal model compared to stock-specific models, as a function of the size of the training set for the stock-specific model (normalized by total sample size, N = 24.1 million). Models are trained to predict the direction of next price move. Comparison across 500 stocks, June-August, 2015.

Figure 8: Comparison of two universal models: a 150 unit per layer model versus a 50 unit per layer model. Models are trained to predict direction {-1, +1} of next mid-price move. Out-of-sample prediction accuracy for direction of next price move, across approximately 500 stocks (June-August, 2015).

Figure 9: Performance on approximately 500 new stocks which the model has never seen before. Out-of-sample accuracy reported for June-August, 2015. Universal model trained during time period January 2014-May 2015.

Figure 10: Performance on 500 new stocks which the model has never seen before. Left: out-of-sample accuracy reported for June-August, 2015. Right: out-of-sample accuracy reported for January-March, 2017. Universal model trained on data from January 2014-May 2015.

Table 1: Comparison of universal model trained on stocks 1-464 versus (1) stock-specific models for stocks 465-489 and (2) universal model trained on all stocks 1-489. Models are trained to predict direction of next mid-price move.
The second column shows the fraction of stocks where the universal model trained only on stocks 1-464 outperforms models (1) and (2). The third column shows the average increase in accuracy. Comparison for 25 stocks and out-of-sample results reported for June-August, 2015.

Model            Comparison   Average increase in accuracy
Stock-specific   25/25        1.45%
Universal        4/25         -0.15%

Table 2: Out-of-sample forecast accuracy of deep learning models trained on the entire training set (19 months) vs. deep learning models trained for shorter time periods immediately preceding the test period, across 50 stocks (test period: June-August, 2015). Models are trained to predict the direction of the next price move. The second column shows the fraction of stocks for which the 19-month model outperforms models trained on shorter time periods. The third column shows the average increase in accuracy across all stocks.

Historical order book data was reconstructed from NASDAQ Level III data using the LOBSTER data engine[START_REF] Huang | LOBSTER: Limit Order Book Reconstruction System[END_REF]. We thank participants at the London Quant Summit 2018, JP Morgan and Princeton University for their comments. Computations for this paper were performed using a grant from the CFM-Imperial Institute of Quantitative Finance and the Blue Waters supercomputer grant "Distributed Learning with Neural Networks".
43,265
[ "1030004" ]
[ "303576", "542130" ]
01754055
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01754055/file/Blavette--Influence%20of%20the%20wave%20dispersion_EWTEC2017-revised_v4.pdf
Anne Blavette email: anne.blavette@ens-rennes.fr Thibaut Kovaltchouk email: thibaut.kovaltchouk@ac-reims.fr François Rongère email: francois.rongere@ec-nantes.fr Marilou Jourdain De Thieulloy Paul Leahy email: paul.leahy@ucc.ie Bernard Multon email: bernard.multon@ens-rennes.fr H Ben Ahmed email: hamid.benahmed@ens-rennes.fr Marilou Jourdain Hamid Ben Influence of the wave dispersion phenomenon on the flicker generated by a wave farm Keywords: Flicker, aggregation effect, hydrodynamic modelling, time delay-based approach à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Influence of the wave dispersion phenomenon on the flicker generated by a wave farm I. INTRODUCTION The inherently fluctuating nature of waves may be reflected to some extent in the power output of wave energy converters (WECs). These fluctuations can induce voltage fluctuations which can potentially generate flicker [START_REF] Molinas | Power Smoothing by Aggregation of Wave Energy Converters for Minimizing Electrical Energy Storage Requirements[END_REF]- [START_REF] Kovaltchouk | Wave farm flicker severity: Comparative analysis and solutions[END_REF] Hence, wave farm managers will be required to demonstrate that their farm is compliant with grid codes and similar regulations, in order to be granted grid connection. This is usually performed through grid impact assessment studies by means of numerical power system simulators such as DIgSILENT PowerFactory [START_REF]DIgSILENT PowerFactory[END_REF], PSS®E [START_REF] Pss®e | [END_REF], etc. Hence, numerical models of the considered WEC(s) are necessary. These models can be based on experimental data in the form of electrical power output time series [START_REF] Blavette | Impact of a Medium-Size Wave Farm on Grids of Different Strength Levels[END_REF] or on socalled "wave-to-wire" models which compute this type of data from the (usually simulated) sea surface elevation. A significant number of such "wave-to-wire" models have been developed, as reviewed in [START_REF] Penalba | A Review of Wave-to-Wire Models for Wave Energy Converters[END_REF]. However, few of them have considered arrays of WECs as described in [START_REF] Forehand | A Fully Coupled Wave-to-Wire Model of an Array of Wave Energy Converters[END_REF]. Regarding these latter, different approaches have been used. The most comprehensive approach consists in simulating the sea surface elevation at each node of the wave farm where a WEC is located, taking into account the wave dispersion phenomenon, as well as the hydrodynamic interactions between WECs due to radiation and diffraction. Using this approach is very heavy from a computational perspective and should be restricted to simulating the output power of a wave farm whose WECs are closely located. In the case where the WECs are sufficiently far away from each other so that their hydrodynamic interactions can be considered as negligible, a second approach should be used which consists in calculating the sea surface elevation at each node of the wave farm where a WEC is located without taking into account the radiation and diffraction due to the neighbouring WECs. Finally, a third simplified approach has been widely used in the electrical engineering community in flicker studies focussing on wave farms. This approach consists in calculating the output power of an entire farm based on the power profile of a single WEC. 
This power profile serves as a reference to which a random time delay is applied for each WEC in order to model the device aggregation effect [1]- [START_REF] Kovaltchouk | Wave farm flicker severity: Comparative analysis and solutions[END_REF], which will be described in Section C.2. The computational effort regarding this latter approach is extremely light with respect to the other two approaches. However, questions remain concerning its physical validity, as this approach does not take into account the wave dispersion phenomenon. The objective of this paper is to tackle this question by comparing the flicker level obtained from the second and the third approaches described in this section. Sufficiently distantly located WECs will be considered in order to neglect the inter-WECs hydrodynamic interactions. The farm output power is then injected in a local electrical grid model developed under PowerFactory to compute the corresponding voltage profile at the Point of Common Coupling (PCC). This voltage profile serves as input to a flickermeter from which the associated short-term flicker level is computed. The modelling hypotheses will be described in Section II and the results in Section III. In Section IV, the conclusions will be detailed. The results of the comparative study will contribute in defining the required level of hydrodynamic detail necessary for simulating the output power of a wave farm when it is to be used for flicker analyses. II. MODELLING HYPOTHESES A. Hydrodynamic simulation The hydrodynamic model is based on linear wave theory and simulates wave field from a superposition of Airy waves obtained through discretising a JONSWAP spectrum and using random phases. Contrary to the time delay-based method, the wave dispersion phenomenon is taken into account here. By discretising the wave spectrum 𝑆(𝜔) using 𝑛 regularly spaced frequency components, the amplitude of each elementary wave component is given by [START_REF] Faltinsen | Sea loads on ships and offshore structures[END_REF] as: 𝑎(𝜔 𝑖 ) = 𝑎 𝑖 = √2𝑆(𝜔 𝑖 )δ𝜔 (1) Each wave component then represents a complex elementary free surface elevation at horizontal position (𝑥, 𝑦) and time 𝑡: 𝜂 ̃𝑖(𝑥, 𝑦, 𝑡) = 𝑎 𝑖 𝑒 𝑗[𝑘 𝑖 (𝑥 cos 𝜃 ̅ +𝑦 sin 𝜃 ̅ ) -𝜔 𝑖 𝑡 + 𝜑 𝑖 ] (2 ) where 𝜃 ̅ is the mean wave direction of the mono-directional wave field, 𝜑 𝑖 ∈ [0, 2𝜋[ is the random phase of the 𝑖 th wave component chosen at wave field initialisation and 𝑘 𝑖 is the wave number, which is solution of the dispersion relation given by 𝜔 𝑖 2 = 𝑔𝑘 𝑖 tanh 𝑘 𝑖 ℎ (3) with ℎ being the mean water depth at position (𝑥, 𝑦). If the water depth can be considered as infinite, the relation degenerates to 𝜔 𝑖 2 = 𝑔𝑘 𝑖 . Summing the 𝑛 contributions gives the free surface elevation: This linear wave field modelling is then integrated into a linear framework for wave structure interactions that relies on hydrodynamics coefficients obtained from the linear potential flow theory. Note that in this study, no hydrodynamic interactions between the scattered wave fields are taken into account so that only coefficients for one isolated body are to be calculated using the seakeeping software NEMOH [START_REF] Babarit | Theoretical and numerical aspects of the open source BEM solver NEMOH[END_REF] which is based on the Boundary Element Method (BEM). 
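A minimal numerical version of this wave-field synthesis is sketched below, assuming the discretised spectrum S(ω_i) is supplied on a regular frequency grid: component amplitudes follow eq. (1), the dispersion relation (3) is solved by a simple fixed-point iteration (the solver choice is ours, not the paper's), and the real part of the summed components gives the free surface elevation at a given position and time.

```python
# Illustrative sketch of linear wave-field synthesis from a discretised spectrum.
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def wave_numbers(omega, depth):
    """Solve omega^2 = g*k*tanh(k*h) for each component by fixed-point
    iteration, starting from the deep-water guess k = omega^2/g."""
    k = omega**2 / G
    for _ in range(50):
        k = omega**2 / (G * np.tanh(k * depth))
    return k

def surface_elevation(S, omega, x, y, t, depth, theta=0.0, rng=None):
    """Free surface elevation eta(x, y, t) from spectrum samples S(omega_i)
    on a regularly spaced grid omega, following eqs. (1)-(3) above."""
    rng = np.random.default_rng(rng)
    d_omega = omega[1] - omega[0]
    a = np.sqrt(2.0 * S * d_omega)                      # amplitudes, eq. (1)
    phi = rng.uniform(0.0, 2.0 * np.pi, omega.size)     # random phases at initialisation
    k = wave_numbers(omega, depth)
    phase = k * (x * np.cos(theta) + y * np.sin(theta)) - omega * t + phi
    return np.sum(a * np.cos(phase))                    # real part of the summed components
```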
The time domain linear excitation force applying to a body having position (𝑥, 𝑦) is then obtained by the superposition of the excitation generated by each wave component as: 𝐹 𝑒𝑥 (𝑥, 𝑦, 𝑡) = 𝑅𝑒 {∑ 𝐹 ̃𝑒𝑥 (𝜔 𝑖 )𝜂 ̃𝑖(𝑥, 𝑦, 𝑡) 𝑛 𝑖=1 } (5) Sea-states were simulated for significant heights equal to 1 m and 3 m, as well as for peak periods equal to 7 s, 9 s, 10 s and 12 s. B. Wave device The wave farm is composed of identical heaving buoys controlled passively and described in a previous paper [START_REF] Kovaltchouk | Influence of control strategy on the global efficiency of a Direct Wave Energy Converter with Electric Power Take-Off[END_REF]. As the focus of this paper is on the comparison of two methods for modelling the device aggregation effect on flicker, a simple, passive control strategy was adopted for the WEC. It consists of the application of a constant damping factor as a function of the sea-state characteristics (significant wave height 𝐻 𝑠 and 𝑇 𝑝 ). This damping factor is optimised with respect to a given sea-state during a preliminary offline study. For the sake of realism, levelling is applied on the power takeoff (PTO) force, which is limited to 1 MN, and on the output electrical power, which is limited to 1 MW. Each WEC is connected to the offshore grid through a fully rated back-toback power electronic converter. C. Wave farm 1) Wave farm layout The wave farm considered in this study is considered to be composed of 24 of the devices described in the previous section. All these devices are deemed identical in terms of hydromechanical and electro-mechanical properties. They are placed at a distance 𝑑 of each other, on 3 rows and 8 columns facing the incoming waves, as shown in Fig. 1. The inter-WEC distance 𝑑 is supposed to be sufficient so that the hydrodynamic interactions between the devices can be considered as negligible. In this research work, it was assumed equal to 600 m in the full hydrodynamic approach, while it is made approximately equal to 600 m in the timedelay based approach, as it will be described in Section C.4 [START_REF] Babarit | Impact of long separating distances on the energy production of two interacting wave energy converters[END_REF]. 2) Introduction to the two approaches studied here The wave farm power output is computed as the sum of the power output of all the WECs composing the farm. As the temporal profile of the sea surface elevation at a given node in the wave farm is not expected to be identical to this corresponding to another node, it is not expected either that two power output temporal profiles from two different WECs could be identical. Hence, the power output of a wave farm cannot be computed as the product of the power output of a single device times the number of WECs composing the farm. Also, the fact that different WECs achieve peak power at different times leads to a reduced peak-to-average ratio of the wave farm power output compared to that of a single device. This is illustrated in Fig. 2 which shows the temporal power output profile for a single WEC and for the wave farm composed of 24 devices normalised by their respective average value. While the peak-to-average ratio is equal to 3.6 for the single WEC (even though its power output is limited to 1 MW), it is equal to 1.9 for the wave farm. This decrease in the peak-to-average ratio implies that the temporal power output profile is "smoother" in the case of a wave farm than in the case of a single WEC which is usually referred to as the device aggregation effect. 
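The passive control and levelling of the wave device described in Section II.B can be illustrated with a short sketch. The damping value, the velocity signal and the neglect of conversion-chain losses are assumptions made only for illustration; only the two saturations (1 MN on the PTO force, 1 MW on the electrical power) come from the text.

```python
# Minimal sketch of a passively damped PTO with force and power levelling.
import numpy as np

F_MAX = 1.0e6   # PTO force limit (N)
P_MAX = 1.0e6   # electrical power limit (W)

def pto_power(velocity, B):
    """velocity: buoy heave velocity samples (m/s); B: constant damping factor
    (N.s/m), chosen offline as a function of the sea state (Hs, Tp).
    Returns the levelled electrical power output (W)."""
    force = np.clip(-B * velocity, -F_MAX, F_MAX)   # passive damping force, levelled at 1 MN
    power = -force * velocity                        # power absorbed from the buoy motion
    return np.clip(power, 0.0, P_MAX)                # electrical output levelled at 1 MW
```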
The objective of this paper was to determine whether a full hydrodynamic simulation was required to model this effect on flicker, or whether a simplified, time delay-based method was sufficient. These two approaches are described in the next sections. Fig. 2 Temporal power output profile (over 100s) for a single WEC (blue) and for the wave farm composed of 24 devices (pink) for significant wave height 𝐻 𝑠 =3 m and peak period 𝑇 𝑝 =7 s. The profiles are normalised with respect to their average value. 2.1) Full hydrodynamic simulation In this approach, the wave excitation force at each node of the wave farm where a WEC is located is computed by means of the code described in Section A. Then, the power output of each WEC is computed based on its corresponding excitation force temporal profile. Hence, the power output 𝑃 𝑓𝑎𝑟𝑚 of the wave farm corresponds to the algebraic sum of the power output 𝑃 𝑊𝐸𝐶𝑖 of each WEC 𝑖, such as: 𝑃 𝑓𝑎𝑟𝑚 (𝑡) = ∑ 𝑃 𝑊𝐸𝐶𝑖 (𝑡) 24 𝑖=1 (7) 2.2) Time delay-based method The time delay-based method requires only a single power output temporal profile 𝑃 𝑊𝐸𝐶1 of a single WEC, to which different uniformly distributed random time delays ∆𝑡 𝑖 are applied to represent the power output of the other devices composing the farm. Hence, the wave farm output power can be expressed as: 𝑃 𝑓𝑎𝑟𝑚 (𝑡) = ∑ 𝑃 𝑊𝐸𝐶1 (𝑡 + ∆𝑡 𝑖 ) 24 𝑖=1 [START_REF]DIgSILENT PowerFactory[END_REF] where ∆𝑡 𝑖 models the fictive propagation of a wave group whose envelope characteristics are independent of the travelled distance. This means that the wave dispersion phenomenon is not taken into account here. This physical effect implies that in dispersive media such as water, the travel speed of a sine wave is linked to its frequency through the dispersion relationship which, for deep water waves, can be expressed as [START_REF] Falnes | Ocean waves and oscillating systems[END_REF]: 𝜔 2 = 𝑔𝑘 (9) as mentioned earlier. Term 𝛥𝑡 𝑖 is assumed equal to: 𝛥𝑡 𝑖 = 𝑑 𝑡𝑑 𝑣 𝑔 = 4𝜋𝑑 𝑡𝑑 𝑔𝑇 𝑝 ( 10 ) where 𝑑 𝑡𝑑 is distance between a reference WEC (whose time delay is equal to zero) and given WEC 𝑖, and 𝑣 𝑔 is the group speed which is defined as equal to 𝑔𝑇 𝑝 /4𝜋 here, where 𝑔=9.81 m.s² is the gravity of Earth. Given that the incoming waves are simulated as mono-directional waves, the distance 𝑑 𝑡𝑑 taken into account here is equal to the distance along the axis parallel to the wave front propagation direction. In the full hydrodynamic approach, the distance between the WECs was assumed to be equal to a fixed distance 𝑑 =600 m. However, if this constant distance were used in the time delay-based approach (i.e. 𝑑 𝑡𝑑 = 𝑑), given that the excitation force temporal profile is similar for all WECs, then all the devices located on a given row of the wave farm would present the same power output at any time 𝑡, thus resulting in coincident power profiles for 8 WECs, which is unrealistic. Hence, in order to avoid this situation, an additional uniformly distributed random distance 𝑑 𝑟𝑎𝑛𝑑 , arbitrarily selected as ranging between -50 m and +50 m (in order to represent WECs linear drift), is added to the fixed inter-WEC distance 𝑑 such as: 𝑑 𝑡𝑑 = 𝑑 + 𝑑 𝑟𝑎𝑛𝑑 where 𝑑 𝑟𝑎𝑛𝑑 ∈ [-50; 50] m (11) Ten time delay sets were used in this study. D. Electrical grid An electrical grid model was developed under the power system simulator PowerFactory and is shown in Fig. 3. This model is inspired from the Atlantic Marine Energy Test Site (AMETS) [START_REF] Amets | SEAI website[END_REF] located off Belmullet, Ireland for the onshore local grid part. 
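A minimal sketch of this time delay-based aggregation is given below. It assumes the reference power profile is available as a regularly sampled array; the mapping from a WEC's position in the 3-by-8 layout to its distance along the propagation axis, and the circular shift used at the record boundaries, are simplifying assumptions rather than choices stated in the paper.

```python
# Illustrative sketch of the time delay-based farm aggregation (eqs. (8), (10), (11)).
import numpy as np

G = 9.81  # m/s^2

def farm_power_time_delay(p_wec, dt_sample, Tp, d=600.0, n_rows=3, n_cols=8,
                          drift=50.0, rng=None):
    """p_wec: sampled power of a single WEC (W), sampling step dt_sample (s).
    Returns the farm power as a sum of delayed copies of the reference profile."""
    rng = np.random.default_rng(rng)
    v_g = G * Tp / (4.0 * np.pi)               # group speed used in eq. (10)
    p_farm = np.zeros_like(p_wec)
    for row in range(n_rows):
        for col in range(n_cols):
            # distance to the reference WEC along the propagation direction,
            # with a +/- 50 m random term representing linear drift (eq. (11))
            d_td = col * d + rng.uniform(-drift, drift)
            delay = d_td / v_g                  # eq. (10): 4*pi*d_td / (g*Tp)
            shift = int(round(delay / dt_sample))
            p_farm += np.roll(p_wec, shift)     # delayed copy of the reference profile
    return p_farm
```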
It is composed of a 10/20kV transformer whose impedance is equal to 2.10 -4 +𝑗0.06 pu (where 𝑗 is the imaginary unit) and of a 0.1 MW load representing the onshore substation connected to the rest of the national network through a 5 km-long overhead line of impedance 0.09+𝑗0.3 Ω/km. On the 20 kV bus (which is the Point of Common Coupling (PCC)), a VAr compensator maintains power factor at unity. Then, a 20/38 kV transformer (of impedance equal to 2.10 -4 +𝑗0.06 pu) connects to the farm to the local (national) grid where a 2 MW load (representing the consumption of a local town) is also connected. The rest of the national grid is modelled by means of a 38 kV voltage source in series with an impedance. This impedance magnitude is selected to be equal to Z=20 Ω (i.e. equal to a short-circuit level of 72 MVA), and its angle is selected to be equal to 30°, which corresponds to a "weak grid" and constitutes thus a worst case scenario in which relatively high flicker levels can be expected. The offshore grid is composed of a 20 km-long submarine cable of series impedance equal to 0.07+ 𝑗 0.11 Ω/km and capacitance equal to 0.31 µF/km, as described in [START_REF]Nexans data sheet[END_REF]. The cable distance was selected according to the values observed for two planned or already existing wave energy test sites [START_REF] Amets | SEAI website[END_REF], [START_REF][END_REF]. The offshore network is also composed of a 0.4/10 kV transformer (of impedance equal to 2.10 -4 +𝑗0.06 pu) and of 24 wave devices. The influence on the study results of the internal network between the WECs and the 0.4/10 kV transformer was deemed negligible and was therefore not included in the model. III. RESULTS A. Flicker level with respect to time delays As mentioned earlier, ten different time delay sets were used in the time delay-based approach. The corresponding minimum, maximum, and average short-term flicker levels 𝑃 𝑠𝑡 are shown in Table I and Table II for the two significant wave heights considered here (𝐻 𝑠 =1 m and 𝐻 𝑠 =3 m). The standard deviation, also shown in these tables, indicates that for most cases the deviation from the average value is relatively small, compared to the allowed flicker limits which range usually between 0.35 and 1 [START_REF] Blavette | Impact of a Medium-Size Wave Farm on Grids of Different Strength Levels[END_REF]. However, some higher values of the standard deviation indicate that the time delay set can have a non-negligible influence on flicker, and that it is therefore important to average the flicker level corresponding to several time delay sets in order to obtain a reasonable estimation of the flicker which would have been obtained through the more realistic, full hydrodynamic approach. As the average flicker level is mostly representative of the order of magnitude of the flicker level corresponding to the 10 different time delay sets, it will be used for the comparative study between the time delay-based approach and the full hydrodynamic approach, as described in the following section. B. Comparison of the two approaches It is shown in Fig. 4 that both approaches generate similar results with a difference which is generally negligible in comparison with the usual maximum allowed flicker limits ranging between 0.35 and 1. This observation applies to both the low-energy and the mild sea-states (𝐻 𝑠 =1 m and 𝐻 𝑠 =3 m respectively). 
Hence, it can be concluded that the flicker level generated by a wave energy farm can be estimated with a relatively high level of accuracy in most cases by means of the average flicker level corresponding to several time delay sets (here, ten time delay sets were used). In other words, this means flicker can be estimated from a single WEC power output, without further requirement for modelling the hydrodynamic conditions at each node in the farm where a WEC is expected to be located. This means also, in physical terms, that the wave dispersion phenomenon can usually be considered as negligible when it comes to flicker studies under the conditions considered in this study.

Fig. 4 Short-term flicker level 𝑃 𝑠𝑡 as a function of the sea-state peak period 𝑇 𝑝 for 𝐻 𝑠 =1 m and 𝐻 𝑠 =3 m, and for the two considered approaches

IV. CONCLUSIONS
This paper has described a comparative study between a time delay-based approach and a more realistic, full hydrodynamic approach for determining the flicker level generated by a wave farm composed of 24 devices. The results have shown that in most cases, using the average flicker level corresponding to 10 different time delay sets leads to a negligible error compared to the full hydrodynamic approach. This means that the wave dispersion phenomenon has a limited impact on flicker. However, some non-negligible values for the flicker level error in some rare cases suggest that the time delay-based approach should be restricted to estimating flicker at a first stage, before more refined studies based on the full hydrodynamic approach are conducted. Future work will focus on the comparative analysis, in terms of flicker level, between the two approaches described in this paper and a more comprehensive hydrodynamic approach including the hydrodynamic interactions between WECs. It will also investigate the influence of several parameters such as inter-WEC distance, WEC spatial arrangement, device number, etc.

Fig. 1 Wave farm spatial layout
Fig. 3 Electrical grid model developed under DIgSILENT PowerFactory

Table I  Short-term flicker levels 𝑃 𝑠𝑡 for the ten time delay sets (𝐻 𝑠 =1 m)
Peak period 𝑇 𝑝 (s)     7      9      10     12
Average                 0.09   0.40   0.50   0.76
Standard deviation      0.04   0.12   0.14   0.16
Minimum                 0.04   0.26   0.31   0.60
Maximum                 0.14   0.68   0.84   1.14

Table II  Short-term flicker levels 𝑃 𝑠𝑡 for the ten time delay sets (𝐻 𝑠 =3 m)
Peak period 𝑇 𝑝 (s)     7      9      10     12
Average                 0.63   0.80   0.96   0.83
Standard deviation      0.12   0.09   0.21   0.10
Minimum                 0.43   0.67   0.73   0.73
Maximum                 0.80   0.94   1.40   1.09

ACKNOWLEDGMENT
The research work presented in this paper was partly conducted in the frame of the QUALIPHE project (ANR-11-PRGE-0013) funded by the French National Agency of Research (ANR), which is gratefully acknowledged.
20,305
[ "17411", "4229", "5181", "17416", "16359" ]
[ "1194", "247362", "441569", "247362", "1194", "952", "111023", "121067", "121067", "1194", "247362", "247362", "1194" ]
01754060
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01754060/file/Matine-Optimal%20sizing%20of%20submarine%20cables--EWTEC2017.pdf
Abdelghani Matine Charles-Henri Bonnard Anne Blavette Salvy Bourguet François Rongère Thibaut Kovaltchouk Emmanuel Schaeffer Optimal sizing of submarine cables from an electro-thermal perspective Keywords: Submarine cables, optimal sizing, electro-thermal, wave energy converter (WEC), finite element analysis (FEA) à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Optimal sizing of submarine cables from an electro-thermal perspective Abdelghani Matine #1 , Charles-Henri Bonnard #2 , Anne Blavette* 3 , Salvy Bourguet #4 , François Rongère +5 , Thibaut Kovaltchouk°6, Emmanuel Schaeffer #7 # IREENA, Université de Nantes, 37 boulevard de l'Université, 44602 Saint-Nazaire, France 1 abdelghani.matine@univ-nantes.fr , 2 charles-henri.bonnard@univ-nantes.fr, 4 salvy.bourguet@univ-nantes.fr, 7 emmanuel.schaeffer@univ-nantes.fr * SATIE (UMR 8029), Ecole Normale Supérieure de Rennes, avenue Robert Schuman, 35170 Bruz, France 3 anne.blavette@ens-rennes.fr + LHEEA (UMR 6598), Ecole Centrale de Nantes, 1 Rue de la Noë, 44300 Nantes, France 5 francois.rongere@ec-nantes.fr °Lycée F. Roosevelt, 10 rue du Président Roosevelt, 51100 Reims, France 6 thibaut.kovaltchouk@ac-reims.fr Abstract-In similar fashion to most renewables, wave energy is capital expenditure (CapEx)-intensive: the cost of wave energy converters (WECs), infrastructure, and installation is estimated to represent 60-80% of the final energy cost. In particular, grid connection and cable infrastructure is expected to represent up to a significant 10% of the total CapEx. However, substantial economical savings could be realised by further optimising the electrical design of the farm, and in particular by optimising the submarine cable sizing. This paper will present the results of an electro-thermal study focussing on submarine cables temperature response to fluctuating electrical current profiles as generated by wave device arrays, and obtained through the finite element analysis tool COMSOL. This study investigates the maximum fluctuating current loading which can be injected through a submarine cable compared to its current rating which is usually defined under steady-state conditions, and is therefore irrelevant in the case of wave energy. Hence, using this value for design optimisation studies in the specific context of wave energy is expected to lead to useless oversizing of the cables, thus hindering the economic competitiveness of this renewable energy source. I. INTRODUCTION In similar fashion to most renewables, harnessing wave energy is capital expenditure (CapEx)-intensive: the cost of wave energy converters (WECs), infrastructure, and installation is estimated to represent 60-80% of the final energy cost [START_REF]Ocean Energy Strategic Roadmap 2016 -Building ocean energy for Europe[END_REF]. In particular, cable costs (excluding installation costs) are expected to represent up to a significant 10% of the total CapEx, based on the offshore wind energy experience [START_REF] On | Wind Turbine Technology and Operations Factbook[END_REF][START_REF] Macaskill | Offshore wind-an overview[END_REF]. However, substantial economical savings could be realised by further optimising the electrical design of the farm, and in particular by optimising the submarine cable sizing. 
Several studies have focussed on optimising the electrical network composed of the wave farm offshore network and/or of the local onshore network [START_REF] Nambiar | Optimising power transmission options for marine energy converter farms[END_REF][START_REF] Blavette | Dimensioning the Equipment of a Wave Farm: Energy Storage and Cables[END_REF]. In [START_REF] Nambiar | Optimising power transmission options for marine energy converter farms[END_REF], a techno-economic analysis was conducted on maximising the real power transfer between a wave energy farm and the grid by varying three design parameters of the considered array, including the export cable length, but excluding its current rating which was thus not considered as an optimisation variable. It seems that this latter parameter was selected as equal to the maximum current which may theoretically flow through the cable. This theoretical value was obtained from the maximum theoretical power output supposedly reached for a given sea-state and which was extracted from a power matrix. Following this, the corresponding scalar value for the maximum current for a given sea-state was obtained by means of load flow calculations based on a pre-defined electrical network model. Hence, the cable rating was assumed to be equal to the maximum value among the different current scalar values corresponding to several sea-states. In similar fashion, in [START_REF] Beels | A methodology for production and cost assessment of a farm of wave energy converters[END_REF] where a power transfer maximisation is also conducted, the cable rating is determined as well based on the maximum power output by a wave device array for different sea-states. In both these papers, there is no limitation on the considered sea-states from which energy can be extracted, apart from the operational limits of the wave energy device itself in [START_REF] Nambiar | Optimising power transmission options for marine energy converter farms[END_REF]. In other words, as long as the sea-state characteristics are compatible with the wave energy device operational limits in terms of significant wave height and period, it is considered that wave energy is harnessed. Another approach challenging this idea was proposed in [START_REF] Sharkey | Maximising value of electrical networks for wave energy converter arrays[END_REF] based on the offshore wind energy experience [START_REF] Crown | Round 3 Offshore Wind Farm Connection Study[END_REF]. This approach consists in rating the submarine export cable to a current level less than the maximum current which could be theoretically harnessed when the wave device operational constraints only are considered. The rationale underpinning this approach consists in considering that the most highly energetic seastates contribute to a negligible fraction of the total amount of energy harnessed every year. Hence, this corresponds to a negligible part of the annual revenue. However, harnessing wave energy during these highly energetic sea-states leads to an increased required current rating for the export cable whose associated cost is expected to be significantly greater than the corresponding revenue. Consequently, it seems more reasonable, from a profit maximisation perspective, to decrease the export cable current rating, even if it means shedding a part of the harnessable wave energy. 
However, in similar fashion to the papers mentioned previously, current is calculated in this paper as a scalar value representing the maximum level which can be reached during a given sea-state. In other words, the fluctuating nature of the current profile during a sea-state is not considered. However, the maximum current value, from which the cable current rating is usually calculated, flows in the cables during only a fraction of the sea-state duration. Based on a very simple model, it was shown in [START_REF] Blavette | Dimensioning the Equipment of a Wave Farm: Energy Storage and Cables[END_REF] that the slow thermal response of the cable (relatively to the fast current fluctuations generated from the waves) [START_REF] Adapa | Dynamic thermal ratings: monitors and calculation methods[END_REF][START_REF] Hosek | Dynamic thermal rating of power transmission lines and renewable resource[END_REF] leads to temperature fluctuations of limited relative amplitude compared to the current fluctuations. Hence, this implies that it could be feasible to inject a current profile whose maximum value is greater than the current rating without exceeding the conductor maximum allowed temperature, which is usually equal to 90°C for XLPE cable [START_REF] Nexans | Submarine cable 10kV[END_REF][START_REF] Abb | XLPE Submarine Cable Systems Attachment to XLPE Land Cable Systems -User's Guide[END_REF][START_REF]Nexans data sheet[END_REF]. Downrating submarine cables in this manner, compared to rating them with respect to the maximum but transient, current level flowing through it, could lead to significant savings from a CapEx point of view. In this perspective, this paper presents a detailed study on the thermal response of a submarine cable subject to a fluctuating current as generated by a wave device array. Section II will describe the development of a finite element analysis (FEA) based on a 2D thermal model of a 20kV XLPE submarine cable and performed using commercial FEA software COMSOL. The thermal response of a submarine cable to the injection of a fluctuating current profile as generated by wave energy arrays is analysed in Section III. The objective of this study is to determine the maximum current loading which can be injected through a submarine cable without exceeding the conductor thermal limit (equal to 90°C here) and compare this value to the cable rating. As mentioned earlier, this latter value is usually defined under static conditions which are irrelevant in the case of wave energy. Hence, its use for design optimisation studies in this specific context is expected to lead to useless oversizing of the cables, thus hindering the economic competitiveness of wave energy. II. THERMAL MODELLING OF THE SUBMARINE POWER CABLE A. Cable design and characteristics This study considers a 20 kV XLPE insulated power cable containing three copper conductors, each with a cross section of 95 mm² and having each a copper screen, as shown in I. The static current carrying capacity of the considered cable is equal to 290A and is calculated according to IEC standards 60287-1-1 [START_REF]Calculation of the current rating: Part 1-1 Current rating equations (100% load factor) and calculation of losses[END_REF] and 60287-2-1 [START_REF]Electric cables -Calculation of the current rating -Part 2-1: Calculation of Thermal Resistance[END_REF]. It is based on the following assumptions: B. 
Thermal Model
This section describes the development of a 2D finite element analysis (FEA) of the submarine cable thermal model using the commercial software COMSOL. In order to predict the temperature distribution within the cable, the heat transfer equation of thermal conduction in transient state is applied [START_REF] Long | Essential Heat Transfer. s.l[END_REF]:

ρ C_p ∂T/∂t = ∇·(K ∇T) + Q

where ρ is the mass density (kg.m⁻³), C_p is the specific heat capacity (J.kg⁻¹.K⁻¹), T is the cable absolute temperature (K), K is the thermal conductivity (W.m⁻¹.K⁻¹) and Q is a heat source (W.m⁻³). The heat sources in cable installations can be divided into two generic groups: heat generated in conductors and heat generated in insulators. The losses in metallic elements are the most significant losses in a cable. They are caused by Joule losses due to impressed currents, circulating currents or induced currents (also referred to as "eddy currents"). The heat produced by the cable metallic components (namely conductors, sheath and armour) can be calculated based on equations provided in IEC standard 60287-1-1 [START_REF]Calculation of the current rating: Part 1-1 Current rating equations (100% load factor) and calculation of losses[END_REF]. First, the Joule losses W_c of the conductor can be calculated by using the following formula:

W_c = I_c² R_20°C (1 + α(θ − 20)) (1 + y_s + y_p)

where R_20°C is the resistance of the cable conductor at 20°C (Ω/km), α is the constant mass temperature coefficient at 20°C (K⁻¹), θ is the conductor temperature (°C), and I_c (A) is the conductor current. Terms y_s and y_p are the skin effect factor and the proximity effect factor respectively. The sheath and armour losses (W_s and W_a respectively) can be calculated as:

W_s = λ_1 W_c ,   W_a = λ_2 W_c

where λ_1 and λ_2 are the dissipation (loss) factors of the sheath and of the armour respectively. Insulating materials also produce heat: the dielectric losses W_d in the insulation are given by

W_d = ω C U_0² tan δ

where U_0 is the applied voltage, δ is the loss angle, ω is the angular frequency and C is the capacitance per unit length. However, the heat produced in the insulating layers is expected to be significant, compared to the heat produced by the metallic components, under certain high voltage conditions only. Finally, the boundary conditions for this model are illustrated in Fig. 2. The modelled region is 7 m deep (H=7 m). The two side boundaries A and B are placed sufficiently far away from the cable so that there is no appreciable change in temperature with distance in the x-direction close to the boundaries. The cable is placed in the middle of the modelled region with respect to the x-direction, and its length is equal to 10 m (this value was proved sufficient in [START_REF] Swaffield | Methods for rating directly buried high voltage cable circuits[END_REF] for meeting the zero heat flux boundary conditions for sides A and B). The soil surface C is assumed to be at a constant ambient temperature of 12 ºC. The vertical sides A and B of the model are assumed to have a zero net heat flux across them due to their distance from the heat source. The condition on sides A and B is defined as:

−n·(K ∇T) = 0

where n is the unit vector normal to the surface. On side D, the thermal exchanges via convection between the sea bed and the seawater must be taken into account. The heat convection exchange is defined as:

−n·(K ∇T) = h (T − T_out)

where h is the heat transfer coefficient, T_out is the sea temperature and T the temperature of the upper boundary of the sea bed (side D).

C. Current temporal profiles injected in the cable
The wave farm is composed of 15 to 20 identical heaving buoys controlled passively and described in another paper [START_REF] Kovaltchouk | Influence of control strategy on the global efficiency of a Direct Wave Energy Converter with Electric Power Take-Off[END_REF].
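For illustration, the loss terms above can be evaluated with a small helper. This is not the paper's code, and any numerical inputs passed to it should be understood as placeholders rather than the actual cable data of Table III.

```python
# Illustrative helper for the IEC 60287-1-1 style loss terms discussed above.
import math

def cable_losses(I, R20, alpha, theta, y_s, y_p, lam1, lam2, C, U0, f, tan_delta):
    """Return per-unit-length losses (W/m).
    I     conductor current (A)           R20  DC resistance at 20 degC (ohm/m)
    alpha temperature coefficient (1/K)   theta conductor temperature (degC)
    y_s, y_p skin / proximity factors     lam1, lam2 sheath / armour loss factors
    C capacitance (F/m)  U0 phase-to-ground voltage (V)  f frequency (Hz)."""
    R_ac = R20 * (1.0 + alpha * (theta - 20.0)) * (1.0 + y_s + y_p)
    W_c = I**2 * R_ac                                  # conductor Joule losses
    W_s = lam1 * W_c                                   # sheath losses
    W_a = lam2 * W_c                                   # armour losses
    W_d = 2.0 * math.pi * f * C * U0**2 * tan_delta    # dielectric losses
    return {"conductor": W_c, "sheath": W_s, "armour": W_a, "dielectric": W_d}
```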
Different power output temporal profiles for a single WEC were computed by means of several combined simulation programmes described in [START_REF] Blavette | Influence of the wave dispersion phenomenon on the flicker generated by a wave farm[END_REF]. The first programme computes the wave excitation force at a single location in the wave farm. Then, the temporal profile of the excitation force is injected into a wave device model to obtain the corresponding electrical power output profile from which the wave farm power output is calculated. In order to model the device aggregation effect, the power output profiles of the other WECs composing the farm are computed by shifting the power profile of a single device by a random time delay, as described in a paper mentioned earlier [START_REF] Blavette | Influence of the wave dispersion phenomenon on the flicker generated by a wave farm[END_REF]. These power profiles are then injected into an electrical grid numerical model which is shown in Fig. 3. This model has been developed under the power system simulator PowerFactory [START_REF]DIgSILENT PowerFactory[END_REF] and is described in more detail in [START_REF] Blavette | Influence of the wave dispersion phenomenon on the flicker generated by a wave farm[END_REF]. The components of the offshore grid are highlighted in [START_REF] Blavette | Influence of the wave dispersion phenomenon on the flicker generated by a wave farm[END_REF]. A. Model validation under steady-state conditions A steady-state load current equal to 290A (i.e. equal to the steady-state capacity of the considered cable) is used for the analysis. If the model is valid, the conductor temperature should remain below its maximum allowed limit which is equal to 90°C. Table III summarizes the calculation of the different heat sources needed to solve the heat transfer problem according to the equations given in IEC standard 60287-1-1 [START_REF]Calculation of the current rating: Part 1-1 Current rating equations (100% load factor) and calculation of losses[END_REF]. The same heat fluxes are used as source terms for the FEA model, which allows to compare the calculated temperature with the two methods. Fig. 8 shows the meshing of the submarine cable and its surrounding area that need finer elements because these areas are the most important sections of the presented analysis. Then, the areas which are far away from the cables can be modelled with a coarser mesh. The steady-state temperature field distributions in the cable and in its environment are shown in Fig. 9 and Fig. 10 respectively. The calculation from the IEC standard leads to a temperature of 67°C for the copper cores and 39°C for the external sheath. It can be seen that the maximum temperature of the copper conductor resulting from the FEA is 75°C while the external sheath reached 44°C, a little bit higher but in the same order of magnitude than the IEC standards. Note that both methods return copper core temperature below the critical temperature of 90°C, which provide a safety margin with a normal current load of 290 A. Despite the higher temperatures resulting from the FEA, one can see this results comparison as a form of validation of the model, especially considering that IEC standard uses a simplified model to calculate the temperature, i.e. an electricequivalent circuit composed of thermal resistors and current sources. 
Hence, it leads us to conclude that the FEA model can be used to calculate the temperature of a submarine cable under fluctuating current as generated by a group of WECs. B. Thermal response under a fluctuating current profile This section describes the transient thermal response of the submarine cable to different current profiles as generated by an array of wave energy devices considering several sea states, as described in Table II. The objective of this study is to investigate the levels of current which can be transmitted through a submarine cable without the conductor exceeding the thermal limit of 90°C. For each simulated case, we consider the maximum value of the current and its percentage with respect to the continuous current rating of the cable, i.e. 290 A. Table IV shows that the maximum current of each current profile. It is important to highlight that these maximum currents can be far above the continuous current rating in all cases. Simulation results of such a thermal problem depend on the initial conditions. Hence, it is important to accurately define the initial thermal conditions of the surrounding soil. The simplest initial condition which can be defined is a uniform temperature field. The value of this initial thermal condition should correspond to the case where the cable is subject to a current load equal to the average of the fluctuating current profile to be applied afterwards. The role of this first phase of the simulation is to quickly bring the cable temperature close to the expected range within which it is expected to vary once the fluctuating current profile is applied, thus reducing the simulation time. We used enough sequential repetitions of the current depicted in Figs 4 to 7 to reach a simulation time of 100 ks, i.e. a duration that is necessary to reach close-toequilibrium conditions for the thermal problem. Figs. 11, 12, 13 and 14 show the conductor thermal response versus time, for Cases 1 to 4 respectively. In these cases, the current maximal values are equal to 97% to 287% of the cable capacity. The maximum temperature does not exceed the allowed limit of 90°C for the first two cases. In other words, as the temperature is below the allowed limit of 90°C, the cable could be considered as overrated with respect to the considered current profiles. The third case presents good agreement between the sea state and the sizing of the cable (Fig. 13), as the temperature is close to the maximum allowed value of 90°C. Fig. 14 shows the simulation results for Case 4. In this case, the current maximal value is up to 287 % of the cable capacity but the temperature exceeds 90°C. Hence, the cable can be considered as underrated here. In summary, under the conditions considered in this study, it appears that the cable is able to carry a fluctuating current profile whose maximum value is approximately equal to two and a half times the cable steady-state current rating. CONCLUSIONS This paper describes the results of a study focusing on the electrothermal response of a submarine cable to fluctuating current profiles as generated by a wave farm under different conditions, in particular regarding sea-state characteristics and the number of devices composing the farm. It was shown that the cable temperature remained below the allowed limit (equal to 90°C for the considered cable) when the maximum value of the injected current profile is as high as about two and a half times the (steady-state) rated current. 
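The low-pass behaviour behind these results can be illustrated with a toy, single-node thermal model: one thermal resistance and one thermal capacitance driven by Joule losses. It is only a stand-in for intuition, not the FEA model of the paper; R_th, C_th and R20 below are rough illustrative values chosen so that the steady state at 290 A lands near the reported temperature range, not fitted parameters.

```python
# Toy lumped-parameter stand-in for the FEA model: slow thermal dynamics filter
# the fast current fluctuations produced by the wave farm.
import numpy as np

def conductor_temperature(I, dt, R20=2.4e-4, R_th=3.0, C_th=2.0e4,
                          T_amb=12.0, T0=None):
    """I: current samples (A), dt: time step (s). R20 in ohm/m, R_th in K.m/W,
    C_th in J/(K.m). Returns the conductor temperature trace (degC)."""
    T = np.empty_like(I, dtype=float)
    # start from the steady state reached under the average loading,
    # mirroring the initial-condition strategy described above
    P_mean = np.mean(I**2) * R20
    T[0] = (T_amb + R_th * P_mean) if T0 is None else T0
    for n in range(1, I.size):
        P = I[n - 1]**2 * R20                        # instantaneous Joule losses (W/m)
        dT = (P - (T[n - 1] - T_amb) / R_th) / C_th  # first-order thermal balance
        T[n] = T[n - 1] + dt * dT
    return T

# Usage: T = conductor_temperature(I_profile, dt=1.0); print(T.max() < 90.0)
```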
Hence, it could be possible to size a cable to be included in a wave farm to half of the maximum current, rather than to 100% of this current, while keeping a good margin of safety. This could lead to significant savings in the CapEx of wave farm projects, thus contributing to improving the economic competitiveness of wave energy.

ACKNOWLEDGMENT
The research work presented in this paper was conducted in the frame of the EMODI project (ANR-14-CE05-0032) funded by the French National Agency of Research (ANR), which is gratefully acknowledged. This project is also supported by the S2E2 innovation centre ("pôle de compétitivité").

Fig. 1 Cross section of the considered three-phase export cable as modelled under COMSOL
The sheaths are made of polyethylene and the bedding of polypropylene yarn. The surrounding armour is made of galvanized steel. This cable is currently installed in the SEM-REV test site located off Le Croisic, France and managed by Ecole Centrale de Nantes. Usual values regarding the cable material thermal properties, as provided in IEC standard 60853-2 [15], are presented in Table I.

Fig. 2 Illustration of the cable environment and of its boundary conditions.
- Maximal allowed conductor temperature at continuous current load: 90°C
- Current frequency: 50 Hz
- Ambient temperature: 12°C
- Cable burial depth: 1.5 m
- Thermal resistivity of surroundings: 0.7 K.m/W

Fig. 3 Wave farm electrical network model as developed under PowerFactory
It is composed of 15 to 20 WECs, of a 0.4/10 kV transformer, of a 6.5 km-long cable, and of a 10/20 kV transformer. The rest of the network, located onshore, is shaded in blue in the figure. Four current temporal profiles were simulated for different sea-state characteristics (the significant wave height H_s and the peak period T_p) and device numbers, as detailed in Table II. They are shown in Fig. 4 to 7.

Fig. 4 Current profile flowing through the cable (Case 1); Figs. 5 to 7 show the corresponding current profiles for Cases 2 to 4, and Fig. 8 the meshing of the cable and its surroundings.
Fig. 9 Steady-state temperature field simulation (°C) of the submarine cable under normal load conditions.
Fig. 10 Steady-state temperature field simulation (°C) of the cable environment. The blue arrows represent the heat flux.
Fig. 13 Cable temperature versus time (Case 3)

TABLE III  Heat losses in the submarine cable
Losses type              Formula   Numerical value (W/m)
Conductor Joule losses   W_c       21.025
Sheath losses            W_s       0.2
Armour losses            W_a       0.027
Dielectric losses        W_d       0.028
23,456
[ "964269", "17411", "931284", "5181", "4229", "1243599" ]
[ "10823", "93263", "10823", "93263", "1194", "247362", "441569", "10823", "93263", "111023", "232891", "247362", "1194", "10823", "93263" ]
00175431
en
[ "spi" ]
2024/03/05 22:32:10
2006
https://hal.science/hal-00175431/file/wodes06_VG-ODS-JMF.pdf
Vincent Gourcuff email: gourcuff@lurpa.ens-cachan.fr Olivier De Smet email: desmet@lurpa.ens-cachan.fr Jean-Marc Faure email: faure@lurpa.ens-cachan.fr Efficient representation for formal verification of PLC programs * This paper addresses scalability of model-checking using the NuSMV model-checker. To avoid or at least limit combinatory explosion, an efficient representation of PLC programs is proposed. This representation includes only the states that are meaningful for properties proof. A method to translate PLC programs developed in Structured Text into NuSMV models based on this representation is described and exemplified on several examples. The results, state space size and verification time, obtained with models constructed using this method are compared to those obtained with previously published methods so as to assess efficiency of the proposed representation. I. INTRODUCTION Formal verification of PLC (Programmable Logic Controllers) programs thanks to model-checking tools has been addressed by many researchers ( [START_REF] Moon | Modeling programmable logic controllers for logic verification[END_REF], [START_REF] Rausch | Formal verification of PLC programs[END_REF], [START_REF] Huuck | A model-checking approach to safe SFCs[END_REF], [START_REF] Zoubek | Automatic verification of temporal and timed properties of control programs[END_REF], [START_REF] Frey | Formal methods in PLC programming[END_REF], [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF], [START_REF] Bel Mokadem | Verification of a timed multitask system with Uppaal[END_REF], [START_REF] Jiménez-Fraustro | A synchronous model of IEC 61131 PLC languages in SIGNAL[END_REF]). These works have yielded formal semantics of the IEC 61131-3 standardized languages [START_REF]Programmable controllers -Part 3[END_REF] as well as rules to translate PLC programs into formal models that can be taken as inputs of model-checkers such as SMV [START_REF] Mcmillan | The SMV Language[END_REF] or UPPAAL [START_REF] Bengtsson | UP-PAAL -a tool suite for automatic verification of real-time systems[END_REF]. Despite these valuable results, it is easy to observe that model-checking is not employed daily in companies that develop PLC programs (see ( [START_REF] Lucas | A study of current logic design practices in the automotive manufacturing industry[END_REF]) for a comprehensive study of logic design practices). Automation engineers prefer to use the traditional, while being tedious and not exhaustive, simulation techniques to verify that programs they have developed fulfill the application requirements. Several reasons can be put forward to explain this situation: specifying formal properties in temporal logic or in the form of timed automata is an extremely tough task for most engineers; modelcheckers provide, in case of negative proof, counterexamples that are difficult to interpret; PLC vendors do not propose commercial software able to translate automatically PLC programs into formal models, ... All these difficulties are real and solutions must be found to overcome them, e.g. libraries of application-oriented properties, explanations of counterexamples in suitable languages, automatic translation software. Nevertheless, in our view, the main obstacle to industrial use of formal verification is combinatory explosion that occurs when dealing with large size control programs. 
Formal models that underlie model-checking are indeed discrete state models such as finite state machines or timed automata. Even if properties are proved symbolically, using binary decision diagrams (BDDs) for instance, existing methods produce, from industrial, large-size PLC programs, models that include too many states to be verified by the present model-checking tools. In that case, no proof can be obtained and formal verification is then useless. The aim of the research presented in this paper is to tackle, or at least to lessen, this problem by proposing a translation method that yields, from PLC programs, formal models far smaller than those obtained with existing methods. These novel models will include only the states that are meaningful for properties proof and will therefore be less sensitive to combinatory explosion. This efficient representation of PLC programs will contribute to improve the scalability of model-checkers and to favor their industrial use.
The remainder of this paper includes five sections. Section 2 delineates the frame of our research. The principle of the translation method is explained in section 3. Section 4 describes how efficient NuSMV models can be obtained from PLC programs developed in a standardized language thanks to this method, while section 5 presents experimental results. Prospects for extending these works are given in section 6.
II. MODEL-CHECKING OF LOGIC CONTROLLERS
PLCs (Figure 1) are automation components that receive logic input signals coming from sensors, operators or other PLCs and send logic output signals to actuators or other controllers. The control algorithms that specify the values of outputs according to the current values of inputs and the previous values of outputs are implemented within PLCs in programs written in standardized languages, such as Ladder Diagram (LD), Structured Text (ST) or Instruction List (IL). These programs run under a real-time operating system whose scheduler may be multi- or mono-task. This paper focuses only on mono-task schedulers. Given this restriction, a PLC performs a cyclic task, termed PLC cycle, that includes three steps: inputs reading, program execution, outputs updating. The period of this task may be constant (periodic scan) or may vary (cyclic scan).
Previous works that have been carried out to check PLC programs properties by using existing model-checkers addressed either timed ([START_REF] Zoubek | Automatic verification of temporal and timed properties of control programs[END_REF], [START_REF] Bel Mokadem | Verification of a timed multitask system with Uppaal[END_REF]) or untimed ([1], [START_REF] Rausch | Formal verification of PLC programs[END_REF], [START_REF] Huuck | A model-checking approach to safe SFCs[END_REF], [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF], [START_REF] Jiménez-Fraustro | A synchronous model of IEC 61131 PLC languages in SIGNAL[END_REF]) model-checking. Since our objective is to facilitate industrial use of formal verification techniques by avoiding or limiting combinatory explosion, and since this objective seems more easily reachable for untimed systems, only untimed model-checking is considered in what follows. The models presented in this paper are expressed in the input language of the model-checker NuSMV [START_REF] Cimatti | NuSMV Version 2: An OpenSource Tool for Symbolic Model Checking[END_REF], though similar results would be obtained with that of other model-checkers of the same class.
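As an illustration of this execution model, the following sketch simulates a few mono-task PLC cycles in Python; the two-statement program body is hypothetical and is only meant to show that outputs are computed from the current inputs and the previous outputs, with statements executed sequentially.

```python
# Minimal sketch of the mono-task PLC cycle described above (inputs reading,
# program execution, outputs updating). The program body below is hypothetical,
# not an example taken from the paper.

def read_inputs(cycle):
    # Placeholder input image; a real PLC samples its input cards here.
    return {"I1": cycle % 2 == 0, "I2": True}

def program(inputs, prev_outputs):
    # Statements are executed sequentially: O2 reads the value of O1
    # computed just above, within the same cycle.
    out = dict(prev_outputs)
    out["O1"] = inputs["I1"] or prev_outputs["O2"]
    out["O2"] = inputs["I2"] and not out["O1"]
    return out

def run(n_cycles):
    outputs = {"O1": False, "O2": False}    # values defined when setting up the controller
    trace = []
    for cycle in range(n_cycles):
        inputs = read_inputs(cycle)         # step 1: inputs reading
        outputs = program(inputs, outputs)  # step 2: program execution
        trace.append((inputs, outputs))     # step 3: outputs updating
    return trace

if __name__ == "__main__":
    for cycle, (i, o) in enumerate(run(3)):
        print(cycle, i, o)
```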
It matters also to point out that, given the kind of systems that are considered, periodic and cyclic tasks behave in the same fashion: PLC cycle duration is meaningless. Several approaches have been proposed to translate a PLC program into a formal untimed model. For room reasons, only two of them will be sketched below. [START_REF] Rossi | Validation formelle de programmes ladder diagram pour automates programmables industriels (formal verification of PLC program written in ladder diagram)[END_REF] for instance expresses the semantics of each element (contact, coil, links,...) of LD in the form of a small state automaton. The formal behavior of a given program is then obtained by composition of the different state automata that describe its elements. This method relies upon a detailed semantics of ladder diagram and can be extended to programs written in several languages, but it gives rise easily to state space explosion, even for rather small examples. A more efficient approach ([2], [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF]) translates each program statement into a SMV next function. Each PLC cycle is then modeled by a sequence of states, the first and last states being characterized respectively by the values of input-output variables at the input reading and output updating steps, the intermediary states by the values of these variables after execution of each statement. Figure 2 illustrates this method on a didactic example written in ST. Thorough this paper, PLC programs examples will be given in ST. ST is a a textual language, similar to PASCAL, but tailor-made for automation engineers, for it includes statements to invoke and to use the outputs of Function Blocks (FB) such as RS (SR) -reset (set) dominant memory -, RE (FE) -rising (falling) edge. This language is advocated for the control systems of power plants that are targeted in the project. Equivalent programs in other sequentially executed languages, like programs written in IL or LD, can be obtained without difficulty. The program presented in Figure 2 includes four statements: two assignments followed by one IF selection and one assignment. From this program, it is possible to obtain by using the previous method (translation of each statement into a SMV next function) an execution trace whose part is shown on Figure 2, assuming that the values of the variables in the initial state (defined when setting up the controller) and the values of the input variables at the inputs reading steps of the first and second PLC cycles are respectively: • Initial values of variables: In addition to the formal model of the controller, modelcheckers need a set of formal properties to prove. Two kinds of properties are generally considered: I 1 = 1, I 2 = 0, I 3 = 1, I 4 = 0, O 1 = 0, O 2 = 0, O 3 = • Intrinsic properties, such as absence of infinite loop, no deadlock, ..., which refer to the behavior of the controller independently of its environment; • Extrinsic properties which refer to the behavior of inputs and outputs, e.g. commission of outputs for a given combination of inputs, always forbidden combination of outputs, allowed sequences of inputs-outputs,... This paper focuses only on extrinsic properties. Referring to outputs behavior, these properties impact indeed directly safety and dependability of the controlled process and then are more crucial. 
If one of them (or several) are not satisfied, hazardous events may occur, leading to significant failures. If focus is put on extrinsic properties verification, the two approaches described above lead to state automata with numerous states that are not meaningful. It can be seen indeed on Figure 2 that the intermediary states defined for each statement are not useful in that case; extrinsic properties are related only to the values of input-output variables when updating the outputs, i.e. at the end of the PLC cycle. A similar reasoning may be done for the other method. Hence efficient representation for formal verification will include only the states describing the values of input-output variables when updating outputs (shaded states in Figure 2). This representation may be obtained directly from a PLC program by applying the method whose principle is explained in the next section. III. METHOD PRINCIPLE A. Assumptions In what follows it is assumed that: • PLC programs are executed sequentially; • only Boolean variables are used; • internal variables may be included in the program; The fourth and fifth ones apply only to ST programs but similar assumptions for LD or IL programs can be easily drawn up. Iterations are forbidden because they can lead to too long cycle times that do not comply with real-time requirements. The sixth assumption may be puzzling, for contrary to the usual programming rule that advocates that each variable must be assigned only once. Even if this programming rule is helpful when developing a software module from scratch, this assumption must be introduced to cope with industrial PLC programs in which it is quite usual to find multiple assignments of the same variable. Two reasons can be put forward to explain this situation. First industrial PLC programs are often developed from previous similar ones; then programs designers copy and paste parts of previous programs in the new program. This reuse practice may lead to assign one variable several times. Second a ST program may contain both normal assignments and assignments included within selection statements; this is an other reason that explains multiple assignments. As our objective is to proof properties on existing programs, without modifying them prior to verification, this specific feature must be taken into account. It will be shown below that multiple assignments do not impede to construct efficient representation. Figure 3 outlines the translation method that has been developed to obtain efficient representation of PLC programs. As shown on this figure, this method includes two main steps: static analysis of the program and generation of the NuSMV model that describes formally the behavior of the program with regards to its inputs-outputs. In the second case, computation of the value of one output variable must use the values of output variables for this cycle if the last assignment of these output variables is located upstream in the program, or the values of output variables at the previous PLC cycle (cycle i) if those variables are assigned downstream; this computation will use obviously the values of input variables for cycle i+1. Hence, the main objective of static analysis is to determine, for each output variable, whether the value of each variable involved in computation of the value of this output variable at PLC cycle i+1 is related to PLC cycle i+1 or to PLC cycle i. Static analysis is exemplified on the program given in Figure 4. 
This ST program computes the values of five output variables (O 1 , ..., O 5 ) from those of four input variables (I 1 , ..., I 4 ) and includes only allowed statements. Some specific features of this example are to be highlighted: • the IF statement does not specify the value of O 3 if the condition following the IF is not true; this is allowed in ST language and means that the value of O 3 remains the same when this condition is false; • the assignment of O 4 uses the output of a RS (reset dominant memory) FB; • one output variable (O 1 ) is assigned twice. Scanning sequentially the program from top to bottom, statement by statement, static analysis yields dependency relations represented graphically in Figure 5 a). In this figure, an arrow from variable X to variable Y means that the value of Y depends on the value of X (or that the value of X is used to compute the value of Y). Each statement gives rise to one dependency relation. For instance, the dependency relation obtained from the first statement means that the value of O 1 depends on the values of I 1 and I 2 , the third relation that the value of O 3 is computed from the values of I 3 , I 4 , O 1 , and O 3 itself (in case of false condition), the fourth relation that the value of O 4 is computed from the values of I 1 , O 5 , and O 4 itself (if the two inputs of a memory are false, the output stays in its previous state),.... From this first set of relations, it is then possible to build an other set of more detailed relations such as: • there is only one dependency relation for each output variable (multiple assignments are removed); • dependency relations are developed, if possible; • the value of each output variable O j (j: positive integer) at PLC cycle i+1, noted O j,i+1 , is obtained from values of input variables for this cycle, noted I k,i+1 (k: positive integer), and from values of output variables for this cycle (O j,i+1 ) or for the previous one (O j,i ). This second set of relations is presented in Figure 5b This set of dependency relations involving the values of output variables for two successive PLC cycles permits to translate efficiently PLC programs into NuSMV models as explained in the next section. IV. TRANSLATING ST PROGRAMS INTO NUSMV MODELS It is assumed in this section that the reader has a basic knowledge of the model-checker NuSMV; readers who want to know more on this proof tool can refer to [START_REF] Cimatti | NuSMV Version 2: An OpenSource Tool for Symbolic Model Checking[END_REF]. To check a system, NuSMV takes as input a transition relation that specify the behavior of a Finite State Machine (FSM) which is assumed to represent this system. The transition relation of the FSM is expressed by defining the values of variables in A. Translation algorithm Each ST statement that gave rise to one of the final dependency relations is translated into one NuSMV assignment; then useless ST statements (assignments that are cancelled by other upstream assignments) are not translated. The set of useful statements is noted P r in what follows. The values of the variables within one assignment are obtained from the corresponding dependency relation. If the value of a variable in this relation is that at PLC cycle i+1, then the next value of this variable will be introduced in the corresponding NuSMV assignment, using the next function; if the dependency relation mentions the value at cycle i, then the corresponding NuSMV assignment will employ the current value of the variable. 
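The cycle-indexing rule just described can be sketched as a small analysis pass; the statement list below reuses the dependencies quoted for the program of Figure 4, except that the reads of O 2 are hypothetical and the double assignment of O 1 is not modeled, so it is an illustration rather than the authors' algorithm.

```python
# Illustrative sketch (not the authors' tool) of the static analysis described
# above: a variable read is bound to the value of the current cycle (i+1) if the
# variable is an input or was already assigned upstream in the program, and to
# the value of the previous cycle (i) otherwise.

def bind_reads(statements, input_vars):
    """statements: ordered list of (assigned_variable, read_variables)."""
    assigned_upstream = set()
    bindings = []
    for target, reads in statements:
        resolved = {}
        for v in reads:
            if v in input_vars or v in assigned_upstream:
                resolved[v] = "cycle i+1"
            else:
                resolved[v] = "cycle i"     # assigned downstream, or a self-reference
        bindings.append((target, resolved))
        assigned_upstream.add(target)
    return bindings

# Dependencies quoted in the text for the program of Figure 4; the reads of O2
# are hypothetical and the second assignment of O1 is omitted.
statements = [
    ("O1", ["I1", "I2"]),
    ("O2", ["I2", "I3"]),
    ("O3", ["I3", "I4", "O1", "O3"]),   # O1 upstream -> i+1 ; O3 itself -> i
    ("O4", ["I1", "O5", "O4"]),         # O5 assigned downstream -> i ; O4 itself -> i
    ("O5", ["O2", "O4"]),               # both assigned upstream -> i+1
]
for target, resolved in bind_reads(statements, {"I1", "I2", "I3", "I4"}):
    print(target, resolved)
```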
Given these translation rules, the translation algorithm described Figure 6 has been developed. This algorithm yields a NuSMV model from a set of statements P r issued from a PLC program. BEGIN PLC prog TO NuSMV model(P r) FOR each statement S i of P r: IF S i is an assignment (V i := expression i ) THEN FOR each variable V k in expression i : Replace V k by the variable pointed out in the dependency graph (V k,i or V k,i+1 ) ELIF S i is a conditional structure (if cond; then stmt 1 ; else stmt 2 ) FOR each variable V k in cond: Replace V k by the variable pointed out in the dependency graph (V k,i or V k,i+1 ) FOR each variable Vm assigned in S i : Replace Vm assignment by: "case cond :≺assignment of Vm in PLC prog TO NuSMV model(stmt 1 ) ; !cond : ≺assignment of Vm in PLC prog TO NuSMV model(stmt 2 ) ; esac ; " Fig. 6. Translation algorithm B. Taking into account Function Blocks If a ST assignment includes an expression involving a Boolean Function Block (FB), the behavior of this FB must be detailed in the corresponding NuSMV assignment. Hence a library of generic models describing in NuSMV syntax the behavior of the usual FBs has been developed. When translating ST assignments that include instances of FBs, instances of these generic models will be introduced into the NuSMV assignments. The RS (reset dominant memory) FB, for instance, has two inputs, noted Set and Reset, and one output Q. Its behavior is recalled below: Using the algorithm of Figure 6, the NuSMV model presented in Figure 7 can be obtained from the program of the previous section. • If Reset is true, then Q is false; • If It matters to emphasize that the translation algorithm does not introduce auxiliary variables, such as line counter, end of cycle, unlike the method proposed in [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF]. It remains nevertheless to assess the efficiency of this representation. V. ASSESSMENT OF THE REPRESENTATION EFFICIENCY Several experiments have been carried out to assess efficiency of the representation proposed in this paper. To facilitate these experiments, an automatic translation program based on the method presented in the previous sections has been developed. A. First experiment The objective of this experiment was to compare, on the simple example of Figure 4, the sizes of the state spaces of the NuSMV models obtained with the representation proposed in [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF], i.e. direct translation of each statement of the PLC program into one NuSMV assignment, and with that presented in this paper. Reachable states System diameter representation of [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF] 314 out of 14336 22 proposed representation 21 out of 512 2 The two NuSMV models have been first compared, using behavioral equivalence techniques, so as to verify that they behave in the same manner. This comparison gave a positive result: the sequence of outputs generated by the two models is the same whatever the sequence of inputs. Then the sizes of their state spaces have been computed, using the NuSMV forward check function, as shown in Table I. This table contains, for each representation, the number of reachable states among the possible states, e.g. 
314 among 14336 means that 314 states are really reachable among the 14336 possible, as well as the system diameter: the minimum number of iterations of the NuSMV model needed to obtain all the reachable states. These results show clearly that, even for a simple example, the proposed representation reduces the size of the state space by roughly one order of magnitude.
B. Second experiment
The second experiment aimed at assessing the gains in time and in memory size, if any, due to the new representation when proving properties. This experiment has been performed using the test-bed example presented in [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF]: the controller of a Fischertechnik system, for which numerical results were already available. Once again two models have been developed and the same properties have been checked on both. This experiment shows that the proposed representation reduces significantly the verification time and the memory consumption. The ratio between the verification times obtained with the two representations, for instance, varies between 9000 and 600, depending on the property. Similar results are obtained with the other properties.
C. Third experiment
This third experiment has been performed with industrial programs developed for the control of a thermal power plant. The control system of this plant comprises 175 PLCs connected by networks. All the programs running on these PLCs have been translated as explained previously. The objective of this experiment was merely to assess the maximum, mean and minimum sizes of the state spaces of the models obtained from this set of industrial programs when using the proposed representation. These values are given on the fourth line of Table III. Even if the sizes of the state spaces are very different, this experiment shows clearly the possibility of translating real PLC programs without combinatory explosion. Moreover these state spaces can be explored by the model-checker in a reasonable time, a mandatory condition for checking properties; only 8 seconds are necessary indeed to explore all the state spaces of these programs. A secondary result is given at the last line of this table: the translation time, i.e. the time necessary to obtain from the set of programs a set of NuSMV models in the presented representation, complies with engineering constraints; translating one PLC program into one NuSMV model will not slow down the PLC program design process.
Even if it is not possible to obtain from these three experiments definitive numerical conclusions, such as a state space reduction rate or a verification time improvement ratio, they have allowed to illustrate the benefits of the proposed representation on a large concrete example coming from industry.
VI. CONCLUSION
The representation of PLC programs proposed in this paper can contribute to favor dissemination of model-checking techniques, for it enables to lessen strongly state space explosion problems and to reduce verification time. The examples given in the paper were written in ST language. Nevertheless programs written in LD or in IL languages can be represented in the same manner; the principle of the translation method is the same, only the translation rules of statements are to be modified. Ongoing works concern an extension of this representation to take into account integer variables and the development of a similar representation for timed model-checking.
Fig. 1. PLC basic components.
Fig. 2. A simple program and part of the resulting trace with the method presented in [START_REF] Smet | Verification of a controller for a flexible manufacturing line written in ladder diagram via model-checking[END_REF].
• Initial values of variables (continued): O 3 = 0 and O 4 = 1
• Input variables values at the beginning of the first PLC cycle: I 1 = 0, I 2 = 0, I 3 = 1 and I 4 = 1
• Input variables values at the beginning of the second PLC cycle: I 1 = 1, I 2 = 1, I 3 = 0 and I 4 = 1
It matters to highlight that the values of input variables remain constant in all the states of one PLC cycle.
The remaining assumptions are:
• only the Boolean operators defined in the IEC 61131-3 standard (NOT, AND, OR, XOR) are allowed;
• only the following statements of the ST language are allowed: assignment, function and function block (FB) control statements, IF and CASE selection statements; iteration statements (FOR, WHILE, REPEAT) are forbidden;
• multiple assignments of the same variable are possible;
• Boolean FBs, such as the set and reset dominant memories defined in the standard or FBs that implement application-specific control rules, like actuator starting or shutting-down sequences, may be included in a program.
The first two assumptions are simple and can be made for programs in ST, LD or IL. The third assumption means that a program computes the values of internal and output variables from those of input variables and of computed (internal and output) variables; this allows us to consider internal variables in the same way as outputs in what follows.
Fig. 3. Method overview.
Fig. 4. PLC program example.
Fig. 5. Dependency relations obtained by static analysis: a) ordered intermediate relations; b) final relations.
Only the relation coming from the latter assignment of O 1 has been kept. The first relation of the previous relations set has nevertheless permitted to obtain the final dependency relation of O 3 : the value of this variable at cycle i+1 is obtained from the values of I 1 , I 2 , I 3 , I 4 for cycle i+1 and the value of O 3 at cycle i. The computation of the value of O 4 at cycle i+1 uses the value of O 5 at cycle i, for this variable is assigned after O 4 in the program, whilst the value of O 5 at cycle i+1 is computed from the values of O 2 and O 4 at this same cycle, because these two variables have been assigned upstream in the program.
Fig. 7. NuSMV model of the program presented in Figure 4.
TABLE I. State space sizes of the program presented in Figure 4.
Table II gives the duration and memory consumption of the checking process for two properties. These results were obtained by using NuSMV, version 2.3.1, on a PC P4 3.2 GHz, with 1 GB of RAM, under Windows XP.
TABLE II. Time and memory required for properties verification:
• liveness property: 5 h / 526 MB with the representation of [6], 2 s / 8 MB with the proposed representation;
• safety property: 20 min / 200 MB with the representation of [6], 2 s / 8 MB with the proposed representation.
TABLE III. Results for a set of industrial programs:
• Number of programs: 175
• Output variables: max 47, min 1, sum 1822
• Input variables: max 50, min 2, sum 2329
• State space size of each program: max 8·10^28, min 10^5, mean 5·10^26
• Exploration time of all state spaces: 8 sec
• Whole time for translation: 50 sec
* This work was carried out in the frame of a research project funded by Alstom Power Plant Information and Control Systems, Engineering tools Department.
27,878
[ "842993", "735194" ]
[ "30464", "30464", "30464" ]
01754478
en
[ "info" ]
2024/03/05 22:32:10
2017
https://hal.science/tel-01754478/file/New%20Architectures%20for%20Handwritten%20Mathematical%20Expressions%20Recognition_vf-fr.pdf
CYK: Cocke-Younger-Kasami. DT: Delaunay Triangulation. CROHME: Competition on Recognition of Handwritten Mathematical Expressions. CTC: Connectionist Temporal Classification. AC: Averaged Center. ANNs: Artificial Neural Networks. BAR: Block Angle Range. BB: Bounding Box. BBC: Bounding Box Center. BPTT: Back Propagation Through Time. CPP: Closest Point Pair. UAR: Unblocked Angle Range. VAR: Visibility Angle Range. (Two-)Dimensional Probabilistic Context-Free Grammars.
Keywords: Mathematical expression recognition, recurrent neural networks, BLSTM, online handwriting (Reconnaissance d'expressions mathématiques, réseaux de neurones récurrents, BLSTM, écriture en ligne).
Thanks to the various encounters and choices in life, I could have the experience of studying in France at a fairly young age. Along the way, I met a lot of beautiful people and things. Christian and Harold, you are such nice professors. This thesis would not have been possible without your considerate guidance, advice and encouragement. Thank you for sharing your knowledge and experience, for reading my papers and thesis over and over and providing meaningful comments. Your serious attitude towards work has a deep impact on me, today and tomorrow. Harold, thanks for your help with technical matters during the three years of study. Thanks to all the colleagues from IVC/IRCCyN and IPI/LS2N for giving me such a nice working environment, for so many warm moments, and for helping me whenever I needed someone to speak French to negotiate on the phone, many times. Suiyi and Zhaoxin, thanks for being rice friends with me at each lunch in Polytech. Thanks to all the friends I met in Nantes.
Introduction
In this thesis, we explore the idea of online handwritten Mathematical Expression (ME) interpretation using Bidirectional Long Short-Term Memory (BLSTM) networks and the Connectionist Temporal Classification (CTC) topology, and finally build a graph-driven recognition system, bypassing the high time complexity and manual work required by classical grammar-driven systems. The advanced recurrent neural network BLSTM with a CTC output layer has achieved great success in sequence labeling tasks, such as text and speech recognition. However, the move from sequence recognition to mathematical expression recognition is far from being straightforward. Unlike text or speech, where only a left-right (or past-future) relationship is involved, a ME has a 2-dimensional (2-D) structure consisting of relationships like subscript and superscript. To solve this recognition problem, we propose a graph-driven system, extending the chain-structured BLSTM to a tree-structured topology to handle the 2-D structure of MEs, and extending CTC to a local CTC that constrains the outputs locally.
In the first section of this chapter, we introduce the motivation of our work from both the research and the practical application points of view. Section 1.2 provides a global view of the mathematical expression recognition problem, covering some basic concepts and the challenges involved in it. Then in Section 1.3, we describe the proposed solution concisely, to offer the reader an overall view of the main contributions of this work. The thesis structure is presented at the end of the chapter.
Motivation
A visual language is defined as any form of communication that relies on two- or three-dimensional graphics rather than simply (relatively) linear text [Kremer, 1998].
Mathematical expressions, plans and musical notations are commonly used cases in visual languages [START_REF] Marriott | A survey of visual language specification and recognition[END_REF]. As an intuitive and easily (relatively) comprehensible knowledge representation model, mathematical expression (Figure 1.1) could help the dissemination of knowledge in some related domains and therefore is essential in scientific documents. Currently, common ways to input mathematical expressions into electronic devices include typesetting systems such as L A T E X and mathematical editors such as the one embedded in MS-Word. But these ways require that users could hold a large number of codes and syntactic rules, or handle the troublesome manipulations with keyboards and mouses as interface. As another option, being able to input mathematical expressions by hand with a pen tablet, as we write them on paper, is a more efficient and direct mean to help the preparation of scientific document. Thus, there comes the problem of handwritten mathematical expression recognition. Incidentally, the recent large developments of touch screen devices also drive the research of this field. Handwritten mathematical expression recognition is an appealing topic in pattern recognition field since it exhibits a big research challenge and underpins many practical applications. From a scientific point of view, a large set of symbols (more than 100) needs to be recognized, and also the 2 dimensional (2-D) structures (specifically the relationships between a pair of symbols, for example superscript and subscript), both of which increase the difficulty of this recognition problem. With regard to the application, it offers an easy and direct way to input MEs into computers, and therefore improves productivity for scientific writers. Research on the recognition of math notation began in the 1960's [START_REF] Anderson | Syntax-directed recognition of hand-printed two-dimensional mathematics[END_REF], and several research publications are available in the following thirty years [START_REF] Chang | A method for the structural analysis of two-dimensional mathematical expressions[END_REF][START_REF] Martin | Computer input/output of mathematical expressions[END_REF][START_REF] Anderson | Two-dimensional mathematical notation[END_REF]. Since the 90's, with the large developments of touch screen devices, this field has started to be active, gaining amounts of research achievement and considerable attention from the research community. A number of surveys [START_REF] Blostein | Recognition of mathematical notation[END_REF][START_REF] Chan | Mathematical expression recognition: a survey[END_REF][START_REF] Tapia | A survey on recognition of on-line handwritten mathematical notation[END_REF][START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF] summarize the proposed techniques for math notation recognition. This research domain has been boosted by the Competition on Recognition of Handwritten Mathematical Expressions (CROHME) [START_REF] Mouchère | Advancing the state of the art for handwritten math recognition: the crohme competitions, 2011-2014[END_REF], which began as part of the International Conference on Document Analysis and Recognition (ICDAR) in 2011. It provides a platform for researchers to test their methods and compare them, and then facilitate the progress in this field. It attracts increasing participation of research groups from all over the world. 
In this thesis, the provided data and evaluation tools from CROHME will be used and results will be compared to participants. Mathematical expression recognition We usually divide handwritten MEs into online and offline domains. In the offline domain, data is available as an image, while in the online domain it is a sequence of strokes, which are themselves sequences of points recorded along the pen trajectory. Compared to the offline ME, time information is available in online form. This thesis will be focused on online handwritten ME recognition. For the online case, a handwritten mathematical expression could have one or more strokes and a stroke is a sequence of points sampled from the trajectory of the writing tool between a pen-down and a pen-up at a fixed interval of time. For example, the expression z d + z shown in Figure 1.2 is written with 5 strokes, two strokes of which belong to the symbol '+'. Generally, ME recognition involves three tasks [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]]: (1) Symbol Segmentation, which consists in grouping strokes that belong to the same symbol. In Figure 1.3, we illustrate the segmentation of the expression z d + z where stroke3 and stroke4 are grouped as a symbol candidate. This task becomes very difficult in the presence of delayed strokes, which occurs when interspersed symbols are written. For example, it could be possible in the real case that someone write first a part of the symbol '+' (stroke3), and then the symbol 'z' (stroke5), in the end complete the other part of the symbol '+' (stroke4). Thus, in fact any combination of any number of strokes could form a symbol candidate. It is exhausting to take into account each possible combination of strokes, especially for complex expressions having a large number of strokes. (2) Symbol Recognition, the task of labeling the symbol candidates to assign each of them a symbol class. Still considering the same sample z d + z, Figure 1.4 presents the symbol recognition of it. This is as well a difficult task because the number of classes is quite important, more than one hundred different symbols including digits, alphabet, operators, Greek letters and some special math symbols; it exists an overlapping between some symbol classes: (1) for instance, digit '0', Greek letter 'θ', and character 'O' might look about the same when considering different handwritten samples (inter-class variability); (2) there is a large intra-class variability because each writer has his own writing style. Being an example of inter-class variability, the stroke5 in Figure 1.4 looks like and could be recognized as 'z', 'Z' or '2'. To address these issues, it is important to design robust and efficient classifiers as well as a large training data set. Nowadays, most of the proposed solutions are based on machine learning algorithms such as neural networks or support vector machines. (3) Structural Analysis, its goal is to identify spatial relations between symbols and with the help of a 2-D language to produce a mathematical interpretation, such as a symbol relation tree which will be emphasized in later chapter. For instance, the Superscript relationship between the first 'z' and 'd', and the Right relationship between the first 'z' and '+' as illustrated in Figure 1.5. Figure 1.6 provides the corresponding symbol relation tree which is one of the possible ways to represent math expressions. 
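To make these three tasks concrete, the small sketch below (illustrative only; the point coordinates and the exact grouping are placeholders) shows one possible in-memory representation of the z^d + z example: strokes as point sequences, symbols as labeled groups of strokes, and the structure as labeled relations between symbols.

```python
# Minimal sketch of the data involved in the three tasks, for the z^d + z
# example written with 5 strokes (the '+' uses strokes 3 and 4).
# Point coordinates are placeholders, not real pen-trajectory samples.

expression = {
    # a stroke is a sequence of points sampled between a pen-down and a pen-up
    "strokes": {
        1: [(0, 0), (1, 2)],      # first 'z'
        2: [(2, 3), (3, 4)],      # 'd'
        3: [(4, 1), (5, 1)],      # horizontal part of '+'
        4: [(4, 0), (4, 2)],      # vertical part of '+'
        5: [(6, 0), (7, 2)],      # second 'z'
    },
    # symbol segmentation + recognition: groups of strokes with a class label
    "symbols": [("z", [1]), ("d", [2]), ("+", [3, 4]), ("z", [5])],
    # structural analysis: spatial relations between symbols (indices into "symbols");
    # the last Right relation is implied by the layout of z^d + z
    "relations": [(0, 1, "Superscript"), (0, 2, "Right"), (2, 3, "Right")],
}

print(expression["symbols"])
print(expression["relations"])
```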
Structural analysis strongly depends on the correct understanding of relative positions among symbols. Most approaches consider only local information (such as relative symbol positions and their sizes) to determine the relation between a pair of symbols. Although some approaches have proposed the use of contextual information to improve system performances, modeling and using such information is still challenging. These three tasks can be solved sequentially or jointly. In the early stages of the study, most of the proposed solutions [START_REF] Chou | Recognition of equations using a two-dimensional stochastic context-free grammar[END_REF][START_REF] Koschinski | Segmentation and recognition of symbols within handwritten mathematical expressions[END_REF], Winkler et al., 1995[START_REF] Matsakis | Recognition of handwritten mathematical expressions[END_REF][START_REF] Zanibbi | Recognizing mathematical expressions using tree transformation[END_REF][START_REF] Tapia | Recognition of on-line handwritten mathematical expressions using a minimum spanning tree construction and symbol dominance[END_REF][START_REF] Tapia | Understanding mathematics: A system for the recognition of on-line handwritten mathematical expressions[END_REF][START_REF] Zhang | Using fuzzy logic to analyze superscript and subscript relations in handwritten mathematical expressions[END_REF] are sequential ones which treat the THE PROPOSED SOLUTION recognition problem as a two-step pipeline process, first symbol segmentation and classification, and then structural analysis. The task of structural analysis is performed on the basis of the symbol segmentation and classification result. The main drawback of these sequential methods is that the errors from symbol segmentation and classification will be propagated to structural analysis. In other words, symbol recognition and structural analysis are assumed as independent tasks in the sequential solutions. However, this assumption conflicts with the real case in which these three tasks are highly interdependent by nature. For instance, human beings recognize symbols with the help of global structure, and vice versa. The recent proposed solutions, considering the natural relationship between the three tasks, perform the task of segmentation at the same time build the expression structure: a set of symbol hypotheses maybe generated and a structural analysis algorithm may select the best hypotheses while building the structure. The integrated solutions use contextual information (syntactic knowledge) to guide segmentation or recognition, preventing from producing invalid expressions like [a + b). These approaches take into account contextual information generally with grammar (string grammar [Yamamoto et al., 2006, Awal et al., 2014, Álvaro et al., 2014b, 2016[START_REF] Maclean | A new approach for recognizing handwritten mathematics using relational grammars and fuzzy sets[END_REF] and graph grammar [Celik andYanikoglu, 2011, Julca-Aguilar, 2016]) parsing techniques, producing expressions conforming to the rules of a manually defined grammar. Either string or graph grammar parsing, each one has a high computational complexity. In conclusion, generally the current state of the art systems are grammar-driven solutions. For these grammar-driven solutions, it requires not only a large amount of manual work for defining grammars, but also a high computational complexity for grammar parsing process. 
As an alternative approach, we propose to explore a non grammar-driven solution for recognizing math expression. This is the main goal of this thesis, we would like to propose new architectures for mathematical expression recognition with the idea of taking advantage of the recent advances in recurrent neural networks. The proposed solution As well known, Bidirectional Long Short-term Memory (BLSTM) network with a Connectionist Temporal Classification (CTC) output layer achieved great success in sequence labeling tasks, such as text and speech recognition. This success is due to the LSTM's ability of capturing long-term dependency in a sequence and the effectiveness of CTC training method. Unlike the grammar-driven solutions, the new architectures proposed in this thesis include contextual information with BLSTM instead of grammar parsing technique. In this thesis, we will explore the idea of using the sequence-structured BLSTM with a CTC stage to recognize 2-D handwritten mathematical expression. Mathematical expression recognition with a single path. As a first step to try, we consider linking the last point and the first point of a pair of strokes successive in the input time to allow the handwritten ME to be handled with BLSTM topology. As shown in Figure 1.7, after processing, the original 5 visible strokes Figure 1.7 -Introduction of traits "in the air" turn out to be 9 strokes; in fact, they could be regarded as a global sequence, just as same as the regular 1-D text. We would like to use these later added strokes to represent the relationships between pairs of stokes by assigning them a ground truth label. The remaining work is to train a model using this global sequence with a BLSTM and CTC topology, and then label each stroke in the global sequence. Finally, with the sequence of outputted labels, we explore how to build a 2-D expression. The framework is illustrated in Figure 1.8. Mathematical expression recognition by merging multiple paths. Obviously, the solution of linking only pairs of strokes successive in the input time could handle just some relatively simple expressions. For complex expressions, some relationships could be missed such as the Right relationship between stroke1 and stroke5 in Figure 1.7. Thus, we turn to a graph structure to model the relationships between strokes in mathematical expressions. We illustrate this new proposal in Figure 1.9. As shown, the input of the recognition system is an handwritten expression which is a sequence of strokes; the output is the stroke label graph which consists of the information about the label of each stroke and the relationships between stroke pairs. As the first step, we derive an intermediate graph from the raw input considering both the temporal and spatial information. In this graph, each node is a stroke and edges are added according to temporal or spatial properties between strokes. We assume that strokes which are close to each other in time and space have a high probability to be a symbol candidate. Secondly, several 1-D paths will be selected from the graph since the classifier model we are considering is a sequence labeller. Indeed, a classical BLSTM-RNN model is able to deal with only sequential structure data. Next, we use the BLSTM classifier to label the selected 1-D paths. This stage consists of two steps --the training and recognition process. Finally, we merge these labeled paths to build a complete stroke label graph. Mathematical expression recognition by merging multiple trees. 
Human beings interpret handwritten math expression considering the global contextual information. However, in the current system, even though several paths from one expression are taken into account, each of them is considered individually. The classical BLSTM model could access information from past and future in a long range but the information outside the single sequence is of course not accessible to it. Thus, we would like to develop a neural network model which could handle directly a structure not limited to a chain. With this new neural network model, we could take into account the information in a tree instead of a single path at one time when dealing with one expression. We extend the chain-structured BLSTM to tree structure topology and apply this new network model for online math expression recognition. Figure 1.10 provides a global view of the recognition system. Similar to the framework presented in Figure 1.9, we first drive an intermediate graph from the raw input. Then, instead of 1-D paths, we consider from the graph deriving trees which will be labeled by tree-based BLSTM model as a next step. In the end, these labeled trees will be merged to build a stroke label graph. Thesis structure Chapter 2 describes the previous works on ME representation and recognition. With regards to representation, we introduce the symbol relation tree (symbol level) and the stroke label graph (stroke level). Furthermore, as an extension, we describe the performance evaluation based on stroke label graph. For ME recognition, we first review the entire history of this research subject, and then only focus on more recent solutions which are used for a comparison with the new architectures proposed in this thesis. Chapter 3 is focused on sequence labeling using recurrent neural networks, which is the foundation of our work. First of all, we explain the concept of sequence labeling and the goal of this task shortly. Then, the next section introduces the classical structure of recurrent neural network. The property of this network is that it can memorize contextual information but the range of the information could be accessed is quite limited. Subsequently, long short-term memory is presented with the aim of overcoming the disadvantage of the classical recurrent neural network. The new architecture is provided with the ability of accessing information over long periods of time. Finally, we introduce how to apply recurrent neural network for the task of sequence labeling, including the existing problems and the solution to solve them, i.e. the connectionist temporal classification technology. In Chapter 4, we explore the idea of recognizing ME expressions with a single path. Firstly, we globally introduce the proposal that builds stroke label graph from a sequence of labels, along with the existing limitations in this stage. Then, the entire process of generating the sequence of labels with BLSTM and local CTC given the input is presented in detail, including firstly feeding the inputs of BLSTM, then the training and recognition stages. Finally, the experiments and discussion are described. One main drawback of the strategy proposed in this chapter is that only stroke combinations in time series are used in the representation model. Thus, some relationships are missed at the modeling stage. In Chapter 5, we explore the idea of recognizing ME expressions by merging multiple paths, as a new model to overcome some limitations in the system of Chapter 4. 
The proposed solution will take into account more possible stroke combinations in both time and space such that less relationships will be missed at the modeling stage. We first provide an overview of graph representation related to build a graph from raw mathematical expression. Then we globally describe the framework of mathematical expression recognition by merging multiple paths. Next, all the steps of the recognition system are explained one by one in detail. Finally, the experiment part and the discussion part are presented respectively. One main limitation is that we use the classical chain-structured BLSTM to label a graph-structured input data. In Chapter 6, we explore the idea of recognizing ME expressions by merging multiple trees, as a new model to overcome the limitation of the system of Chapter 5. We extend the chain-structured BLSTM to tree structure topology and apply this new network model for online math expression recognition. Firstly, a short overview with regards to the non-chain-structured LSTM is provided. Then, we present the new proposed neural network model named tree-based BLSTM. Next, the framework of ME recognition system based on tree-based BLSTM is globally introduced. Hereafter, we focus on the specific techniques involved in this system. Finally, experiments and discussion parts are covered respectively. In Chapter 7, we conclude the main contributions of this thesis and give some thoughts about future work. I State of the art 2 Mathematical expression representation and recognition This chapter introduces the previous works regarding to ME representation and ME recognition. In the first part, we will review the different representation models on symbol and stroke level respectively. On symbol level, symbol relation (layout) tree is the one we mainly focus on; on stroke level, we will introduce stroke label graph which is a derivation of symbol relation tree. Note that stroke label graph is the final output form of our recognition system. As an extension, we also describe the performance evaluation based on stroke label graph. In the second part, we review first the history of this recognition problem, and then put emphasize on more recent solutions which are used for a comparison with the new architectures proposed in this thesis. Mathematical expression representation Structures can be depicted at three different levels: symbolic, object and primitive [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF]. In the case of handwritten ME, the corresponding levels are expression, symbol and stroke. In this section, we will first introduce two representation models of math expression at the symbol level, especially Symbol Relation Tree (SRT). From the SRT, if going down to the stroke level, a Stroke Label Graph (SLG) could be derived, which is the current official model to represent the ground-truth of handwritten math expressions and also for the recognition outputs in Competitions CROHME. Symbol level: Symbol relation (layout) tree It is possible to describe a ME at the symbol level using a layout-based SRT, as well as an operator tree which is based on operator syntax. Symbol layout tree represents the placement of symbols on baselines (writing lines), and the spatial arrangement of the baselines [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]. 
As shown in Figure 2.1a, the symbols '(', 'a', '+', 'b', ')' share a writing line while '2' belongs to another writing line. An operator tree represents the operator and relation syntax for an expression [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]. The operator tree for $(a+b)^2$ shown in Figure 2.1b represents the addition of 'a' and 'b', squared. We will focus only on the symbol relation tree in what follows since it is closely related to our work. In a SRT, nodes represent symbols, while labels on the edges indicate the relationships between symbols. For example, in Figure 2.2a, the first symbol '-' on the baseline is the root of the tree; the symbol 'a' is Above '-' and the symbol 'c' is Below '-'. In Figure 2.2b, the symbol 'a' is the root; the symbol '+' is on the Right of 'a'. As a matter of fact, a node inherits the spatial relationships of its ancestor. In Figure 2.2a, node '+' inherits the Above relationship of its ancestor 'a'. Thus, '+' is also Above '-', as 'a' is. Similarly, 'b' is on the Right of 'a' and Above the '-'. Note that all the inherited relationships are ignored when we depict the SRTs in this work. This will also be the case in the evaluation stage since knowing the original edges is enough to ensure a proper representation. 101 classes of symbols have been collected in the CROHME data set, including digits, letters, operators and so on. Six spatial relationships are defined in the CROHME competition; they are: Right, Above, Below, Inside (for square root), Superscript, Subscript. For the case of nth-roots, like $\sqrt[3]{x}$ as illustrated in Figure 2.3a, we define that the symbol '3' is Above the square root and 'x' is Inside the square root. The limits of an integral and of a summation are designated as Above or Superscript and Below or Subscript depending on the actual position of the bounds. For example, in the expression $\sum_{i=0}^{n} a_i$ written with the bounds above and below the sum sign, 'n' is Above the 'Σ' and 'i' is Below the 'Σ' (Figure 2.3b). When we consider another writing of $\sum_{i=0}^{n} a_i$ with the bounds placed to the upper right and lower right of the sum sign, 'n' is Superscript of the 'Σ' and 'i' is Subscript of the 'Σ'. The same strategy holds for the limits of an integral. As can be seen in Figure 2.3c, the first 'x' is Subscript of the '∫' in the expression $\int_x x\,dx$.
File formats for representing SRT
File formats for representing SRT include Presentation MathML and LaTeX, as shown in Figure 2.4. Compared to LaTeX, Presentation MathML contains additional tags to identify symbol types; these are primarily for formatting [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]. By the way, there are several file encodings for operator trees, including Content MathML and OpenMath [START_REF] Davenport | Unifying math ontologies: A tale of two standards[END_REF]Kohlhase, 2009, Dewar, 2000].
The SRT represents a math expression at the symbol level. If we go down to the stroke level, a stroke label graph (SLG) can be derived from the SRT. In a SLG, nodes represent strokes, while labels on the edges encode either segmentation information or symbol relationships. Relationships are defined at the level of symbols, implying that all strokes (nodes) belonging to one symbol have the same input and output edges. Consider the simple expression 2+2 written using four strokes (two strokes for '+') in Figure 2.5. The four strokes are indicated as s1, s2, s3, s4 in writing order. 'R' stands for the left-right relationship. A dashed edge corresponds to segmentation information; it indicates that a pair of strokes belongs to the same symbol.
In this case, the edge label is the same as the common symbol label. On the other hand, the non-dashed edges define spatial relationships between nodes and are labeled with one of the different possible relationships between symbols. As a consequence, all strokes belonging to the same symbol are fully connected, nodes and edges sharing the same symbol label; when two symbols are in relation, all strokes from the source symbol are connected to all strokes from the target symbol by edges sharing the same relationship label. Since CROHME 2013, SLG has been used to represent mathematical expressions [START_REF] Mouchère | Advancing the state of the art for handwritten math recognition: the crohme competitions, 2011-2014[END_REF]. As the official format to represent the ground-truth of handwritten math expressions and also for the recognition outputs, it allows detailed error analysis on stroke, symbol and expression levels. In order to be comparable to the ground truth SLG and allow error analysis on any level, our recognition system aims to generate SLG from the input. It means that we need a label decision for each stroke and each stroke pair used in a symbol relation. File formats for representing SLG The file format we are using for representing SLG is illustrated with the example 2 + 2 in Figure 2.6a. For each node, the format is like 'N, N odeIndex, N odeLabel, P robability' where P robability is always 1 in ground truth and depends on the classifier in system output. When it comes to edges, the format will be 'E, F romN odeIndex, T oN odeIndex, EdgeLabel, P robability'. An alternative format could be like the one shown in Figure 2.6b, which contains the same information as the previous one but with a more compact appearance. We take symbol as an individual to represent in this compact version but include the stroke level information also. For each object (or symbol), the format is 'O, ObjectIndex, ObjectLabel, P robability, StrokeList' in which StrokeList' lists the indexes of the strokes this symbol consists of. Similarly, the representation for relationships is formatted as 'EO, F romObjectIndex, T oObjectIndex, RelationshipLabel, P robability'. Performance evaluation with stroke label graph As mentioned in last section, both the ground truth and the recognition output of expression in CROHME are represented as SLGs. Then the problem of performance evaluation of a recognition system is essentially measuring the difference between two SLGs. This section will introduce how to compute the distance between two SLGs. A SLG is a directed graph that can be visualized as an adjacency matrix of labels (Figure 2.7). Figure 2.7a provides the format of the adjacency matrix: the diagonal refers stroke (node) labels and other cells interpret stroke pair (edge) labels [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF]. Figure 2.7b presents the adjacency matrix of labels corresponding to the SLG in Figure 2.5c. The underscore '_' identifies that this edge exists and the label of it is N oRelation, or this edge does not exist. The edge e14 with the label of R is an inherited relationship which is not reflected in SLG as we said before. Suppose we have 'n' strokes in one expression, the number of cells in the adjacency matrix is n 2 . Among these cells, 'n' cells represent the labels of strokes while the other 'n(n -1)' cells interpret the segmentation information and relationships. 
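As a minimal illustration (not the CROHME evaluation tool), the sketch below builds the ground-truth label adjacency matrix of the 2 + 2 example just described; '_' marks cells that carry no explicit label, including inherited edges such as e14.

```python
# Sketch of the label adjacency matrix described above for the 2+2 example
# (four strokes s1..s4, the '+' written with strokes s2 and s3). Diagonal
# cells hold stroke labels, off-diagonal cells hold edge labels; '_' stands
# for NoRelation or for an edge that is only inherited (e.g. s1 -> s4).

strokes = ["2", "+", "+", "2"]          # node labels (diagonal)
n = len(strokes)
matrix = [["_"] * n for _ in range(n)]
for i, label in enumerate(strokes):
    matrix[i][i] = label

# segmentation edges: the two strokes of '+' share their symbol label, both directions
matrix[1][2] = matrix[2][1] = "+"

# relationship edges: every stroke of the source symbol points to every stroke
# of the target symbol with the same relation label
for i, j in [(0, 1), (0, 2), (1, 3), (2, 3)]:
    matrix[i][j] = "Right"

for row in matrix:
    print(" ".join(f"{cell:>5}" for cell in row))
```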
In order to analyze recognition errors in detail, Zanibbi et al. defined a set of metrics over SLGs in [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF]. They are listed as follows:
• ∆C, the number of stroke labels that differ.
• ∆S, the number of segmentation errors.
• ∆R, the number of spatial relationship errors.
• ∆L = ∆S + ∆R, the number of edge labels that differ.
• ∆B = ∆C + ∆L = ∆C + ∆S + ∆R, the Hamming distance between the two adjacency matrices.
Suppose that the sample '2 + 2' was interpreted as '2 - 1²' as shown in Figure 2.8; comparing the two adjacency matrices (the ground truth in Figure 2.7b and the recognition output in Figure 2.8) gives:
• ∆C = 2, cells l2 and l3. The stroke s2 was wrongly recognized as '1' while s3 was incorrectly labeled as '-'.
• ∆S = 2, cells e23 and e32. The symbol '+' written with 2 strokes was recognized as two isolated symbols.
• ∆R = 1, cell e24. The Right relationship was recognized as Superscript.
• ∆L = ∆S + ∆R = 2 + 1 = 3.
• ∆B = ∆C + ∆L = ∆C + ∆S + ∆R = 2 + 2 + 1 = 5.
Zanibbi et al. defined two additional metrics at the expression level:
• ∆B_n = ∆B / n², the rate of label errors in the adjacency matrix, where 'n' is the number of strokes; ∆B_n is the Hamming distance normalized by the label graph size n².
• ∆E, an error measure averaged over the three types of errors ∆C, ∆S and ∆L. As ∆S is part of ∆L, segmentation errors are emphasized more than the other edge errors ∆R in this metric [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF]:

∆E = ( ∆C/n + √(∆S/(n(n-1))) + √(∆L/(n(n-1))) ) / 3 (2.1)

For the sample shown in Figure 2.8b, this yields:
• ∆B_n = ∆B / n² = 5 / 4² = 5/16 = 0.3125
• ∆E = ( 2/4 + √(2/(4 × 3)) + √(3/(4 × 3)) ) / 3 = 0.4694 (2.2)

Given the SLG representation and the defined metrics, 'precision' and 'recall' rates at any level (stroke, symbol and expression) can be computed [START_REF] Zanibbi | Evaluating structural pattern recognition for handwritten math via primitive label graphs[END_REF]; they are the current indexes for assessing the performance of the systems in CROHME. 'Recall' and 'precision' rates are commonly used to evaluate results in machine learning experiments [START_REF] Powers | Evaluation: from precision, recall and f-measure to roc, informedness, markedness and correlation[END_REF]. In different research fields, such as information retrieval and classification, different terminologies are used to define 'recall' and 'precision', but the basic theory behind them remains the same. In the context of this work, we use the case of segmentation results to explain the 'recall' and 'precision' rates. To define them properly, several related terms are given first, as shown in Table 2.1: 'segmented' and 'not segmented' refer to the prediction of the classifier, while 'relevant' and 'non relevant' refer to the ground truth. 'Recall' is defined as

recall = tp / (tp + fn) (2.3)

and 'precision' is defined as

precision = tp / (tp + fp) (2.4)

In Figure 2.8, '2 + 2' written with four strokes was recognized as '2 - 1²'. In this case, tp is equal to 2 since the two '2' symbols were segmented and they exist in the ground truth; fp is equal to 2 because '-' and '1' were segmented but are not in the ground truth; fn is equal to 1 since '+' was not segmented although it is in the ground truth. Thus, 'recall' is 2/(2+1) and 'precision' is 2/(2+2). A 'recall' larger than 'precision' means that the symbols are over-segmented in our context.
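As an illustration, these metrics can be computed directly from the two label maps loaded by the previous sketch. The snippet below is a simplified sketch and not the official evaluation tool: the rule used to split edge disagreements between ∆S and ∆R (an edge counts as a segmentation error when either of the two labels is a same-symbol label) is our reading of the example above, absent edges are treated as '_', and equation (2.1) is applied as written. On the '2 + 2' example recognized as '2 - 1²' it reproduces ∆B = 5, ∆B_n = 0.3125 and ∆E ≈ 0.4694.

import math

def slg_metrics(gt_nodes, gt_edges, out_nodes, out_edges, seg_labels):
    # gt_nodes/out_nodes: node index -> label; gt_edges/out_edges: (i, j) -> label.
    # seg_labels: set of labels denoting a same-symbol (segmentation) edge.
    # Absent edges are implicitly labeled '_' (NoRelation).
    n = len(gt_nodes)
    dC = sum(1 for i in gt_nodes if out_nodes.get(i) != gt_nodes[i])
    dS = dR = 0
    for cell in set(gt_edges) | set(out_edges):
        g = gt_edges.get(cell, '_')
        o = out_edges.get(cell, '_')
        if g != o:
            # a disagreement involving a segmentation edge counts in dS, otherwise in dR
            if g in seg_labels or o in seg_labels:
                dS += 1
            else:
                dR += 1
    dL = dS + dR
    dB = dC + dL
    dBn = dB / n ** 2
    dE = (dC / n + math.sqrt(dS / (n * (n - 1))) + math.sqrt(dL / (n * (n - 1)))) / 3  # eq. (2.1)
    return dC, dS, dR, dL, dB, dBn, dE

# The '2 + 2' example (strokes s1..s4) recognized as '2 - 1²':
gt_nodes  = {'s1': '2', 's2': '+', 's3': '+', 's4': '2'}
gt_edges  = {('s2', 's3'): '+', ('s3', 's2'): '+',
             ('s1', 's2'): 'Right', ('s1', 's3'): 'Right',
             ('s2', 's4'): 'Right', ('s3', 's4'): 'Right'}
out_nodes = {'s1': '2', 's2': '1', 's3': '-', 's4': '2'}
out_edges = {('s1', 's2'): 'Right', ('s1', 's3'): 'Right',
             ('s2', 's4'): 'Superscript', ('s3', 's4'): 'Right'}
print(slg_metrics(gt_nodes, gt_edges, out_nodes, out_edges, seg_labels={'+'}))
# -> (2, 2, 1, 3, 5, 0.3125, 0.4694...)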
Mathematical expression recognition In this section, we first review the entire history of this research subject, and then only focus on more recent solutions which are provided as a comparison to the new architectures proposed in this thesis. Overall review Research on the recognition of math notation began in the 1960's [START_REF] Anderson | Syntax-directed recognition of hand-printed two-dimensional mathematics[END_REF], and several research publications are available in the following thirty years [START_REF] Chang | A method for the structural analysis of two-dimensional mathematical expressions[END_REF][START_REF] Martin | Computer input/output of mathematical expressions[END_REF][START_REF] Anderson | Two-dimensional mathematical notation[END_REF]. Since the 90's, with the large developments of touch screen devices, this field has started to be active, gaining amounts of research achievement and considerable attention from the research community. A number of surveys [START_REF] Blostein | Recognition of mathematical notation[END_REF][START_REF] Chan | Mathematical expression recognition: a survey[END_REF][START_REF] Tapia | A survey on recognition of on-line handwritten mathematical notation[END_REF][START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF], Mouchère et al., 2016] summarize the proposed techniques for math notation recognition. As described already in Section 1.2, ME recognition involves three interdependent tasks [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]: (1) Symbol segmentation, which consists in grouping strokes that belong to the same symbol; (2) symbol recognition, the task of labeling the symbol to assign each of them a symbol class; (3) structural analysis, its goal is to identify spatial relations between symbols and with the help of a grammar to produce a mathematical interpretation. These three tasks can be solved sequentially or jointly. Sequential solutions. In the early stages of the study, most of the proposed solutions [START_REF] Chou | Recognition of equations using a two-dimensional stochastic context-free grammar[END_REF][START_REF] Koschinski | Segmentation and recognition of symbols within handwritten mathematical expressions[END_REF], Winkler et al., 1995[START_REF] Lehmberg | A soft-decision approach for symbol segmentation within handwritten mathematical expressions[END_REF][START_REF] Matsakis | Recognition of handwritten mathematical expressions[END_REF][START_REF] Zanibbi | Recognizing mathematical expressions using tree transformation[END_REF][START_REF] Tapia | Recognition of on-line handwritten mathematical expressions using a minimum spanning tree construction and symbol dominance[END_REF][START_REF] Toyozumi | A study of symbol segmentation method for handwritten mathematical formula recognition using mathematical structure information[END_REF][START_REF] Tapia | Understanding mathematics: A system for the recognition of on-line handwritten mathematical expressions[END_REF][START_REF] Zhang | Using fuzzy logic to analyze superscript and subscript relations in handwritten mathematical expressions[END_REF][START_REF] Yu | A unified framework for symbol segmentation and recognition of handwritten mathematical expressions[END_REF] are sequential ones which treat the recognition problem as a two-step pipeline process, first symbol segmentation and classification, and then structural analysis. 
The task of structural analysis is performed on the basis of the symbol segmentation and classification result. Considerable works are done dedicated to each step. For segmentation, the proposed methods include Minimum Spanning Tree (MST) based method [START_REF] Matsakis | Recognition of handwritten mathematical expressions[END_REF], Bayesian framework [START_REF] Yu | A unified framework for symbol segmentation and recognition of handwritten mathematical expressions[END_REF], graph-based method [START_REF] Lehmberg | A soft-decision approach for symbol segmentation within handwritten mathematical expressions[END_REF][START_REF] Toyozumi | A study of symbol segmentation method for handwritten mathematical formula recognition using mathematical structure information[END_REF]] and so on. The symbol classifiers used consist of Nearest Neighbor, Hidden Markov Model, Multilayer Perceptron, Support Vector Machine, Recurrent neural networks and so on. For spatial relationship classification, the proposed features include symbol bounding box [START_REF] Anderson | Syntax-directed recognition of hand-printed two-dimensional mathematics[END_REF], relative size and position [START_REF] Aly | Statistical classification of spatial relationships among mathematical symbols[END_REF], and so on. The main drawback of these sequential methods is that the errors from symbol segmentation and classification will be propagated to structural analysis. In other words, symbol recognition and structural analysis are assumed as independent tasks in the sequential solutions. However, this assumption conflicts with the real case in which these three tasks are highly interdependent by nature. For instance, human beings recognize symbols with the help of structure, and vice versa. Integrated solutions. Considering the natural relationship between the three tasks, researchers mainly focus on integrated solutions recently, which performs the task of segmentation at the same time build the expression structure: a set of symbol hypotheses maybe generated and a structural analysis algorithm may select the best hypotheses while building the structure. The integrated solutions use contextual information (syntactic knowledge) to guide segmentation or recognition, preventing from producing invalid expressions like [a + b). These approaches take into account contextual information generally with grammar (string grammar [Yamamoto et al., 2006, Awal et al., 2014, Álvaro et al., 2014b, 2016[START_REF] Maclean | A new approach for recognizing handwritten mathematics using relational grammars and fuzzy sets[END_REF] and graph grammar [Celik andYanikoglu, 2011, Julca-Aguilar, 2016]) parsing techniques, producing expressions conforming to the rules of a manually defined grammar. String grammar parsing, along with graph grammar parsing, has a high time complexity in fact. In the next section we will analysis deeper these approaches. Instead of using grammar parsing technique, the new architectures proposed in this thesis include contextual information with bidirectional long short-term memory which can access the content from both the future and the past in an unlimited range. End-to-end neural network based solutions. Inspired by recent advances in image caption generation, some end-to-end deep learning based systems were proposed for ME recognition [START_REF] Deng | What you get is what you see: A visual markup decompiler[END_REF], Zhang et al., 2017]. 
These systems were developed from the attention-based encoder-decoder model which is now widely used for machine translation. They decompile an image directly into presentational markup such as L A T E X. However, considering we are given trace information in the online case, despite the final L A T E X string, it is necessary to decide a label for each stroke. This information is not available now in end-to-end systems. The recent integrated solutions In [Yamamoto et al., 2006], a framework based on stroke-based stochastic context-free grammar is proposed for on-line handwritten mathematical expression recognition. They model handwritten mathematical expressions with a stochastic context-free grammar and formulate the recognition problem as a search problem of the most likely mathematical expression candidate, which can be solved using the Cock Younger Kasami (CYK) algorithm. With regard to the handwritten expression grammar, the authors define production rules for structural relation between symbols and also for a composition of two sets of strokes to form a symbol. Figure 2.9 illustrates the process of searching the most likely expression candidate with Figure 2.9 -Example of a search for most likely expression candidate using the CYK algorithm. Extracted from [Yamamoto et al., 2006]. the CYK algorithm on an example of x y + 2. The algorithm which fill the CYK table from bottom to up is as following: • For each input stroke i, corresponding to cell M atrix(i, i) shown in Figure 2.9, the probability of each stroke label candidate is computed. This calculation is the same as the likelihood calculation in isolated character recognition. In this example, the 2 best candidates for the first stroke of the presented example are ')' with the probability of 0.2 and the first stroke of x (denoted as x 1 here) with the probability of 0.1. • In cell M atrix(i, i+1), the candidates for strokes i and i+1 are listed. As shown in cell M atrix (1,2) of the same example, the candidate x with the likelihood of 0.005 is generated with the production rule < x → x 1 x 2 , SameSymbol >. The structure likelihood computed using the bounding boxes is 0.5 here. Then the product of stroke and structure likelihoods is 0.1 × 0.1 × 0.5 = 0.005. • Similarly, in cell M atrix(i, i + k), the candidates for strokes from i to i + k are listed with the corresponding likelihoods. • Finally, the most likely EXP candidate in cell M atrix (1, n) is the recognition result. In this work, they assume that symbols are composed only of consecutive (in time) strokes. In fact, this assumption does not work with the cases when the delayed strokes take place. In [START_REF] Awal | A global learning approach for an online handwritten mathematical expression recognition system[END_REF]], the recognition system handles mathematical expression recognition as a simultaneous optimization of expression segmentation, symbol recognition, and 2D structure recognition under the restriction of a mathematical expression grammar. The proposed approach is a global strategy allowing learning mathematical symbols and spatial relations directly from complete expressions. The general architecture of the system in illustrated in Figure 2.10. First, a symbol hypothesis generator based on 2-D Figure 2.10 -The system architecture proposed in [START_REF] Awal | A global learning approach for an online handwritten mathematical expression recognition system[END_REF]. 
Extracted from [START_REF] Awal | A global learning approach for an online handwritten mathematical expression recognition system[END_REF]. dynamic programming algorithm provides a number of segmentation hypotheses. It allows grouping strokes which are not consecutive in time. Then they consider a symbol classifier with a reject capacity in order to deal with the invalid hypotheses proposed by the previous hypothesis generator. The structural costs are computed with Gaussian models which are learned from a training data set. The spatial information used are baseline position (y) and x-height (h) of one symbol or sub-expression hypothesis. The language model is defined by a combination of two 1-D grammars (horizontal and vertical). The production rules are applied successively until reaching elementary symbols, and then a bottom-up parse (CYK) is applied to construct the relational tree of the expression. Finally, the decision maker selects the set of hypotheses that minimizes the global cost function. A fuzzy Relational Context-Free Grammar (r-CFG) and an associated top-down parsing algorithm are proposed in [START_REF] Maclean | A new approach for recognizing handwritten mathematics using relational grammars and fuzzy sets[END_REF]. Fuzzy r-CFGs explicitly model the recognition process as a fuzzy relation between concrete inputs and abstract expressions. The production rules defined in this grammar have the form of: in this work is a tabular variant of Unger's method for CFG parsing [START_REF] Unger | A global parser for context-free phrase structure grammars[END_REF]. This process is divided into two steps: forest construction, in which a shared parse forest is created from the start non-terminal to the leafs that represents all recognizable parses of the input, and tree extraction, in which individual parse trees are extracted from the forest in decreasing order of membership grade. Figure 2.12 show an handwritten expression and a shared parse forest of it representing some possible interpretations. A 0 r ⇒ A 1 A 2 • • • A k , In [Álvaro et al., 2016], they define the statistical framework of a model based on Two-Dimensional Probabilistic Context-Free Grammars (2D-PCFGs) and its associated parsing algorithm. The authors also regard the problem of mathematical expression recognition as obtaining the most likely parse tree given a sequence of strokes. To achieve this goal, two probabilities are required, symbol likelihood and structural probability. Due to the fact that only strokes that are close together will form a mathematical symbol, a symbol likelihood model is proposed based on spatial and geometric information. Two concepts (visibility and closeness) describing the geometric and spatial relations between strokes are used in this work to characterize a set of possible segmentation hypotheses. Next, a BLSTM-RNN are used to calculate the probability that a certain segmentation hypothesis represents a math symbol. BLSTM possesses the ability to access context information over long periods of time from both past and future and is one of the state of the art models. With regard to the structural probability, both the probabilities of the rules of the grammar and a spatial relationship model which provides the probability p(r|BC) that two sub-problems B and C are arranged according to spatial relationship r are required. In order to train a statistical classifier, given two regions B and C, they define nine geometric features based on their bounding boxes (Figure 2.13). 
Then these nine features are rewrote as the feature vector h(B, C) representing a spatial relationship. Next, a GMM is trained with the labeled feature vector such that the probability of the spatial relationship model can be computed as the posterior probability provided by the GMM for class r. Finally, they define a CYK-based algorithm for 2D-PCFGs in the statistical framework. Unlike the former described solutions which are based on string grammar, in [START_REF] Julca-Aguilar | Recognition of Online Handwritten Mathematical Expressions using Contextual Information[END_REF], the authors model the recognition problem as a graph parsing problem. A graph grammar model for mathematical expressions and a graph parsing technique that integrates symbol and structure level information are proposed in this work. The recognition process is illustrated in Figure 2.14. Two main components are involved in this process: (1) hypotheses graph generator and (2) graph parser. The hypotheses graph generator builds a graph that defines the search space of the parsing algorithm and the graph parser does the parsing itself. In the hypotheses graph, vertices represent symbol hypotheses and edges represent relations [Álvaro et al., 2016] between symbols. The labels associated to symbols and relations indicate their most likely interpretations. Of course, these labels are the outputs of symbol classifier and relation classifier. The graph parser uses the hypotheses graph and the graph grammar to generate first a parse forest consisting of several parse trees, each one representing an interpretation of the input strokes as a mathematical expression, and then extracts a best tree among the forest as the final recognition result. In the proposed graph grammar, production rules have the form of A → B, defining the replacement of a graph by another graph. With regard to the parsing technique, they propose an algorithm based on the Unger's algorithm which is used for parsing strings [START_REF] Unger | A global parser for context-free phrase structure grammars[END_REF]. The algorithm presented in this work is a top-down approach, starting from the top vertex (root) to the bottom vertices. End-to-end neural network based solutions In [START_REF] Deng | What you get is what you see: A visual markup decompiler[END_REF], the proposed model WYGIWYS (what you get is what you see) is an extension of the attention-based encoder-decoder model. The structure of WYGIWYS is shown in Figure 2.15. As can be seen, given an input image, a Convolutional Neural Network (CNN) is applied first to extract image features. Then, for each row in the feature map, they use an Recurrent Neural Network (RNN) encoder to re-encodes it expecting to catch the sequential information. Next, the encoded features are decoded by an RNN decoder with a visual attention mechanism to generate the final outputs. In parallel to the work of [START_REF] Deng | What you get is what you see: A visual markup decompiler[END_REF], [START_REF] Zhang | A Tree-BLSTM based Recogniton System for Online Handwritten Mathematical Expression[END_REF]] also use the attention based encoder-decoder framework to translate MEs into L A T E X notations. Compared to the recent integrated solutions, the end-to-end neural network based solutions require no large amount of manual work for defining grammars or a high computational complexity for grammar parsing process, and achieve the state of the art recognition results. 
However, since trace information is available in the online case, it is necessary to decide a label for each stroke, beyond producing the final LaTeX string. This alignment is not available in current end-to-end systems.

Discussion

In this section, we first introduced the development of mathematical expression recognition in general, and then put the emphasis on the more recently proposed solutions. Rather than analyzing the advantages and disadvantages of the existing approaches, which rely on various grammars and their associated parsing techniques, the aim of this section is to provide a point of comparison for the new architectures proposed in this thesis. In spite of the considerable variety of methods for the three sub-tasks (symbol segmentation, symbol recognition and structural analysis), and of the various grammars and parsing techniques, the key idea behind the integrated techniques is to rely on explicit grammar rules to solve the ambiguity in symbol recognition and relation recognition. In other words, the existing solutions take contextual or global information into account generally with the help of a grammar. However, with either a string or a graph grammar, a large amount of manual work is needed to define the grammar, and the grammar parsing process has a high computational complexity. A BLSTM neural network is able to model dependencies in a sequence over indefinite time gaps, overcoming the short-term memory of classical recurrent neural networks. Thanks to this ability, BLSTM has achieved great success in sequence labeling tasks such as text and speech recognition. Instead of using a grammar parsing technique, the new architectures proposed in this thesis include contextual information with bidirectional long short-term memory. In [Álvaro et al., 2016], a BLSTM has been used as an elementary function, to recognize symbols or to control segmentation, itself included in an overall complex system. The goal of our work is to develop a new architecture where a recurrent neural network is the backbone of the solution. In the next chapter, we introduce how this advanced neural network takes contextual information into consideration for the problem of sequence labeling.

Sequence labeling with recurrent neural networks

This chapter focuses on sequence labeling using recurrent neural networks, which is the foundation of our work. Firstly, the concept of sequence labeling is introduced in Section 3.1, where we explain the goal of this task. Next, Section 3.2 introduces the classical structure of recurrent neural networks. This network can memorize contextual information, but the range of information it can access in practice is quite limited. Subsequently, long short-term memory is presented in Section 3.3; this architecture provides the ability to access information over long periods of time. Finally, we introduce how to apply recurrent neural networks to the task of sequence labeling, including the existing problems and the solution to them, i.e. the connectionist temporal classification technique. In this chapter, a considerable number of variables and formulas are involved in order to describe the content clearly, and likewise to extend the algorithms easily in later chapters. We use here the same notations as in [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF].
In fact, this chapter is a short version of Alex Graves' book «Supervised sequence labeling with recurrent neural networks». We use the same figures and similar outline to introduce this entire framework. Since the architecture of BLSTM and CTC is the backbone of our solution, thus we take a whole chapter to elaborate this topology to help to understand our work. Sequence labeling In machine learning, the term 'sequence labeling' encompasses all tasks where sequences of data are transcribed with sequences of discrete labels [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. Well known examples include handwriting and speech recognition (Figure 3.1), gesture recognition and protein secondary structure. In this thesis, we only consider supervised sequence labeling cases in which the ground-truth is provided during the training process. The goal of sequence labeling is to transcribe sequences of input data into sequences of labels, each label coming from a fixed alphabet. For example looking at the top row of Figure 3.1, we would like to assign the sequence "FOREIGN MINISTER" of which each label is from English alphabet, to the input signal on the left side. Suppose that X denotes a input sequence and l is the corresponding ground truth, being a sequence of labels, the set of training examples could be referred as T ra = {(X, l)}. The task is to use T ra to train a sequence labeling algorithm to label each input sequence in a test data set, as accurately as possible. In fact when people try to recognize a handwriting or speech signal, we focus on not only local input signal, but also a global, contextual information to help the transcription process. Thus, we hope the sequence labeling algorithm could have the ability also to take advantage of contextual information. Recurrent neural networks Artificial Neural Networks (ANNs) are computing systems inspired by the biological neural networks [START_REF] Jain | Artificial neural networks: A tutorial[END_REF]. It is hoped that such systems could possess the ability to learn to do tasks by considering some given examples. An ANN is a network of small units, joined to each other by weighted connections. Whether connections form cycles or not, usually we can divide ANNs into two classes: ANNs without cycles are referred to as Feed-forward Neural Networks (FNNs); ANNs with cycles, are referred to as feedback, recurrent neural networks (RNNs). The cyclical connections could model the dependency between past and future, therefore RNNs possess the ability to memorize while FNNs do not have memory capability. In this section, we will focus on recurrent networks with cyclical connections. Thanks to RNN's memory capability, it is suitable for sequence labeling task where the contextual information plays a key role. Many varieties of RNN were proposed, such as Elman networks, Jordan networks, time delay neural networks and echo state networks [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. We introduce here a simple RNN architecture containing only a single, self connected hidden layer (Figure 3.3). 
Topology In order to better understand the mechanism of RNNs, we first provide a short introduction to Multilayer Perceptron (MLP) [START_REF] Rumelhart | Learning internal representations by error propagation[END_REF][START_REF] Werbos | Generalization of backpropagation with application to a recurrent gas market model[END_REF][START_REF] Bishop | Neural networks for pattern recognition[END_REF] which is the most widely used form of FNNs. As illustrated in Figure 3.2, a MLP has an input layer, one or more hidden layers and an output layer. The S-shaped curves in the hidden and output layers indicate the application of 'sigmoidal' nonlinear activation functions. The number of units in the input layer is equal to the length of feature vector. Both the number of units in the output layer and the choice of output activation function depend on the task the network is applied to. When dealing with binary classification tasks, the standard configuration is a single unit with a logistic sigmoid activation. For classification problems with K > 2 classes, usually we have K output units with the soft-max function. Since there is no connection from past to future or future to past, MLP depends only on the current input to compute the output and therefore is not suitable for sequence labeling. Unlike the feed forward network architecture, in a neural network with cyclical connections presented in Figure 3.3, the connections from the hidden layer to itself (red) could model the dependency between past and future. However, the dependencies between different time-steps can not be seen clearly in this figure. Thus, we unfold the network along the input sequence to visualize them in Figure 3.4. Different with Figure 3.2 and 3.3 where each node is a single unit, here each node represents a layer of network units at a single time-step. The input at each time step is a vector of features; the output at each time step is a vector of probabilities regarding to different classes. With the connections weighted by 'w1' from the input layer to hidden layer, the current input flows to the current hidden layer; with the connections weighted by 'w2' from the hidden layer to itself, the information flows from the the hidden layer at t -1 to the hidden layer at t; with the connections weighted by 'w3' from the hidden layer to the output layer, the activation flows from the hidden layer to the output layer. Note that 'w1', 'w2' and 'w3' represent vectors of weights instead of single weight values, and they are reused for each time-step. Forward pass The input data flow from the input layer to hidden layer; the output activation of the hidden layer at t -1 flows to the hidden layer at t; the hidden layer sums up the information from two sources; finally the summed and processed information flows to the output layer. This process is referred to as the forward pass of RNN. Suppose that an RNN has I input units, H hidden units, and K output units, let w ij denote the weight of the connection from unit i to unit j, a t j and b t j represent the network input activation to unit j and the output activation of unit j at time t respectively. Specifically, we use use x t i to denote the input i value at time t. 
Considering an input sequence X of length T , the network input activation to the hidden units could be computed like: a t h = I i=1 w ih x t i + H h =1 w h h b t-1 h (3.1) In this equation, we can see clearly that the activation arriving at the hidden layer comes from two sources: (1) the current input layer through the 'w1' connections; (2) the hidden layer of previous time step through the 'w2' connections. The size of 'w1' and 'w2' are respectively size(w1) = I × H + 1(bias) and size(w2) = H × H. Then, the activation function θ h is applied: b t h = θ h (a t h ) (3.2) We calculate a t h and therefore b t h from t = 1 to T . This is a recursive process where a initial configuration is required of course. In this thesis, the initial value b 0 h is always set to 0. Now, we consider propagating the hidden layer output activation b t h to the output layer. The activation arriving at the output units can be calculated as following: a t k = H h=1 w hk b t h (3.3) The size of 'w3' is size(w3) = H × K. Then applying the activation function θ k , we get the output activation b t k of the output layer unit k at time t. We use a a special name y t k to represent it: y t k = θ k (a t k ) (3.4) We introduce the definition of the loss function in Section 3.4. Backward pass With the loss function, we could compute the distance between the network outputs and the ground truths. The aim of backward pass is to minimize the distance to train an effective neural network. The widely used solution is gradient descent of which the idea is to first calculate the derivative of the loss function with respect to each weight and then adjust the weights in the direction of negative slope to minimize the loss function [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. To compute the derivative of the loss function with respect to each weight in the network, the common technique used is known as Back Propagation (BP) [START_REF] Rumelhart | Learning internal representations by error propagation[END_REF][START_REF] Williams | Gradient-based learning algorithms for recurrent networks and their computational complexity[END_REF][START_REF] Werbos | Generalization of backpropagation with application to a recurrent gas market model[END_REF]. As there are recurrent connections in RNNs, researchers designed the special algorithms to calculate weight derivatives efficiently for RNNs, two well known methods being Real Time Recurrent Learning (RTRL) [START_REF] Robinson | The utility driven dynamic error propagation network[END_REF] and Back Propagation Through Time (BPTT) [Williams andZipser, 1995] [Werbos, 1990]. Like Alex Graves, we introduce BPTT only as it is both conceptually simpler and more efficient in computation time. We define δ t j = ∂L ∂a t j (3.5) Thus the partial derivatives of the loss function L with respect to the inputs of the output units a t k is δ t k = ∂L ∂a t k = K k =1 ∂L ∂y t k ∂y t k ∂a t k (3.6) Afterwards, the error will be back propagated to the hidden layer. Note that the loss function depends on the activation of the hidden layer not only through its influence on the output layer, but also through its influence on the hidden layer at the next time-step. Thus, δ t h = ∂L ∂a t h = ∂L ∂b t h ∂b t h ∂a t h = ∂b t h ∂a t h ( K k=1 ∂L ∂a t k ∂a t k ∂b t h + H h =1 ∂L ∂a t+1 h ∂a t+1 h ∂b t h ) (3.7) δ t h = θ h a t h K k=1 δ t k w hk + H h =1 δ t+1 h w hh (3.8) δ t h terms can be calculated recursively from T to 1. 
Of course this requires the initial value δ T +1 h to be set. As there is no error coming from beyond the end of the sequence, δ T +1 h = 0 ∀h. Finally, noticing that the same weights are reused at every time-step, we sum over the whole sequence to get the derivatives with respect to the network weights ∂L ∂w ij = T t=1 ∂L ∂a t j ∂a t j ∂w ij = T t=1 δ t j b t i (3.9) The last step is to adjust the weights based on the derivatives we have computed above. It is an easy procedure and we do not discuss it here. Bidirectional networks The RNNs we have discussed only possess the ability to access the information from past, not the future. In fact, future information is important to sequence labeling task as well as the past context. For example when we see the left bracket '(' in the handwritten expression 2(a + b), it seems easy to answer '1', 'l' or '(' if only focusing on the signal on the left side of '('. But if we consider the signal on the right side also, the answer is straightforward, being '(' of course. An elegant solution to access context from both directions is Bidirectional Recurrent Neural Networks (BRNNs) (BRNNs) [START_REF] Schuster | Bidirectional recurrent neural networks[END_REF][START_REF] Schuster | On supervised learning from sequential data with applications for speech recognition[END_REF][START_REF] Baldi | Exploiting the past and the future in protein secondary structure prediction[END_REF]. Figure 3.5 shows an unfolded bidirectional network. As we can see, there are 2 separate recurrent hidden layers, forward and backward, each of them process the input sequence from one direction. No information flows between the forward and backward hidden layers and these two layers are both connected to the same output layer. With the bidirectional structure, we could use the complete past and future context to help recognizing each point in the input sequence. Long short-term memory (LSTM) In Section 3.2, we discussed RNNs which have the ability to access contextual information from one direction and BRNNs which have the ability to visit bidirectional contextual information. Due to their memory capability, lots of applications are available in sequence labeling tasks. However, there is a problem that the range of context that can be in practice accessed is quite limited. The influence of a given input on the hidden layer, and therefore on the network output, either decays or blows up exponentially as it cycles around the network's recurrent connections [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. This effect is often referred to in the literature as the vanishing gradient problem [START_REF] Hochreiter | Gradient flow in recurrent nets: the difficulty of learning long-term dependencies[END_REF][START_REF] Bengio | Learning long-term dependencies with gradient descent is difficult[END_REF]. 
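A toy numerical experiment (not part of any recognition system) makes the effect visible: during back propagation through time, the error vector is repeatedly multiplied by the recurrent weight matrix, so its norm shrinks or grows roughly geometrically with the number of time-steps. The sizes and weight scales below are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
H, T = 20, 100                     # number of hidden units, number of time-steps
delta = rng.normal(size=H)         # an arbitrary error vector at the last time-step

for scale in (0.8, 1.2):           # arbitrary scales of the recurrent weight matrix
    W_hh = scale * rng.normal(size=(H, H)) / np.sqrt(H)
    d = delta.copy()
    for t in range(T):
        d = W_hh.T @ d             # one step of back propagation through time
                                   # (linear activation for simplicity; a sigmoid
                                   # derivative would shrink the error even faster)
    print(scale, np.linalg.norm(delta), np.linalg.norm(d))
# with the smaller weights the error norm has almost vanished after 100 steps,
# while with the larger weights it has grown by several orders of magnitude.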
To address this problem, many methods were proposed such as simulated annealing and discrete error propagation [START_REF] Bengio | Learning long-term dependencies with gradient descent is difficult[END_REF], explicitly introduced time delays [START_REF] Lang | A time-delay neural network architecture for isolated word recognition[END_REF][START_REF] Lin | Learning long-term dependencies in narx recurrent neural networks[END_REF] or time constants [START_REF] Mozer | Induction of multiscale temporal structure[END_REF], and hierarchical sequence compression [START_REF] Schmidhuber | Learning complex, extended sequences using the principle of history compression[END_REF]. In this section, we will focus on Long Short-Term Memory (LSTM) architecture [START_REF] Hochreiter | Long short-term memory[END_REF]]. Topology We replace the summation unit in the hidden layer of a standard RNN with memory block (Figure 3.6), generating an LSTM network. There are three gates (input gate, forget gate and output gate) and one or more cells in a memory block. Figure 3.6 shows a LSTM memory block with one cell. We list below the activation arriving at three gates at time t: Input gate: the current input, the activation of hidden layer at time t -1, the cell state at time t -1 Forget gate: the current input, the activation of hidden layer at time t -1, the cell state at time t -1 Output gate: the current input, the activation of hidden layer at time t -1, the current cell state The connections shown by dashed lines from the cell to three gates are named as 'peephole' connections which are the only weighted connections inside the memory block. Just because of the three 'peephole's, the cell state is accessible to the three gates. These three gates sum up the information from inside and outside the block with different weights and then apply gate activation function 'f', usually the logistic sigmoid. Thus, the gate activation are between 0 (gate closed) and 1 (gate open). We present below how these three gates control the cell via multiplications (small black circles): Input gate: the input gate multiplies the input of the cell. The input gate activation decides how much information the cell could receive from the current input layer, 0 representing no information and 1 repre-Figure 3.6 -LSTM memory block with one cell. Extracted from [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. senting all the information. Forget gate: the forget gate multiplies the cell's previous state. The forget gate activation decides how much context should the cell memorize from its previous state, 0 representing forgetting all and 1 representing memorizing all. Output gate: the output gate multiplies the output of the cell. It controls to which extent the cell will output its state, 0 representing nothing and 1 representing all. The cell input and output activation functions ('g' and 'h') are usually tanh or logistic sigmoid, though in some cases 'h' is the identity function [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. Output gate controls to which extent the cell will output its state, and it is the only outputs from the block to the rest of the network. As we discussed, the three control gates could allow the cell to receive, memorize and output information selectively, thereby easing the vanishing gradient problem. 
For example the cell could memorize totally the input at first point as long as the forget gates are open and the input gates are closed at the following time steps. Forward pass As in [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF], we only present the equations for a single memory block since it is just a repeated calculation for multiple blocks. Let w ij denote the weight of the connection from unit i to unit j, a t j and b t j represent the network input activation to unit j and the output activation of unit j at time t respectively. Specifically, we use use x t i to denote the input i value at time t. Considering a recurrent network with I input units, K output units and H hidden units, the subscripts ς, φ, ω represent the input, forget and output gate and the subscript c represents one of the C cells. Thus, the connections from the input layer to the three gates are weighted by w iς , w iφ , w iω respectively; the recurrent connections to the three gates are weighted by w hς , w hφ , w hω ; the peep-hole weights from cell c to the input, forget, output gates can be denoted as w cς , w cφ , w cω . s t c is the state of cell c at time t. We use f to denote the activation function of the gates, and g and h to denote respectively the cell input and output activation functions. b t c is the only output from the block to the rest of the network. As with the standard RNN, the forward pass is a recursive calculation by starting at t = 1. All the related initial values are set to 0. Equations are given below: Input gates a t ς = I i=1 w iς x t i + H h=1 w hς b t-1 h + C c=1 w cς s t-1 c (3.10) b t ς = f (a t ς ) (3.11) Forget gates a t φ = I i=1 w iφ x t i + H h=1 w hφ b t-1 h + C c=1 w cφ s t-1 c (3.12) b t φ = f (a t φ ) (3.13) Cells a t c = I i=1 w ic x t i + H h=1 w hc b t-1 h (3.14) s t c = b t φ s t-1 c + b t ς g(a t c ) (3.15) Output gates a t ω = I i=1 w iω x t i + H h=1 w hω b t-1 h + C c=1 w cω s t c (3.16) b t ω = f (a t ω ) (3.17) Cell Outputs b t c = b t ω h(s t c ) (3.18) Backward pass As can be seen in Figure 3.6, a memory block has 4 interfaces receiving inputs from outside the block, 3 gates and one cell. Considering the hidden layer, the total number of input interfaces is defined as G. For the memory block consisting only one cell, G is equal to 4H. We recall Equation 3.5 δ t j = ∂L ∂a t j (3.19) Furthermore, define t c = ∂L ∂b t c t s = ∂L ∂s t c (3.20) Cell Outputs t c = K k=1 w ck δ t k + G g=1 w cg δ t+1 g (3.21) As b t c is propagated to the output layer and the hidden layer of next time step in the forward pass, when computing t c , it is natural to receive the derivatives from both the output layer and the next hidden layer. G is introduced for the convenience of representation. Output gates δ t w = f (a t w ) C c=1 h(s t c ) t c (3.22) States t s = b t w h (s t c ) t c + b t+1 φ t+1 s + w cς δ t+1 ς + w cφ δ t+1 φ + w cω δ t ω (3.23) Cells δ t c = b t ς g (a t c ) t s (3.24) Forget gates δ t φ = f (a t φ ) C c=1 s t-1 c t s (3.25) Input gates δ t ς = f (a t ς ) C c=1 g(a t c ) t s (3.26) Variants There exists many variants of the basic LSTM architecture. Globally, they can be divided into chainstructured LSTM and non-chain-structured LSTM. Bidirectional LSTM Replacing the hidden layer units in BRNN with LSTM memory blocks generates Bidirectional LSTM [START_REF] Graves | Framewise phoneme classification with bidirectional lstm networks[END_REF]. 
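Before describing the bidirectional variant in more detail, the forward pass of a single one-cell memory block, equations (3.10) to (3.18), can be summarized by the short sketch below. It is only an illustration: the bias terms are omitted, f is taken as the logistic sigmoid, g and h as tanh, and the weight names (w_xi, w_hi, w_ci, ...) are ours.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_block_step(x_t, b_prev, s_prev, w):
    # One forward step of a single-cell memory block (equations (3.10)-(3.18)).
    # x_t: input vector of size I; b_prev: outputs of the H hidden blocks at t-1;
    # s_prev: previous cell state (a scalar here); w: dictionary of weight vectors
    # (w_x*: input weights, w_h*: recurrent weights, w_c*: scalar peephole weights).
    a_i = w['w_xi'] @ x_t + w['w_hi'] @ b_prev + w['w_ci'] * s_prev   # (3.10) input gate
    b_i = sigmoid(a_i)                                                # (3.11)
    a_f = w['w_xf'] @ x_t + w['w_hf'] @ b_prev + w['w_cf'] * s_prev   # (3.12) forget gate
    b_f = sigmoid(a_f)                                                # (3.13)
    a_c = w['w_xc'] @ x_t + w['w_hc'] @ b_prev                        # (3.14) cell input
    s_t = b_f * s_prev + b_i * np.tanh(a_c)                           # (3.15) new cell state
    a_o = w['w_xo'] @ x_t + w['w_ho'] @ b_prev + w['w_co'] * s_t      # (3.16) output gate
    b_o = sigmoid(a_o)                                                # (3.17)
    b_c = b_o * np.tanh(s_t)                                          # (3.18) block output
    return b_c, s_t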
LSTM network processes the input sequence from past to future while Bidirectional LSTM, consisting of 2 separated LSTM layers, models the sequence from two opposite directions (past to future and future to past) in parallel. Both of 2 LSTM layers are connected to the same output layer. With this setup, complete long-term past and future context is available at each time step for the output layer. Deep BLSTM DBLSTM [START_REF] Graves | Hybrid speech recognition with deep bidirectional lstm[END_REF] can be created by stacking multiple BLSTM layers on top of each other in order to get higher level representation of the input data. As illustrated in Figure 3.7, the outputs of 2 opposite hidden layer at one level are concatenated and used as the input to the next level. Non-chain-structured LSTM A limitation of the network topology described thus far is that they only allow for sequential information propagation (as shown in Figure 3.8a) since the cell contains a single recurrent connection (modulated by a single forget gate) to its own previous value. Recently, research on LSTM has been beyond sequential structure. The one-dimensional LSTM was extended to n dimensions by using n recurrent connections (one for each of the cell's previous states along every dimension) with n forget gates. It is named Multidimensional LSTM (MDLSTM) dedicated to the graph structure of an n-dimensional grid such as images [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. In [START_REF] Tai | Improved semantic representations from tree-structured long short-term memory networks[END_REF], the basic LSTM architecture was extend to tree structures, the Child-sum Tree-LSTM and the N-ary Tree-LSTM, allowing for richer network topology (Figure 3.8b) where each unit is able to incorporate information from multiple child units. In parallel to the work in [START_REF] Tai | Improved semantic representations from tree-structured long short-term memory networks[END_REF], [START_REF] Zhu | Long short-term memory over recursive structures[END_REF] explores the similar idea. The DAG-structured LSTM was proposed for semantic compositionality [START_REF] Zhu | Dag-structured long short-term memory for semantic compositionality[END_REF]. In later chapter, we will extend the chain-structured BLSTM to tree-based BLSTM which is similar to the above mentioned work, and apply this new network model for online math expression recognition. Connectionist temporal classification (CTC) RNNs' memory capability greatly meet the sequence labeling tasks where the context is quite important. To apply this recurrent network into sequence labeling, at least a loss function should be defined for the training process. In the typical frame wise training method, we need to know the ground truth label for each time step to compute the errors which means pre-segmented training data is required. The network is trained to make correct label prediction at each point. However, either the pre-segmentation or making label prediction at each point, both are large burdens to users or networks. The technique of CTC was proposed to solve these two points. It is specifically designed for sequence labeling problems where the alignment between the inputs and the target labels is unknown. By introducing an additional 'blank' class, CTC allows the network to make label predictions at some points instead of each point in the input sequence, so long as the overall sequence of character labels is correct. 
We introduce CTC briefly here; for a more detailed description, refer to A. Graves' book [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. From outputs to labelings CTC consists of a soft max output layer with one more unit (blank) than there are labels in alphabet. Suppose the alphabet is A (|A| = N ), the new extended alphabet is A which is equal to A ∪ [blank]. Let y t k denote the probability of outputting the k label of A at the t time step given the input sequence X of length T , where k is from 1 to N + 1 and t is from 1 to T . Let A T denote the set of sequences over A with length T and any sequence π ∈ A T is referred to as a path. Then, assuming the output probabilities at each time-step to be independent of those at other time-steps, the probability of outputting a sequence π would be: p(π|X) = T t=1 y t πt (3.27) The next step is from π to get the real possible labeling of X. A many-to-one function F : A T → A ≤T is defined from the set of paths onto the set of possible labeling of X to do this task. Specifically, first remove the repeated labels and then the blanks (-) from the paths. For example considering an input sequence of length 11, two possible paths could be cc --aaa -tt-, c ---aa --ttt. The mapping function works like: F (cc --aaa -tt-) = F (c ---aa --ttt) = cat. Since the paths are mutually exclusive, the probability of a labeling sequence l ∈ A ≤T can be calculated by summing the probabilities of all the paths mapped onto it by F : p(l|X) = π∈F -1 (l) p(π|X) (3.28) Forward-backward algorithm In section 3.4.1, we defined the probability p(l|X) as the sum of the probabilities of all the paths mapped onto l. The calculation seems to be problematic because the number of paths grows exponentially with the length of the input sequence. Fortunately it can be solved with a dynamic-programming algorithm similar to the forward-backward algorithm for Hidden Markov Model (HMM) [START_REF] Bourlard | Connectionist speech recognition: a hybrid approach[END_REF]. Consider a modified label sequence l with blanks added to the beginning and the end of l, and inserted between every pair of consecutive labels. Suppose that the length of l is U , apparently the length of l is U = 2U + 1. For a labeling l, let the forward variable α(t, u) denote the summed probability of all length t paths that are mapped by F onto the length u/2 prefix of l, and let the set V (t, u) be equal to {π ∈ A t : F (π) = l 1:u/2 , π t = l u }, where u is from 1 to U and u/2 is rounded down to an integer value. Thus: α(t, u) = π∈V (t,u) t i=1 y i π i (3.29) All the possible paths mapped onto l start with either a blank (-) or the first label (l 1 ) of l, so we have the formulas below: α(1, 1) = y 1 - (3.30) α(1, 2) = y 1 l 1 (3.31) α(1, u) = 0, ∀u > 2 (3.32) In fact, the forward variables at time t can be calculated recursively from those at time t -1. α(t, u) = y t l u u i=f (u) α(t -1, i), ∀t > 1 (3.33) where f (u) = u -1 if l u = blank or l u-2 = l u u -2 otherwise (3.34) Note that α(t, u) = 0, ∀u < U -2(T -t) -1 (3.35) Given the above formulation, the probability of l can be expressed as the sum of the forward variables with and without the final blank at time T . p(l|X) = α(T, U ) + α(T, U -1) (3.36) Figure 3.9 illustrates the CTC forward algorithm. Similarly, we define the backward variable β(t, u) as the summed probabilities of all paths starting at t + 1 that complete l when appended to any path contributing to α(t, u). 
Let W (t, u) = {π ∈ A T -t : F (π + π) = l, ∀π ∈ V (t, u)} denote the set of all paths starting at t + 1 that complete l when appended to any path contributing to α(t, u). Thus: β(t, u) = π∈W (t,u) T -t i=1 y t+i π i (3.37) The formulas below are used for the initialization and recursive computation of β(t, u): β(T, U ) = 1 (3.38) β(T, U -1) = 1 (3.39) β(T, u) = 0, ∀u < U -1 (3.40) β(t, u) = g(u) i=u β(t + 1, i)y t+1 l i (3.41) where g(u) = u + 1 if l u = blank or l u+2 = l u u + 2 otherwise (3.42) Note that β(t, u) = 0, ∀u > 2t (3.43) If we reverse the direction of the arrows in Figure 3.9, it comes to be an illustration of the CTC backward algorithm. Loss function The CTC loss function L(S) is defined as the negative log probability of correctly labeling all the training examples in some training set S. Suppose that z is the ground truth labeling of the input sequence X, then: L(S) = -ln (X,z)∈S p(z|X) = - (X,z)∈S ln p(z|X) (3.44) BLSTM networks can be trained to minimize the differentiable loss function L(S) using any gradient-based optimization algorithm. The basic idea is to find the derivative of the loss function with respect to each of the network weights, then adjust the weights in the direction of the negative gradient. The loss function for any training sample is defined as: L(X, z) = -ln p(z|X) (3.45) and therefore L(S) = (X,z)∈S L(X, z) (3.46) The derivative of the loss function with respect to each network weight can be represented as: ∂L(S) ∂w = (X,z)∈S ∂L(X, z) ∂w (3.47) The forward-backward algorithm introduced in Section 3.4.2 can be used to compute L(X, z) and the gradient of it. We only provide the final formula in this thesis and the process of derivation can be found in [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. L(X, z) = -ln |z | u=1 α(t, u)β(t, u) (3.48) To find the gradient, the first step is to differentiate L(X, z) with respect to the network outputs y t k : ∂L(X, z) ∂y t k = - 1 p(z|X)y t k u∈B(z,k) α(t, u)β(t, u) (3.49) where B(z, k) = {u : z u = k} is the set of positions where label k occurs in z . Then we continue to backpropagate the loss through the output layer: ∂L(X, z) ∂a t k = y t k - 1 p(z|X) u∈B(z,k) α(t, u)β(t, u) (3.50) and finally through the entire network during training. Decoding We discuss above how to train a RNN with CTC technique, and the next step is to label some unknown input sequence X in the test set with the trained model by choosing the most probable labeling l * : l * = arg max l p(l|X) (3.51) The task of labeling unknown sequences is denoted as decoding, being a terminology coming from hidden Markov models (HMMs). In this section, we will introduce in brief several approximate methods that perform well in practice. Likewise, we refer the interested readers to [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF] for the detailed description. We also design new decoding methods which are suitable to the tasks of this thesis in later chapters. Best path decoding Best path decoding is based on the assumption that the most probable path corresponds to the most probable labeling l * ≈ F (π * ) (3.52) where π * = arg max π p(π|X). It is simple to find π * , just concatenating the most active outputs at each time-step. However best path decoding could lead to errors in some cases when a label is weakly predicted for several successive time-steps. Figure 3.10 illustrates one of the failed cases. 
In this simple case with just two time steps, the most probable path found by best path decoding is '--', with probability 0.42 = 0.7 × 0.6, and therefore the final labeling is 'blank'. In fact, the summed probability of the paths corresponding to the labeling 'A' is 0.58, greater than 0.42.

Prefix search decoding

Prefix search decoding is a best-first search through the tree of labelings, where the children of a given labeling are those that share it as a prefix. At each step the search extends the labeling whose children have the largest cumulative probability. As can be seen in Figure 3.11, there are two types of nodes in this tree: end nodes ('e') and extending nodes. An extending node extends the prefix at its parent node, and the number above it is the total probability of all labelings beginning with that prefix. An end node denotes that the labeling ends at its parent, and the number above it is the probability of the single labeling ending at its parent. At each iteration, we explore the extensions of the most probable remaining prefix. The search ends when a single labeling is more probable than any remaining prefix. Given enough time, prefix search decoding finds the most probable labeling. However, the number of prefixes it must expand grows exponentially with the input sequence length, which largely limits the feasibility of its application.

Constrained decoding

Constrained decoding refers to the situation where we constrain the output labelings according to some predefined grammar. For example, in word recognition, the final transcriptions are usually required to form sequences of dictionary words. Here, we only consider single-word decoding, which means all word-to-word transitions are forbidden. With regard to single word recognition, if the number of words in the target sequence is fixed, one possible method is the following: considering an input sequence X, for each word wd in the dictionary, we first calculate the sum p(wd|X) of the probabilities of all the paths π which can be mapped onto wd, using the forward-backward algorithm described in Section 3.4.2; then, we assign to X the word holding the maximum probability.

II Contributions

4 Mathematical expression recognition with single path

As is well known, the BLSTM network with a CTC output layer has achieved great success in sequence labeling tasks, such as text and speech recognition. This success is due to the LSTM's ability to capture long-term dependencies in a sequence and to the effectiveness of the CTC training method. In this chapter, we explore the idea of using the sequence-structured BLSTM with a CTC stage to recognize 2-D handwritten mathematical expressions (Figure 4.1). CTC allows the network to make label predictions at any point in the input sequence, so long as the overall sequence of labels is correct. It is therefore not well suited to our case, in which a relatively precise alignment between the input and the output is required. Thus, a local CTC methodology is proposed, aiming to constrain the outputs to emit the same non-blank label at least once, and possibly several times, within a given stroke. This chapter is organized as follows: Section 4.1 introduces the proposal of building a stroke label graph from a sequence of labels, along with the limitations existing at this stage.
Then, the entire process of generating the sequence of labels with BLSTM and local CTC given the input is orderly presented in detail, including firstly feeding the inputs of BLSTM, then the training and recognition stages. The experiments and discussion are introduced in Section 4.3 and Section 4.4 respectively. From single path to stroke label graph This section will be focused on introducing the idea of building SLG from a single path. First, a classification of the degree of complexity of math expressions will be given to help understanding the different difficulties and the cases that could or could not be solved by the proposed approach. Complexity of expressions Expressions could be divided into two groups: ( 1 The proposed idea Currently in CROHME, SLG is the official format to represent the ground-truth of handwritten math expressions and also for the recognition outputs. The recognition system proposed in this thesis is aiming to output the SLG directly for each input expression. As a strict expression, we use 'correct SLG' to denote the SLG which equals to the ground truth, and 'valid SLG' to represent the graph where double-direction edge corresponds to segmentation information and all strokes (nodes) belonging to one symbol have the same input and output edges. In this section, we explain how to build a valid SLG from a sequence of strokes. An input handwritten mathematical expression consists of one or more strokes. The sequence of strokes in an expression can be described as S = (s 1 , ..., s n ). For i < j, we assume s i has been entered before s j . A path (different from the notation within the CTC part) in SLG can be defined as Φ i = (n 0 , n 1 , n 2 , ..., n e ), where n 0 is the starting node and n e is the end node. The set of nodes of Φ i is n(Φ i ) = {n 0 , n 1 , n 2 , ..., n e } and the set of edges of Φ i is e(Φ i ) = {n 0 → n 1 , n 1 → n 2 , ..., n e-1 → n e }, where n i → n i+1 denotes the edge from n i to n i+1 . In fact, the sequence of strokes described as S = (s 1 , ..., s n ) is exactly the path following stroke writing order (called time path, Φ t ) in SLG. Still taking '2 + 2' as example, the time path is presented with red color in Figure 4.3a. If all nodes and edges from Φ t are well classified during the recognition process, we could obtain a chain-SLG as the Fig 4 .3b. We propose to get a complete (i.e. valid) SLG from Φ t by adding the edges which can be deduced from the labeled path to obtain a coherent SLG as depicted on Figure 4.3c. The process can be seen as: (1) complete the segmentation edges between Considering both the nodes and edges, we rewrite the time path Φ t shown in Figure 4.3b as the format of (s1, s1 → s2, s2, s2 → s3, s3, s3 → s4, s4) labeled as (2, R, +, +, +, R, 2). This sequence alternates the node labels {2, +, +, 2} and the edge labels {R, +, R}. Given the labeled sequence (2, R, +, +, +, R, 2), the information that s2 and s3 belong to the same symbol + can be derived. With the rule that doubledirection edge represents segmentation information, the edge from s3 to s2 will be added automatically. According to the rule that all strokes in a symbol have the same input and output edges, the edges from s1 to s3 and from s2 to s4 will be added automatically. The added edges are shown in bold in Figure 4.3c. In this case a correct SLG is built from Φ t . 
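For illustration, this completion step can be sketched as follows. The snippet is only a schematic version of the post-processing described above: it assumes the labeled time path is given as an alternating list of node and edge labels, groups consecutive strokes sharing the same symbol label into segments, and then propagates the segmentation and relationship edges.

```python
def complete_slg(time_path_labels):
    """Build a stroke label graph from a labeled time path.

    time_path_labels alternates node and edge labels, e.g.
    ['2', 'R', '+', '+', '+', 'R', '2'] for the expression '2 + 2'.
    Returns (node_labels, edges) where edges maps (i, j) to a label.
    """
    nodes = time_path_labels[0::2]                 # stroke (node) labels
    links = time_path_labels[1::2]                 # labels of consecutive edges

    # 1. group consecutive strokes into symbols (segmentation)
    symbols, current = [], [0]
    for i, lab in enumerate(links):
        if lab == nodes[i] and lab == nodes[i + 1]:   # same-symbol edge
            current.append(i + 1)
        else:
            symbols.append(current)
            current = [i + 1]
    symbols.append(current)

    edges = {}
    # 2. double-direction edges inside every symbol
    for strokes in symbols:
        for a in strokes:
            for b in strokes:
                if a != b:
                    edges[(a, b)] = nodes[a]

    # 3. all strokes of a symbol share the input/output edges of the symbol
    for k in range(len(symbols) - 1):
        rel = links[symbols[k][-1]]                # edge between successive symbols
        if rel != '_':                             # '_' means no relationship
            for a in symbols[k]:
                for b in symbols[k + 1]:
                    edges[(a, b)] = rel
    return nodes, edges

# '2 + 2': the call below adds the edge s3 -> s2 labeled '+' and the
# edges s1 -> s3 and s2 -> s4 labeled 'R' (0-based indices in the code).
nodes, edges = complete_slg(['2', 'R', '+', '+', '+', 'R', '2'])
```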
Our proposal of building SLG from the time path works well on chain-SRT expressions as long as each symbol is written successively and the symbols in such kind of expressions are entered following the order from the root to the leaf in SRT. Successful cases include linear expressions as 2 + 2 mentioned previously and a part of 2-D expressions such as P eo shown in Figure 4.4a. The sequence of strokes and edges is (P, P, P, Superscript, e, R, o). All the spatial relationships are covered in it and naturally a correct SLG can be generated. Usually users enter the expression P eo following the order of P, e, o. Yet the input order of e, o, P could be also possible. For this case, the corresponding sequence of strokes and edges is (e, R, o, _, P, P, P ). Since there is no edge from o to P in SLG, we use _ to represent it. Apparently, it is not possible to build a complete and correct SLG with this sequence of labels where the Superscript relationship from P to e is missing. As a conclusion, for a chain-SRT expression written with specific order, a correct SLG could be built using the time path. For those 2-D expressions of which the SRTs are beyond of the chain structure, the proposal presents unbreakable limitations. Figure 4.4c presents a failed case. According to time order, 2 and h are neighbors but there is no edge between them as can be seen on Figure 4.4d. In the best case the system can output a sequence of stroke and edge labels (r, Superscript, 2, _, h). The Right relationship existing between r and h drawn with red color in Figure 4.4d is missing in the previous sequence. It is not possible to build the correct SLG with (r, Superscript, 2, _, h). If we change the writing order, first r, h and then 2, the time sequence will be (r, Right, h, _, 2). Yet, we still can not build a correct SLG with Superscript relationship missing. Being aware of this limitation, the 1-D time sequence of strokes is used to train the BLSTM and the outputted sequence of labels during recognition will be used to generate a valid SLG graph. Detailed Implementation An online mathematical expression is a sequence of strokes described as S = (s 1 , ..., s n ). In this section, we present the process to generate the above-mentioned 1-D sequence of labels from S with the BLSTM and local CTC model. CTC layer only outputs the final sequence of labels while the alignment between the inputs and the labels is unknown. BLSTM with CTC model may emit the labels before, after or during the segments (strokes). Furthermore, it tends to glue together successive labels that frequently co-occur [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. However, the label of each stroke is required to build SLG, which means the alignment information between a sequence of strokes and a sequence of labels should be provided. Thus, we propose local CTC here, constraining the network to emit the label during the segment (stroke), not before or after. First part is to feed the inputs of the BLSTM with S. Then, we focus on the network training process-local CTC methodology. Lastly, the recognition strategies adopted in this chapter will be explained in detail. BLSTM Inputs To feed the inputs of the BLSTM, it is important to scan the points belonging to the strokes themselves (on-paper points) as well as the points separating one stroke from the next one (in-air points). 
We expect that the visible strokes will be labeled with corresponding symbol labels and that the non-visible strokes connecting two visible strokes will be assigned with one of the possible edge labels (could be relationship label, symbol label or '_'). Thus, besides re-sampling points from visible strokes, we also re-sample points from the straight line which links two visible strokes, as can be seen in Figure 4.5. In the rest of this thesis, Given each expression, we first re-sampled points both from visible strokes and invisible strokes which connects two successive visible strokes in the time order. 1-D unlabeled sequence can be described as {strokeD 1 , strokeU 2 , strokeD 3 , strokeU 4 , ..., strokeD K } with K being the number of re-sampled strokes. Note that if s is the number of visible strokes in this path, K = 2 * s -1. Each stroke (strokeD or strokeU ) consists of one or more points. At a time-step, the input provided to the BLSTM is the feature vector extracted from one point. Without CTC output layer, the ground-truth of every point is required for BLSTM training process. With CTC layer, only the target labels of the whole sequence is needed, the pre-segmented training data is not required. In this chapter, a local CTC technology is proposed and the ground-truth of each stroke is required. The label of strokeD i should be assigned with the label of the corresponding node in SLG; the label of strokeU i should be assigned with the label of the corresponding edge in SLG. If no corresponding edge exists, the label N oRelation will be defined as '_'. Features A stroke is a sequence of points sampled from the trajectory of a writing tool between a pen-down and a pen-up at a fixed interval of time. Then an additional re-sampling is performed with a fixed spatial step to get rid of the writing speed. The number of re-sampling points depends on the size of expression. For each expression, we re-sample with 10 × (length/avrdiagonal) points. Here, length refers to the length of all the strokes in the path (including the gap between successive strokes) and avrdiagonal refers to the average diagonal of the bounding boxes of all the strokes in an expression. Since the features used in this work are independent of scale, the operation of re-scaling can be omitted. Subsequently, we compute five local features per point, which are quite close to the state of art [Álvaro et al., 2013[Álvaro et al., , Awal et al., 2014]]. For every point p i (x, y) we obtained 5 features (see Figure 4.6a): [sin θ i , cos θ i , sin φ i , cos φ i , P enU D i ] with: • sin θ i , cos θ i are the sine and cosine directors of the tangent of the stroke at point p i (x, y); • φ i = ∆θ i , defines the change of direction at point p i (x, y); • P enU D i refers to the state of pen-down or pen-up. Even though BLSTM can access contextual information from past and future in a long range, it is still interesting to see if a better performance is reachable when contextual features are added in the recognition task. Thus, we extract two contextual features for each point (see Figure 4.6b): [sin ψ i , cos ψ i ] with: • sin ψ i , cos ψ i are the sine and cosine directors of the vector from the point p i (x, y) to its closest pen-down point which is not in the current stroke. For the single-stroke expressions, sin ψ i = 0, cos ψ i = 0. Note that the proposed features are size-independent and position-independent characteristics, therefore we omit the normalization process in this thesis. 
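As an illustration, the five local descriptors can be computed per point as in the sketch below. This is a simplified version only: it assumes each re-sampled path is given as a list of (x, y, pen_down) tuples, and the exact estimation of the tangent direction may differ from the implementation actually used.

```python
import math

def point_features(points):
    """Five local features per point:
    [sin(theta), cos(theta), sin(phi), cos(phi), pen_state]
    where theta is the tangent direction and phi the change of direction.
    `points` is the re-sampled path as a list of (x, y, pen_down) tuples.
    """
    feats, prev_theta = [], 0.0
    for i, (x, y, pen) in enumerate(points):
        # tangent estimated from the next point (previous one for the last point)
        if i + 1 < len(points):
            dx, dy = points[i + 1][0] - x, points[i + 1][1] - y
        else:
            dx, dy = x - points[i - 1][0], y - points[i - 1][1]
        theta = math.atan2(dy, dx)
        phi = theta - prev_theta if i > 0 else 0.0      # change of direction
        feats.append([math.sin(theta), math.cos(theta),
                      math.sin(phi), math.cos(phi),
                      1.0 if pen else 0.0])
        prev_theta = theta
    return feats
```

The two contextual features sin ψ and cos ψ can be added in the same way, once the closest pen-down point outside the current stroke has been located.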
Later in different experiments, we will use the 5 shape descriptors alone or the 7 features together, depending on the objective of each experiment.

Training process - local connectionist temporal classification

Frame-wise training of RNNs requires separate training targets for every segment or time step of the input sequence. Even when pre-segmented training data are available, a BLSTM network with a CTC stage is known to perform better when a 'blank' label is introduced during training [Bluche et al., 2015], because the network is then free to make its decision at only a few points in the input sequence. Of course, in doing so, a precise segmentation of the input sequence is no longer available. As the label of each stroke is required to build an SLG, we should make decisions at the stroke (strokeD or strokeU) level instead of at the sequence level (as in classical CTC) or at the point level during the recognition process. Thus, a corresponding stroke-level training method is needed: the network is allowed to emit the label of a stroke anywhere inside that stroke, surrounded by blanks. For instance, for a stroke re-sampled with 3 points and labeled 'c', the admissible label sequences are ccc, cc-, c--, -cc, --c and -c- ('-' denotes 'blank'). More generally, the number of possible label sequences is $n(n+1)/2$ ($n$ is the number of points), which indeed gives 6 in this example.

In Section 3.4, the CTC technology proposed by Graves was introduced. We modify the CTC algorithm with a local strategy so that it outputs a relatively precise alignment between the input sequence and the output sequence of labels. In this way, it can be applied to the training stage of our proposed system.

Given an input sequence $X$ of length $T$ consisting of $U$ strokes, $l$ denotes the ground truth, i.e. the sequence of labels. As one stroke belongs to at most one symbol or one relationship, the length of $l$ is $U$. $l'$ represents the label sequence with blanks added at the beginning and the end of $l$, and inserted between every pair of consecutive labels; hence the length of $l'$ is $U' = 2U + 1$. The forward variable $\alpha(t,u)$ denotes the summed probability of all length-$t$ paths that are mapped by $F$ onto the length-$\lfloor u/2 \rfloor$ prefix of $l$, where $u$ ranges from 1 to $U'$ and $t$ from 1 to $T$. With these notations, the probability of $l$ can be expressed as the sum of the forward variables with and without the final blank at time $T$:

$p(l|X) = \alpha(T,U') + \alpha(T,U'-1)$   (4.1)

In our case, $\alpha(t,u)$ can be computed recursively as follows:

$\alpha(1,1) = y^1_{-}$   (4.2)
$\alpha(1,2) = y^1_{l_1}$   (4.3)
$\alpha(1,u) = 0, \quad \forall u > 2$   (4.4)
$\alpha(t,u) = y^t_{l'_u} \sum_{i=f_{local}(u)}^{u} \alpha(t-1,i)$   (4.5)

where

$f_{local}(u) = \begin{cases} u-1 & \text{if } l'_u = blank \\ u-2 & \text{otherwise} \end{cases}$   (4.6)

In the original Eqn. 3.34, the value $u-1$ was also assigned when $l'_{u-2} = l'_u$, forbidding the direct transition from $\alpha(t-1,u-2)$ to $\alpha(t,u)$. This is the case when there are two repeated successive symbols in the final labeling: the corresponding paths must contain at least one blank between these two symbols, otherwise only one of them would remain in the final labeling. In our case, since exactly one label is selected for each stroke, this restriction can be ignored.

Suppose that the input at time $t$ belongs to the $i$-th stroke ($i$ from 1 to $U$); then we have

$\alpha(t,u) = 0, \quad \forall u < 2i-1 \text{ or } u > 2i+1$   (4.7)

which means the only possible positions at time $t$ are $l'_{2i-1}$, $l'_{2i}$ and $l'_{2i+1}$.

Figure 4.8 demonstrates the local CTC forward-backward algorithm using the example '2a', which is written with 2 visible strokes. The corresponding label sequences $l$ and $l'$ are '2Ra' and '-2-R-a-' respectively (R stands for the Right relationship).
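Before walking through the '2a' example of Figure 4.8 in detail, the constrained forward recursion of Eqns. 4.1-4.7 can be summarized in code. The sketch below is only an illustration of the recursion (not the RNNLIB implementation): it assumes the frame-wise outputs are given as a T × K array and that stroke_of[t] gives the 0-based index of the stroke to which frame t belongs.

```python
import numpy as np

def local_ctc_forward(y, labels, stroke_of, blank=0):
    """Forward variables alpha(t, u) of local CTC.

    y         : (T, K) frame-wise label posteriors
    labels    : ground-truth class index of each stroke (length U)
    stroke_of : length-T list, 0-based stroke index of each frame
    Returns alpha of shape (T, 2U + 1) and p(l | X) (Eqn. 4.1).
    """
    T, U = y.shape[0], len(labels)
    ext = [blank]
    for lab in labels:                       # l' = blanks interleaved with labels
        ext += [lab, blank]
    Up = len(ext)                            # U' = 2U + 1

    alpha = np.zeros((T, Up))
    alpha[0, 0] = y[0, blank]                # Eqn. 4.2
    alpha[0, 1] = y[0, ext[1]]               # Eqn. 4.3
    for t in range(1, T):
        i = stroke_of[t]                     # frame t belongs to stroke i
        for u in range(2 * i, 2 * i + 3):    # only positions 2i-1..2i+1 (1-based)
            f = u - 1 if ext[u] == blank else u - 2        # Eqn. 4.6
            alpha[t, u] = y[t, ext[u]] * alpha[t - 1, max(f, 0):u + 1].sum()
    return alpha, alpha[T - 1, Up - 1] + alpha[T - 1, Up - 2]
```

The backward variables β(t, u) introduced next follow the symmetric recursion, and the error signals of Eqns. 3.49-3.50 are obtained from the products α(t, u)β(t, u).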
We re-sampled 4 points for the pen-down stroke '2', 5 points for the pen-up stroke 'R' and 4 points for the pen-down stroke 'a'. From this figure, we can see that each part located on one stroke is exactly the CTC forward-backward algorithm; that is why the output layer adopted in this work is called local CTC.

Similarly, the backward variable $\beta(t,u)$ denotes the summed probability of all paths starting at $t+1$ that complete $l$ when appended to any path contributing to $\alpha(t,u)$. The formulas for the initialization and recursion of the backward variable in local CTC are as follows:

$\beta(T,U') = 1$   (4.8)
$\beta(T,U'-1) = 1$   (4.9)
$\beta(T,u) = 0, \quad \forall u < U'-1$   (4.10)
$\beta(t,u) = \sum_{i=u}^{g_{local}(u)} \beta(t+1,i)\, y^{t+1}_{l'_i}$   (4.11)

where

$g_{local}(u) = \begin{cases} u+1 & \text{if } l'_u = blank \\ u+2 & \text{otherwise} \end{cases}$   (4.12)

Suppose that the input at time $t$ belongs to the $i$-th stroke ($i$ from 1 to $U$); then:

$\beta(t,u) = 0, \quad \forall u < 2i-1 \text{ or } u > 2i+1$   (4.13)

With the local CTC forward-backward algorithm, $\alpha(t,u)$ and $\beta(t,u)$ are available for each time step $t$ and each allowed position $u$ of time step $t$. The errors are then backpropagated to the output layer (Equation 3.49), to the hidden layers (Equation 3.50), and finally to the entire network. The weights of the network are adjusted with the expectation of enabling the network to output the corresponding label for each stroke. As can be seen in Figure 4.8, each part located on one stroke is exactly the CTC forward-backward algorithm.

In this chapter, a sequence consisting of $U$ strokes is regarded and processed as a whole. In fact, each stroke $i$ could also be handled separately: with regard to each stroke $i$, we would have $\alpha_i(t,u)$, $\beta_i(t,u)$ and $p(l_i|X_i)$ associated with it, the initialization of $\alpha_i(t,u)$ and $\beta_i(t,u)$ being the same as described previously. With this treatment, $p(l|X)$ can be expressed as:

$p(l|X) = \prod_{i=1}^{U} p(l_i|X_i)$   (4.14)

Either way, the result is the same. We will come back to this point in Chapter 6, where the separate processing is used.

Recognition Strategies

Once the network is trained, we would ideally label an unknown input sequence $X$ by choosing the most probable labeling $l^*$:

$l^* = \arg\max_l p(l|X)$   (4.15)

Since local CTC is adopted in the training process of this work, recognition should naturally be performed at the stroke (strokeD and strokeU) level. As explained in Section 4.1, to build the label graph we need to assign one single label to each stroke. At that stage, for each point or time step, the network outputs the probabilities of this point belonging to the different classes. Hence, a pooling strategy is required to go from the point level to the stroke level. We propose two kinds of decoding methods, maximum decoding and local CTC decoding, both working at the stroke level.

Maximum decoding

Following the method used in [Graves, 2012] for isolated handwritten digit recognition with a multidimensional RNN with LSTM hidden layers, we first compute the cumulative probabilities over the entire stroke. For stroke $i$, let $o_i = \{p^i_{ct}\}$, where $p^i_{ct}$ is the probability of outputting the $c$-th label at the $t$-th point. Suppose that we have $N$ classes of labels (including blank); then $c$ ranges from 1 to $N$, and since $|s_i|$ points are re-sampled for stroke $i$, $t$ ranges from 1 to $|s_i|$. The cumulative probability of outputting the $c$-th label for stroke $i$ is computed as

$P^i_c = \sum_{t=1}^{|s_i|} p^i_{ct}$   (4.16)

Then we choose for stroke $i$ the label with the highest $P^i_c$ (excluding blank).
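A minimal sketch of this pooling step is given below; it assumes the frame-wise posteriors of one stroke are available as a |s_i| × N array, accumulates the per-point probabilities as in Eqn. 4.16 and returns the best non-blank label.

```python
import numpy as np

def maximum_decoding(stroke_probs, blank=0):
    """Stroke-level maximum decoding.

    stroke_probs : (|s_i|, N) array of per-point posteriors for one stroke
    Returns the index of the non-blank label with the highest cumulative
    probability P_c (Eqn. 4.16).
    """
    P = stroke_probs.sum(axis=0)      # cumulative probability per class
    P[blank] = -np.inf                # a stroke cannot be labeled 'blank'
    return int(np.argmax(P))
```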
Local CTC decoding With the output o i , we choose the most probable label for the stroke i: l * i = argmax l i p(l i |o i ) (4.17) In this work, each stroke outputs only one label which means we have N -1 possibilities of label of stroke. blank is excluded because it can not be a candidate label for stroke. With the already known N -1 labels, p(l i |o i ) can be calculated using the algorithm depicted in Section 4.2.3. Specifically, based on the Eqn. 6.17 we can write Eqn. 4.18, p(l i |o i ) = α(|s i |, 3) + α(|s i |, 2) (4.18) with T = |s i | and U = 3 (l is (blank, label, blank)). For each stroke, we compute the probabilities corresponding to N -1 labels and then select the one with the largest value. In mathematical expression recognition task, more than 100 different labels are included. If Eqn. 4.18 is computed more that 100 times for every stroke, undoubtedly it would be a time-consuming task. A simplified strategy is adopted here. We sort the P i c from Eqn. 4.16 using maximum decoding and keep the top 10 probable labels (excluding blank). From these 10 candidates, we choose the one which has the highest p(l i |o i ). In this way, Eqn. 4.18 is computed only 10 times for each stroke, greatly reducing the computation time. Furthermore, we add two constraints when choosing label for stroke: (1) the label of strokeD should be one of the symbol labels, excluding the relationship labels, like strokes 1, 3, 5, 7, 9, 11 in Figure 4.9. (2) the label of strokeU i is divided into 2 cases, if the labels of strokeD i-1 and strokeD i+1 are different, it should be one of the six relationships (strokes 2, 8, 10) or '_' (stroke 4); otherwise, it should be relationships, '_' or the label of strokeD i-1 (strokeD i+1 ). Taking stroke 6 shown in Figure 4.9 for example, if '+' is assigned to it means that the corresponding pair of nodes (strokes 5 and 7) belongs to the same symbol while '_' or relationship refers to 2 nodes belonging to 2 symbols. Note that to satisfy these constraints on edges labels, the labels of pen-down strokes are chosen first and then pen-up strokes. After recognition, post-processing (adding edges) should be done in order to build the SLG. The way to proceed has been already introduced in Section 4.1. Figure 4.9 -Illustration for the decision of the label of strokes. As stroke 5 and 7 have the same label, the label of stroke 6 could be '+', '_' or one of the six relationships. All the other strokes are provided with the ground truth labels in this example. Experiments We extend the RNNLIB library 1 With the Label Graph Evaluation library (LgEval) [START_REF] Mouchère | Icfhr 2014 competition on recognition of on-line handwritten mathematical expressions (crohme 2014)[END_REF], the recognition results can be evaluated on symbol level and on expression level. We introduce several evaluation criteria: symbol segmentation ('Segments'), refers to a symbol that is correctly segmented whatever the label; symbol segmentation and recognition ('Seg+Class'), refers to a symbol that is segmented and classified correctly; spatial relationship classification ('Tree Rels.'), a correct spatial relationship between two symbols requires that both symbols are correctly segmented and with the right relationship label. 
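As a toy illustration of what the 'Segments' figures measure (this is not the LgEval library itself, only a schematic computation on stroke groupings), one may proceed as follows.

```python
def segmentation_scores(predicted, ground_truth):
    """A symbol is counted as correctly segmented when the same group of
    strokes appears in both the predicted and the ground-truth segmentations.

    predicted, ground_truth : iterables of stroke-id groups, e.g. [{0}, {1, 2}, {3}]
    Returns (recall, precision).
    """
    pred = {frozenset(g) for g in predicted}
    gt = {frozenset(g) for g in ground_truth}
    correct = len(pred & gt)
    return correct / len(gt), correct / len(pred)

# '2 + 2' written with 4 strokes, '+' drawn with strokes 1 and 2:
rec, prec = segmentation_scores([{0}, {1, 2}, {3}], [{0}, {1, 2}, {3}])
```

'Seg+Class' additionally requires the predicted label of the group to match, and 'Tree Rels.' applies the same idea to pairs of correctly segmented symbols and their relationship label.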
For all experiments the network architecture and configuration are as follows: • The input layer size: 5 or 7 (when considering the 2 additionnal context features) • The output layer size: the number of class (up to 109) • The hidden layers: 2 layers, the forward and backward, each contains 100 single-cell LSTM memory blocks • The weights: initialized uniformly in [-0.1, 0.1] • The momentum: 0.9 This configuration has obtained good results in both handwritten text recognition [Graves et al., 2009] and handwritten math symbol classification [Álvaro et al., 2013, 2014a]. Data sets Being aware of the limitations of our proposal related to the structures of expressions, we would like to see the performance of the current system on expressions of different complexities. Thus, three data sets are considered in this chapter. The blank label is only used for local CTC training. Figure 4.10 show some handwritten math expression samples extracted from CROHME 2014 data set. Experiment 1: theoretical evaluation As discussed in Section 4.1, there exist obvious limitations in the proposed solution of this chapter. These limitations could be divided into two types: (1) to chain-SRT expressions, if users could not write a multi-stroke symbol successively or could not follow a specific order to enter symbols, it will not be possible to build a correct SLG; (2) to those expressions of which the SRTs are beyond of the chain structure, regardless of the writing order, the proposed solution will miss some relationships. In this experiment, laying the classifier aside temporarily, we would like to evaluate the limitations of the proposal itself. Thus, to carry out this theoretical evaluation, we take the ground truth labels of the nodes and edges in the time path only of each expression. Table 4.1 andTable 4.2 present the evaluation results on CROHME 2014 test set at the symbol and expression level respectively using the above-mentioned strategy. We can see from Table 4.1, the recall ('Rec.') and precision ('Prec.') rates of the symbol segmentation on all these 3 data sets are almost 100% which implies that users generally write a multi-stroke symbol successively. The recall rate of the relationship recognition is decreasing from Data set 1 to 3 while the precision rate remains almost 100%. With the growing complexity of expressions, increasing relationships are missed due to the limitations. About 5% relationships are missed in Data set 1 because of only the problem of writing order. With regards to the approximate 25% relationships omitted in Data set 3, it is owing to the writing order and the conflicts between the chain representation method and the tree structure of expression, especially the latter one. In Table 4.2, the evaluation results at the expression level are available. 86.79% of Data set 1 which contains only 1-D expressions could be recognized correctly with the proposal at most. For the complete CROHME 2014 test set, only 34.11% expressions can be interpreted correctly in the best case. Experiment 2 In this experiment, we evaluate the proposed solution with BLSTM classifier on data sets of different complexity. Local CTC training and local CTC decoding methods are used inside the recognition system. Only 5 local features are extracted at each point for training. Each system is trained only once. The evaluation results on symbol level for the 3 data sets are provided in Table 4.3 including recall ('Rec.') and precision ('Prec.') rates for 'Segments', 'Seg+Class', 'Tree Rels.'. 
As can be seen, the results in 'Segments' and 'Seg+Class' are increasing while the training data set is growing. The recall for 'Tree Rels.' is decreasing among the three data sets. It is understandable since the number of missed relationships grows with the complexity of expressions knowing the limitation of our method. The precision for 'Tree Rels.' fluctuates as the data set is expanding. The results of Data set 3 are comparable to the results of CROHME 2014 because the same training and testing data sets are used. The second part of Table 4.3 gives the symbol level evaluation results of the participant systems in CROHME 2014 sorted by recall of correct symbol segmentation. The best 'Rec.' of 'Segments' and 'Seg+Class' reported in CROHME 2014 are 98.42% and 93.91% respectively. Ours are 93.26% and 84.40%, both ranked 3 out of 8 systems (7 participants in CROHME 2014 + our system). Our solution presents competitive results on symbol recognition task and segmentation task even though the symbols with delayed strokes are missed. However, our proposal, at that stage, shows limited performances at the relationship level, with 'Rec.' = 61.85%, 'Prec.' = 75.06%. This is mainly because approximate 25% relationships are missed in the time sequence. If we consider only the relationships covered by the time sequence which accounts for 75.54%, the recall rate will be 61.85%/75.54% = 81.88%, close to the second ranked system in the competition. Thus, one of the main works of next chapters would be focused on proposing a solution to catch the omitted approximate 25% relationships at the modeling stage. We present a correctly recognized sample and an incorrectly recognized sample in Figure 4.11 and Figure 4.12 respectively. The expression a ≥ b (Figure 4.11) is a 1-D expression and therefore the time path could cover all the relationships in this expression. It was correctly recognized by our system in this chapter. Considering the other sample 44 -4 4 of which the SRT is a tree structure (Figure 4.12), the Right relationship from the minus symbol to fraction bar was omitted in the modeling stage, likewise the Above relationship from the fraction bar to the numerator 4. In addition, the relation from the minus symbol to the numerator 4 was wrongly recognized as Right and it should be labeled as N oRelation. Experiment 3 In this experiment, we would like to know if different training and decoding methods and the contextual features will improve or not the performance of our recognition system. We use different training methods and different features to train the recognition system, and take two kinds of strategy at the recognition stage. All the systems in this part are trained and evaluated on Data set 3. Since the weights inside the network are initialized randomly, each system is trained four times with the aim to compute mean evaluation values and standard deviations, and therefore obtain convincing conclusions and have an idea of the system stability. As shown in Discussion The capability of BLSTM networks to process graphical two-dimensional languages such as handwritten mathematical expressions is explored in this chapter as a first try. Using online math expressions, which are available as a temporal sequence of strokes, we produce a labeling at the stroke level using a BLSTM network with a local CTC output layer. Then we propose to build a two-dimensional (2-D) expression from this sequence of labels. 
Our solution presents competitive results with CROHME 2014 data set on symbol recognition task and segmentation task. Proposing a global solution to perform at one time segmentation, recognition and interpretation, with no dedicated stages, is a major advantage of the proposed solution. To some extent, at the present time, it fails on the relationship recognition task. This is primarily due to an intrinsic limitation, since currently, a single path following the time sequence of strokes in SLG is used to build the expression. In fact, some important relationships are omitted at the modeling stage. We only considered stroke combinations in time series in the work of this chapter. For the coming chapter, the proposed solution will take into account more possible stroke combinations in both time and space such that less relationships will be missed at the modeling stage. A sequential model could not include temporal and spacial information at the same time. To overcome this limitation, we propose to build a graph from the time sequence of strokes to model more accurately the relationships between strokes. Mathematical expression recognition by merging multiple paths In Chapter 4, we confirmed the fact of that there exists unbreakable limitations if using a single 1-D path to model expressions. This conclusion was verified from both theoretical and experimental point of view. The sequence of strokes arranged with time order was used in those experiments as an example of 1-D paths since it is the most intuitive and readily available. Due to the unbreakable limitations, in this chapter, we turn to a graph structure to model the relationships between strokes in mathematical expressions. Further, using the sequence classifier BLSTM to label the graph structure is another research focus. This chapter will be organized as follows: Section 5.1 provides an overview of graph representation related to build a graph from raw mathematical expression. Then we globally describe the framework of mathematical expression recognition by merging multiple paths in Section 5.2. Next, all the steps of the recognition system are explained one by one in detail. Finally, the experiment part and the discussion part are presented in Section 5.4 and Section 5.5 respectively. Overview of graph representation Each mathematical expression consists of a sequence of strokes. Relations between two strokes could be divided into 3 types: belong to the same symbol (segmentation), one of the 6 spatial relationships, no relation. It is possible to describe a ME at the stroke level using a SLG of which nodes represent strokes, while the edges encode either segmentation information or one of the spatial relationships. If there is no relation between two strokes, of course no corresponding edge would be found between two strokes in SLG. All the above discussion supposes the knowledge of the ground truth. In fact, given a handwritten expression, our work is to find the ground truth. Thus the first step is to derive an intermediate graph from the raw information. Specifically, it involves finding pairs of strokes between which there exist relations (represented as edges). In [START_REF] Hu | Features and Algorithms for Visual Parsing of Handwritten Mathematical Expressions[END_REF], they call this stage as graph representation. We could find out all the ground truth edges (100% recall) by adding an edge between any pair of strokes in the derived graph. 
However, this exhaustive approach brings at the same time the problem of low precision. For an expression with N strokes, if we consider all the possibilities, there would be N (N -1) edges in the derived graph. Compared to the ground truth SLG, many edges do not exist. Suppose that all symbols in this expression are single-stroke symbol, there are only N -1 ground truth edge, and the precision is 1 N . Apparently this exhaustive solution with a 100% recall and an around 1 N precision is unbearable in practice because even through later classifier could recognize these invalid edges to some extent it is still a big burden. Thus, better graph models should be explored. In this section, we introduce several models used in other literature provided as a basis of the proposed model in this thesis. Time Series (TS) is widely used as a model for the task of math symbol segmentation and recognition in previous works [START_REF] Hu | Segmenting handwritten math symbols using adaboost and multi-scale shape context features[END_REF][START_REF] Koschinski | Segmentation and recognition of symbols within handwritten mathematical expressions[END_REF][START_REF] Kosmala | On-line handwritten formula recognition using statistical methods[END_REF][START_REF] Yu | A unified framework for symbol segmentation and recognition of handwritten mathematical expressions[END_REF][START_REF] Smithies | A handwriting-based equation editor[END_REF], Winkler and Lang, 1997a,b]. In this model, strokes are represented as nodes, and between two successive strokes in the input order there is an edge (undirected) connecting them. We also considered this model but a directed version in Chapter 4 where it is called time path. Time Series is a good model for symbol segmentation and recognition since people usually writes symbols with no delayed stroke. However, it is not strong enough to capture the global structure of math expression, which has been clarified very well in last chapter. Unlike Time Series which is a chain structure in fact, K Nearest Neighbor (KNN) is a graph model in the true sense. In KNN graph, for each stroke, we first search for its K closest strokes. Then each undirected edge between this stroke and each of its K closest neighbors will be added into the graph. Thus, each node has at least K edges connected to it. In other words, the number of the edges connected to each node is relatively fixed in fact. However, it is not well suitable for math expression where nodes are connected to a variable number of edges. In [START_REF] Matsakis | Recognition of handwritten mathematical expressions[END_REF], Minimum spanning tree (MST) is used as the graph model. A spanning tree is a connected undirected graph, which a set of edges connect all of the nodes of the graph with no cycles. To define a minimum spanning tree, the graph edges also need to be assigned with a weight, in which case an MST is a spanning tree that has the minimum accumulative edge weight of the graph [START_REF] Matsakis | Recognition of handwritten mathematical expressions[END_REF]. Minimum spanning trees can be efficiently computed using the algorithms of Kruskal and Prim [START_REF] Cormen | Introduction to algorithms[END_REF]. In [START_REF] Matsakis | Recognition of handwritten mathematical expressions[END_REF], each stroke is represented as a node and the edge between the two strokes is assigned with a weight which is the distance of two strokes. 
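As an aside, such an MST model is straightforward to build once a stroke distance is chosen; the sketch below uses SciPy and a placeholder distance function, and is only meant to illustrate the construction.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edges(strokes, distance):
    """MST graph model: nodes are strokes, edge weights are pairwise
    distances, and only the edges of the minimum spanning tree are kept.

    strokes  : list of strokes (any representation)
    distance : function returning the distance between two strokes
    Returns the list of undirected MST edges (i, j).
    """
    n = len(strokes)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w[i, j] = distance(strokes[i], strokes[j])
    mst = minimum_spanning_tree(w)          # sparse matrix of the kept edges
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))
```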
A Delaunay Triangulation (DT) for a set P of points in a plane is a triangulation DT (P ) such that no point in P is inside the circumcircle of any triangle in DT (P ) [de Berg et al.]. [START_REF] Hirata | Automatic labeling of handwritten mathematical symbols via expression matching[END_REF] uses a graph model based on Delaunay triangulation. They assume that symbols are correctly segmented, thus, instead of stroke, each symbol is taken as a node. In [START_REF] Hu | Features and Algorithms for Visual Parsing of Handwritten Mathematical Expressions[END_REF], Line Of Sight (LOS) graph is considered since they find that, for a given stroke, it usually can see the strokes which have a relation with it in the symbol relation tree. The center of each stroke is taken as an eye, there would be an directed edge from the current stroke to each stroke it can see. A sample is available in Figure 5.2. [START_REF] Muñoz | Mathematical Expression Recognition based on Probabilistic Grammars[END_REF] use an undirected graph of which each stroke is a node and edges only connect strokes that are visible and close as a segmentation model. They also consider the visibility between strokes. Figure 5.2 -An example of line of sight graph for a math expression. Extracted from [START_REF] Hu | Features and Algorithms for Visual Parsing of Handwritten Mathematical Expressions[END_REF]. Hu carried out considerable experiments to choose the appropriate graph model in [START_REF] Hu | Features and Algorithms for Visual Parsing of Handwritten Mathematical Expressions[END_REF]. To better stress the characteristics of variable graph models, several related definitions are provided before going to the details. A stroke is a sequence of points which can be represented as a Bounding Box (BB) (Figure 5.3a) or a Convex Hull (CH) (Figure 5.3b). A subset P of the plane is called convex if and only if for any pair of (3) the distance between their closest points. Closest Point Pair (CPP) of two strokes refers to the pair of points having the minimal distance, where two points are from two strokes. The experiment results from [START_REF] Hu | Features and Algorithms for Visual Parsing of Handwritten Mathematical Expressions[END_REF] reveal that both MST and TS models achieve a high precision rate (around 90% on CROHME 2014 test set, AC distance is used for MST) but a relatively low recall (around 87% on CROHME 2014 test set). For KNN graph, the larger K is, the higher the recall is and the lower the precision is. When K = 6, the recall reach 99.4% and the precision is 28.3% Based on the previous works and our own work in Chapter 4, we develop a new graph representation model which is a directed graph built using both the temporal and spatial information of strokes. The framework In this section, we introduce the global framework (Figure 5.4) of the proposed solution of this chapter to provide the readers an intuitive look at the detailed implementation is proposed next. As depicted in Figure 5.4, the input to the recognition system is an handwritten expression which is a sequence of strokes; the output is the stroke label graph which consists of the information about the label of each stroke and the relationships between stroke pairs. As the first step, we derive an intermediate graph from the raw input considering both the temporal and spatial information. In this graph, each node is a stroke and edges are added according to temporal or spatial properties. 
The derived graph is expected to have a high recall and a reasonable precision compared to the ground truth SLG. The remaining work is to label each node and each edge of the graph. To this end, several 1-D paths will be selected from the graph since the classifier model we are considering is a sequence labeler. The classical BLSTM-RNN model are able to deal with only sequential structure data. Next, we use the BLSTM classifier to label the selected 1-D paths. This stage consists of two steps, being the training and recognition process. Finally, we merge these labeled paths to build a complete stroke label graph with a strategy of setting different weights for them. Detailed implementation As explained in last section, the input data is available as a sequence of strokes S = (s 0 , ..., s n-1 ) (for i < j, we assume s i has been entered before s j ) from which we would like to obtain the final SLG graph describing unambiguously the ME. In this part, we will introduce the recognition system step by step following the order within the framework. Derivation of an intermediate graph G In a first step, we will derive an intermediate graph G, where each node is a stroke and edges are added according to temporal or spatial properties. Based on the previous works on graph reprensentation that we reviewed in Section 5.1, we develop a new directed graph representation model. Some definitions regarding to the spatial relationships between strokes will be provided first. Definition 5.1. The distance between two strokes s i and s j can be defined as the Euclidean distance between their closest points. dist(s i , s j ) = min p∈s i ,q∈s j (x p -x q ) 2 + (y p -y q ) 2 (5.1) It is the CPP distance mentioned in Section 5.1 as a matter of fact. Definition 5.2. A stroke s i is considered visible from stroke s j if the bounding box of the straight line between their closest points does not cross the bounding box of any other stroke s k . For example in Figure 5.5, s 1 and s 3 can see each other because the bounding box of the straignt line between their closest points does not cross the bounding box of stroke s 2 and s 4 . In [START_REF] Muñoz | Mathematical Expression Recognition based on Probabilistic Grammars[END_REF] the visibility is defined by the straight line between their closest points does not cross any other stroke. We simplify it through replacing the stroke with its bounding box to reduce computation. As illustrated in Figure 5.6, point (0, 0) is the center of the bounding box of stroke s i . The angle of each region is π 4 . If the center of bounding box of s j is in one of these five regions, for example R1 region, we can say s j is in the R1 direction of s i . The purpose of defining these 5 regions is to look for the Above, Below, Sup, Sub and Right relationships between strokes in these 5 preferred directions, but not to recognize them. Definition 5.4. Let G be a directed graph in which each node corresponds to a stroke and edges are added according to the following criteria in succession. We defined for each stroke s i ( i from 0 to n -2): • the set of crossing strokes S cro (i) = {s cro1 , s cro2 , ...} from {s i+1 , ..., s n-1 }. • the set of closest stroke S clo (i) = {s clo } from {s i+1 , ..., s n-1 } -S cro(i) . For stroke s i ( i from 0 to n -1): • the set S vis (i) of the visible closest strokes in each of the five directions respectively from S - {s i } S cro (i) S clo (i). 
Here, the closeness of two strokes is decided by the distance between the centers of their bounding boxes, differently from Definition 5.1. Edges from $s_i$ to the strokes in $S_{cro}(i) \cup S_{clo}(i) \cup S_{vis}(i)$ will be added to G. Finally, we check whether the edge from $s_i$ to $s_{i+1}$ ($i$ from 0 to $n-2$) exists in G; if not, this edge is added to G to ensure that the path covering the sequence of strokes in time order is included in G.

An example is presented in Figure 5.7. The mathematical expression $\frac{d}{dx}a^x$ is written with 8 strokes (Figure 5.7a). From the sequence of 8 strokes, the graph shown in Figure 5.7b is generated with the above-mentioned method. Comparing the built graph with the ground truth (Figure 5.7c), we can see the difference in Figure 5.7d. All the ground-truth edges are included in the generated graph except the edges (blue ones in Figure 5.7d) from stroke 4 to 3 and from stroke 7 to 6. This flaw can be overcome as long as strokes 3 and 4 and the edge from stroke 3 to 4 are correctly recognized: if strokes 3 and 4 are recognized as belonging to the same symbol, the edge from stroke 4 to 3 can be completed automatically, and similarly for the edge from stroke 7 to 6. In addition, Figure 5.7d indicates the unnecessary edges (red edges) obtained when matching the built graph to the ground truth. The built graph is expected to include as many of the ground-truth edges as possible while containing as few unnecessary edges as possible.

Figure 5.7 - (a) $\frac{d}{dx}a^x$ written with 8 strokes; (b) the SLG built from the raw input using the proposed method; (c) the SLG from the ground truth; (d) the difference between the built graph and the ground-truth graph: red edges denote the unnecessary edges and blue edges the missed ones.

Graph evaluation

Hu evaluates a graph representation model by comparing the edges of the graph with the ground-truth edges at the stroke level [Hu, 2016]; recall and precision rates are considered. In this section, we take a similar but more direct method, the same solution as introduced in Section 4.3.2. Specifically, provided the ground-truth labels of the nodes and edges in the graph, we would like to see the evaluation results at the symbol and expression levels. We recall the evaluation criteria here as a reminder to the reader: symbol segmentation ('Segments') refers to a symbol that is correctly segmented whatever the label; symbol segmentation and recognition ('Seg+Class') refers to a symbol that is segmented and classified correctly; for spatial relationship classification ('Tree Rels.'), a correct spatial relationship between two symbols requires that both symbols are correctly segmented and carry the right relationship label.

Table 5.1 and Table 5.2 present the evaluation results of the graph construction on the CROHME 2014 test set (provided the ground-truth labels) at the symbol and expression level respectively. We re-show the evaluation results of the time graph, already given in Tables 4.1 and 4.2, as a reference for the new graph. Due to delayed strokes, the time graph misses a small part of the segmentation edges; thus around 0.27% of the symbols are wrongly segmented. The new graph achieves a 100% recall rate and a 99.99% precision rate on the segmentation task (the 0.01% error results from a small error in the data set, not from the model itself). These figures show that the new model can handle the case of delayed strokes.
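Before turning to the relationship-level figures, the edge construction of Definition 5.4 can be sketched as follows. The helpers `crosses`, `distance`, `visible` and `direction` are placeholders for the geometric tests described above (stroke crossing, stroke closeness, the bounding-box visibility test and the five directional regions); the snippet only illustrates the order in which edges are added.

```python
def build_intermediate_graph(strokes, crosses, distance, visible, direction):
    """Directed edges of the intermediate graph G (sketch of Definition 5.4)."""
    n = len(strokes)
    edges = set()
    for i in range(n):
        later = list(range(i + 1, n))
        # crossing strokes and closest non-crossing later stroke
        cro = {j for j in later if crosses(strokes[i], strokes[j])}
        rest = [j for j in later if j not in cro]
        clo = {min(rest, key=lambda j: distance(strokes[i], strokes[j]))} if rest else set()
        # visible closest stroke in each of the five directional regions
        vis = set()
        for region in ('R1', 'R2', 'R3', 'R4', 'R5'):
            cand = [j for j in range(n)
                    if j != i and j not in cro | clo
                    and direction(strokes[i], strokes[j]) == region
                    and visible(strokes, i, j)]
            if cand:
                vis.add(min(cand, key=lambda j: distance(strokes[i], strokes[j])))
        edges |= {(i, j) for j in cro | clo | vis}
        if i + 1 < n:
            edges.add((i, i + 1))            # keep the time path inside G
    return edges
```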
With regards to relationship recognition task, time graph model misses about 25% relationships. The new graph catches 93.48% relationships. Compared to time graph, there is a great improvement in relationship representation. However, owing to the missed 6.52% relationships, only 67.65% expressions are correctly recognized as presented in Table 5.2. These values will be upper bounds for the recognition system based on this graph model. Select paths from G The final aim of our work is to build the SLG of 2-D expression. The proposed solution is carried out with merging several 1-D paths from G. These paths are expected to cover all the nodes and as many as the edges of the ground truth SLG (at least the edges of the ground truth SRT). With the correctly recognized node and edge labels, we have the possibility to build a correct 2-D expression. Obviously, a single 1D path is not able to cover all these nodes and edges, except in some simple expressions. We have explained this point in detail in Chapter 4. This section will explain how we generate several paths from the graph, enough different to cover the SRT, and then how to merge the different decisions in a final graph. A path in G can be defined as Φ i = (n 0 , n 1 , n 2 , ..., n e ), where n 0 is the starting node and n e is the end node. The node set of Φ i is n(Φ i ) = {n 0 , n 1 , n 2 , ..., n e } and the edge set of Φ i is e(Φ i ) = {n 0 → n 1 , n 1 → n 2 , ..., n e-1 → n e }. Two types of paths are selected in this chapter: time path and random path. time path starts from the first input stroke and ends with the last input stroke following the time order. For example, in Figure 5.7d, the time path is (0, 1,2,[START_REF] Zhang | Using BLSTM for Interpretation of 2D Languages -Case of Handwritten Mathematical Expressions[END_REF]4,5,[START_REF]7 -The expression level evaluation results on CROHME 2014 test set with 3 trees, along with CROHME 2014 participant results. Network, model correct (%) ≤ 1 error ≤ 2 errors ≤ 3 errors i[END_REF]7). Then, we consider several additional random paths. To ensure a good coverage of the graph we guide the random selection choosing the less visited nodes and edges (give higher priority to less visited ones). The algorithm of selecting random path is as follows: (1) Initialize T n = 0, T e = 0. T n records the number of times that node n has been chosen as starting node, T e records the number of times of that edge e has been used in chosen paths; (2) Update T n = T n + 1 , T e = T e + 1 of all the nodes and edges in time path; (3) Randomly choose one node N from the nodes having the minimum T n , update T N = T N + 1; (4) Find all the edges connected to N , randomly choose one from the edges having the minimum T e , denoted as E, update T E = T E + 1; if no edge found, finish. (5) Reset N as the to node of E, go back to step 4. One random path could be like (1,5,[START_REF]7 -The expression level evaluation results on CROHME 2014 test set with 3 trees, along with CROHME 2014 participant results. Network, model correct (%) ≤ 1 error ≤ 2 errors ≤ 3 errors i[END_REF]7). Training process Each path, time or random, is handled independently during the training process as a training sample. In Chapter 4, we introduced the technique related to training a time path, such as how to feed BLSTM inputs (Section 4.2.1), extracting features(Section 4.2.2) and local CTC training method (Section 4.2.3); the same training process is kept for random paths in this chapter. 
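The guided random selection described in steps (1)-(5) above can be sketched as follows. `edges` is the adjacency of the intermediate graph G, the two counters give priority to the least visited nodes and edges, and the `max_len` safeguard is an addition of this sketch, not part of the original description.

```python
import random
from collections import defaultdict

def random_path(edges, node_count, edge_count, max_len):
    """One guided random path through G: start from a least-visited node,
    then repeatedly follow a least-visited outgoing edge, updating counters."""
    nodes = list(node_count.keys())
    least_n = min(node_count.values())
    start = random.choice([v for v in nodes if node_count[v] == least_n])
    node_count[start] += 1
    path, current = [start], start
    while len(path) < max_len:
        out = [(current, j) for j in edges.get(current, [])]
        if not out:
            break
        least_e = min(edge_count[e] for e in out)
        e = random.choice([o for o in out if edge_count[o] == least_e])
        edge_count[e] += 1
        current = e[1]
        path.append(current)
    return path

# toy usage: initialise the counters with the time path, then draw 6 paths
edges = {0: [1, 2], 1: [2], 2: [3]}
time_path = [0, 1, 2, 3]
node_count = {v: 0 for v in range(4)}
edge_count = defaultdict(int)
for v in time_path:
    node_count[v] += 1
for e in zip(time_path, time_path[1:]):
    edge_count[e] += 1
random_paths = [random_path(edges, node_count, edge_count, max_len=8) for _ in range(6)]
```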
In total, three types of BLSTM classifiers are trained: with the time path only, with random paths only, and with time + random paths. More details are given in the experimental section of this chapter.

Recognition

Since we use the local CTC technique in the training process of this work, the recognition stage should naturally be performed at the stroke (strokeD and strokeU) level. As explained previously, to build the SLG we also need to assign one single label to each stroke. For these two reasons, a pooling strategy is required to go from the point level to the stroke level, since for each point or time step the network outputs the probabilities of this point belonging to the different classes. We proposed two kinds of stroke-level decoding methods (maximum decoding and local CTC decoding) and tested their effects in Chapter 4. According to the evaluation results, maximum decoding is the better choice for its low computation cost and its effectiveness, which is on the same level as local CTC decoding. With Equation 4.16, we can compute $P^s_c$, the cumulative probability of outputting the $c$-th label for stroke $s$. Then we sort the normalized $P^s_c$ and keep only the top $n$ probable labels (excluding blank) whose accumulated probability is $\geq 0.8$. Note that $n$ is at most 3, even if the accumulated probability of the top 3 labels does not reach 0.8.

Merge paths

Each stroke belongs to at least one path, but possibly to several paths; hence several recognition results can be available for a single stroke. At this stage, we propose to compute the probability $P_s(l)$ of assigning the label $l$ to the stroke $s$ by summing over all paths $\Phi_i$ with the formula:

$P_s(l) = \dfrac{\sum_{\Phi_i} W_{\Phi_i} \cdot P^{(\Phi_i,s)}_l \cdot \mathbb{1}_{A}(s) \cdot \mathbb{1}_{label(\Phi_i,s)}(l)}{\sum_{\Phi_i} W_{\Phi_i} \cdot \mathbb{1}_{A}(s)}$   (5.2)

$A = n(\Phi_i) \cup e(\Phi_i)$   (5.3)

$\mathbb{1}_M(m) = \begin{cases} 1 & \text{if } m \in M \\ 0 & \text{otherwise} \end{cases}$   (5.4)

$W_{\Phi_i}$ is the weight set for path $\Phi_i$ and $label(\Phi_i,s)$ is the set of candidate labels for stroke $s$ from path $\Phi_i$, with $1 \le |label(\Phi_i,s)| \le 3$. If stroke $s$ exists in path $\Phi_i$ but $l \notin label(\Phi_i,s)$, then $P^{(\Phi_i,s)}_l$ is 0: the classifier answers that there is no possibility of outputting label $l$ for stroke $s$ from path $\Phi_i$, and we still add $W_{\Phi_i}$ to the normalization factor of $P_s(l)$. If stroke $s$ does not exist in path $\Phi_i$, the classifier's answer for stroke $s$ is unknown and this path $\Phi_i$ should not be taken into account; thus $W_{\Phi_i}$ is not added to the normalization factor of $P_s(l)$. After normalization, the label with the maximum probability is selected for each stroke.

As shown in Figure 5.8, consider merging 3 paths $\Phi_1$, $\Phi_2$, $\Phi_3$, where stroke $s$ belongs only to paths $\Phi_1$ and $\Phi_2$. In path $\Phi_1$ the candidate labels for stroke $s$ are a, b, c, while in path $\Phi_2$ the candidate labels are b, c, d. The probability of assigning a to stroke $s$ is computed as:

$P_s(a) = \dfrac{W_{\Phi_1} \cdot P^{\Phi_1,s}_a + W_{\Phi_2} \cdot 0 + W_{\Phi_3} \cdot 0}{W_{\Phi_1} + W_{\Phi_2} + W_{\Phi_3} \cdot 0}$   (5.5)

Stroke $s$ is not covered by path $\Phi_3$, so $W_{\Phi_3}$ is not added to the normalization factor, and the probability of outputting label a for stroke $s$ in path $\Phi_2$ is 0. Similarly, the probability of assigning b to stroke $s$ is computed as:

$P_s(b) = \dfrac{W_{\Phi_1} \cdot P^{\Phi_1,s}_b + W_{\Phi_2} \cdot P^{\Phi_2,s}_b + W_{\Phi_3} \cdot 0}{W_{\Phi_1} + W_{\Phi_2} + W_{\Phi_3} \cdot 0}$   (5.6)

In conclusion, we combine the recognition results from the different paths (whereas in Chapter 4 only one path was used), and then select for each node or edge the most probable label. Afterwards, an additional process is carried out in order to build a valid LG, i.e.
adding edges, in the same way as what have done in Chapter 4. We first look for the segments (symbols) using connected component analysis: a connected component where nodes and edges have the same label is a symbol. With regards to the relationship between two symbols, we choose the label having the maximum accumulative probability among the edges between two symbols. Then, according to the rule that all strokes in a symbol have the same input and output edges and that double-direction edges represent the segments, some missing edges can be completed automatically. Experiments In the CROHME 2014 data set, there are 8834 expressions for training and 982 expressions for test. Likewise, we divide 8834 expressions for training (90%) and validation (10%); use CROHME 2014 test set for test. Based on the RNNLIB library1 , the recognition system is developed by merging multiple paths. For each training process, the network having the best CTC error on validation data set is saved. Then, we evaluate this network on the test data set. The Label Graph Evaluation library (LgEval) [START_REF] Mouchère | Icfhr 2014 competition on recognition of on-line handwritten mathematical expressions (crohme 2014)[END_REF] is used to analyze the recognition output. The specific configuration for network architecture is the same as the one we set in Section 4.3. This configuration has obtained good results in both handwritten text recognition [Graves et al., 2009] and handwritten math symbol classification [Álvaro et al., 2013, 2014a]. The size of the input layer is 5 (5 local features, as same as Chapter 4) while the output layer size in this experiment is 109 (101 symbol classes + 6 relationships + N oRelation + blank). For each expression, we extract the time path and 6 (or 10) random paths. Totally, 3 types of classifiers are trained: the first one with only time path (denoted as CLS T , actually it is the same classifier as in Chapter 4); the second BLSTM network is trained with only 6 random paths (denoted as CLS R6 , for 10 random paths we use CLS R10 ); and the third classifier uses time + 6 random paths (denoted as CLS T +R6 ). We train these 3 types of classifiers to see the effect of different training content on recognition result, also the impact of the number of paths. We use these 4 different classifiers (CLS T , CLS R6 , CLS R10 , CLS T +R6 ) to label the different types of paths extracted from the test set as presented in Table 5.3 (exp.1, uses CLS T to label time path; exp.2, uses CLS T to label both time path and random paths; exp.3, uses CLS T to label time path and CLS R to label random paths; exp.4, uses CLS T +R6 to label both time path and random paths; exp.5, use CLS T to label time path and CLS R10 to label random paths;). In exp.1, only the labeled time path is used to build a 2-D expression, actually it is the same case carried out in Chapter 4. For exp.(2 3 4), time path and 6 random paths are merged to construct a final graph. time path and 10 random paths are contained in exp.5. The weight of time path is set to 0.4 and each random path is 0.12 . 2 CLS T CLS T 3 CLS T CLS R6 4 CLS T +R6 CLS T +R6 5 CLS T CLS R10 The evaluation results at symbol level are provided in Table 5.4 including recall ('Rec.') and precision ('Prec.') rates for symbol segmentation ('Segments'), symbol segmentation and recognition ('Seg+Class'), stage, likewise the Above relationship from the fraction bar to the numerator 4. 
In addition, the relation from the minus symbol to the numerator 4 was wrongly recognized as Right and it should be labeled as N oRelation. In this chapter, a ≥ b is recognized correctly also (Figure 5.9). We present the the derived graph in Figure 5.9b. Then from the graph, we extract the time path and 6 random paths. In this example, all the nodes and edges of Figure 5.9b are included in the extracted 7 paths. After merging the results of 7 paths with the Equation 5.2, we can get the labeled graph illustrated as Figure 5.9c. The edge from stroke 0 to 1 is wrongly labeled as Sup. Next, we carry out the post process stage. The segments (symbols) are decided using connected component analysis: 3 symbols (a, ≥, b) in this expression. With regards to the relationship between a and ≥, we have 2 candidates Sup with the probability 0.604 and Right with the probability 0.986. With the strategy of choosing the label having the maximum accumulative probability, the relationship between a and ≥ is Right then. After post process, we construct a correct SLG provided in Figure 5.9d. The recognition result for 44 -4 4 is presented in Figure 5.10. From the handwritten expression (Figure 5.10a), we could derive a graph presented in (Figure 5.10b). Then, we extract paths from the graph, and label them, finally merge the labeled paths to built a labeled graph. Figure 5.10c provides the built SLG from which we can see several extraneous edges appear owing to multiple paths and in this sample they are all recognized correctly as N oRelation. We remove these N oRelation edges to have a intuitive judgment on the recognition result (Figure 5.10d).As can be seen, the Right relationship from the minus symbol to fraction bar is missed. This error comes from the graph representation stage where we find no edge from stroke 2 to 4. Both stroke 3 and 4 are located in R1 region of stroke 2, but stroke 3 is closer to stroke 2 than stroke 4. Thus, we miss the edge from stroke 2 to 4 at the graph representation stage, and naturally miss the Right relationship from the minus symbol to fraction bar in the built SLG. This error can be overcome by searching for a better graph model or some post process strategies regarding to the connected SLG. As discussed above, our solution presents competitive results on symbol recognition task and segmentation task, but not on relationship detection and recognition task. Compared to the work of Chapter 4, the solution in this chapter presents improvements on recall rate of 'Tree Rels.' but at the same time decreases on precision rate of 'Tree Rels.' Thus, at the expression level, the recognition rate remains the same level as the solution with single path. One of the intrinsic causes is that even though several paths from one expression is considered in this system, the BLSTM model processes each path separately which means the model only could access the contextual information in one path during training and recognition stages. Obviously it conflicts with the real case that human beings recognize the raw input using the entire contex- 92CHAPTER 5. MATHEMATICAL EXPRESSION RECOGNITION BY MERGING MULTIPLE PATHS tual information. In the coming chapter, we will search for a model which could take into account more contextual knowledge at one time instead of just the content limited in one single path. Discussion We recognize 2-D handwritten mathematical expressions by merging multiple 1-D labeled paths in this chapter. 
Given an expression, we propose an algorithm to generate an intermediate graph using both temporal and spatial information between strokes. Next from the derived graph, different types of paths are selected and later labeled with the strong sequence labeler-BLSTM. Finally, we merge these labeled paths to build a 2-D math expression. The proposal presents competitive results on symbol recognition task and segmentation task, promising results on relationship recognition task. Compared to the work of Chapter 4, the solution in this chapter presents improvements on recall rate of 'Tree Rels.' but at the same time decreases on precision rate of 'Tree Rels.' Thus, at the expression level, the recognition rate remains the same level as the solution with single path. Currently, even though several paths from one expression is considered in this system, in essential the BLSTM model deals with each path isolatedly. The classical BLSTM model could access information from past and future in a long range but the information outside the single sequence is of course not accessible to it. In fact, it conflicts with the real case that human beings recognize the raw input using the entire contextual information. As shown in our experiments, it is laborious to solve a 2-D problem with a chain-structured model. Thus, we would like to develop a tree-structured neural network model which could handle directly the structure not limited to a chain. With the new neural network model, we could take into account more contextual information in a tree instead of a single 1D path. merging multiple trees In Chapter 5, we concluded that it is hard to use the classical chain-structured BLSTM to solve the problem of recognizing mathematical expression which is a tree structure. In this chapter, we extend the chain-structured BLSTM to tree structure topology and apply this new network model for online math expression recognition. Firstly, we provide a short overview with regards to the Non-chain-structured LSTM. Then, we propose in Section 6.2 a new neural network model named tree-based BLSTM which seems to be appropriate for this recognition problem. Section 6.3 globally introduces the framework of mathematical expression recognition system based on tree-based BLSTM. Hereafter, we focus on the specific techniques involved in this system in Section 6.4. Finally, experiments and discussion parts are covered in Section 6.5 and Section 6.7 respectively. Overview: Non-chain-structured LSTM A limitation of the classical LSTM network topology is that they only allow for sequential information propagation (as shown in Figure 6.1a) since the cell contains a single recurrent connection (modulated by a single forget gate) to its own previous value. Recently, research on LSTM has been beyond sequential structure. The one-dimensional LSTM was extended to n dimensions by using n recurrent connections (one for each of the cell's previous states along every dimension) with n forget gates such that the new model could take into account the context from n sources. It is named Multidimensional LSTM (MDLSTM) dedicated to the graph structure of an n-dimensional grid such as images [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. 
MDLSTM model exhibits great performances on offline handwriting recognition tasks where the input is an image [Graves and Schmidhuber, 2009[START_REF] Messina | Segmentation-free handwritten chinese text recognition with lstm-rnn[END_REF][START_REF] Bluche | Scan, attend and read: End-to-end handwritten paragraph recognition with mdlstm attention[END_REF], Maalej and Kherallah, 2016, Maalej et al., 2016]. In [START_REF] Tai | Improved semantic representations from tree-structured long short-term memory networks[END_REF], the basic LSTM architecture was extend to tree structures for improving semantic representations. Two extensions, the Child-sum Tree-LSTM and the N-ary Tree-LSTM, were proposed to allow for richer network topology where each unit is able to incorporate information from multiple child units (Figure 6.1b). Since the Child-sum Tree-LSTM unit conditions its components on the sum of child hidden states, it is well-suited for trees with high branching factor or whose children are unordered. The N-ary Tree-LSTM can be used on tree structures where the branching factor is at most N and where children are ordered. In parallel to the work in [START_REF] Tai | Improved semantic representations from tree-structured long short-term memory networks[END_REF], [START_REF] Zhu | Long short-term memory over recursive structures[END_REF] explored the similar idea and proposed S-LSTM model which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. Furthermore, the DAG-structured LSTM was proposed for semantic compositionality in [START_REF] Zhu | Dag-structured long short-term memory for semantic compositionality[END_REF], possessing the ability to incorporate external semantics including non-compositional or holistically learned semantics. The proposed Tree-based BLSTM This section will be focused on Tree-based BLSTM. Different with the tree structures depicted in [START_REF] Tai | Improved semantic representations from tree-structured long short-term memory networks[END_REF][START_REF] Zhu | Long short-term memory over recursive structures[END_REF], we devote it to the kind of structures presented in Figure 6.2 where most nodes have only one next node. In fact, this kind of structure could be regarded as several chains with shared or overlapped segments. Traditional BLSTM process a sequence both from left to right and from right to left in order to access information coming from two directions. In our case, the tree will be processed from root to leaves and from leaves to root in order to visit all the surround context. From root to leaves. There are 2 special nodes (red) having more than one next node in Figure 6.2. We name them Mul-next node. The hidden states of Mul-next node will be propagated to its next nodes equally. The forward propagation of a Mul-next node is the same as for a chain LSTM node; with regard to the error propagation, the errors coming from all the next nodes will be summed up and propagated to Mul-next node. From leaves to root. Suppose all the arrows in Figure 6.2 are reversed, we have the new structure which is actually beyond a tree in Figure 6.3. The 2 red nodes are still special cases because they have more than one previous node. We call them Mul-previous nodes. The information from all the previous nodes will be summed up and propagated to the Mul-previous node; the error propagation is processed like for a typical LSTM node. 
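Before the exact gate equations below, the way information is combined at these special nodes can be sketched as follows; the names are hypothetical and the snippet only illustrates the summing of contributions, not the full LSTM update.

```python
import numpy as np

# Minimal sketch (hypothetical names): `b`, `s` and `delta` are dictionaries mapping
# a node id to its hidden activation, cell state and error vector respectively;
# each node object is assumed to expose the ids of its previous and next nodes.
def summed_previous_states(node, b, s, dim):
    """Mul-previous node: sum hidden activations and cell states over Pr(n)."""
    b_in = sum((b[p] for p in node.previous), np.zeros(dim))
    s_in = sum((s[p] for p in node.previous), np.zeros(dim))
    return b_in, s_in

def summed_next_errors(node, delta, dim):
    """Mul-next node: sum the errors arriving from all the next nodes Ne(n)."""
    return sum((delta[e] for e in node.next), np.zeros(dim))
```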
We give the specific formulas below for the forward propagation of a Mul-previous node and the error back-propagation of a Mul-next node. The same notations as in Chapter 3 and [Graves et al., 2012] are used here. The network input to unit $i$ at node $n$ is denoted $a^n_i$ and the activation of unit $i$ at node $n$ is $b^n_i$. $w_{ij}$ is the weight of the connection from unit $i$ to unit $j$. Considering a network with $I$ input units, $K$ output units and $H$ hidden units, the subscripts $\varsigma$, $\phi$, $\omega$ refer to the input, forget and output gates respectively. The subscript $c$ refers to one of the $C$ cells, so that the peephole weights from cell $c$ to the input, forget and output gates are denoted $w_{c\varsigma}$, $w_{c\phi}$, $w_{c\omega}$. $s^n_c$ is the state of cell $c$ at node $n$. $f$ is the activation function of the gates, and $g$ and $h$ are respectively the cell input and output activation functions. $L$ is the loss function used for training. We only give the equations for a single memory block; for multiple blocks the calculations are simply repeated for each block. Let $\mathrm{Pr}(n)$ denote the set of previous nodes of node $n$ and $\mathrm{Ne}(n)$ the set of next nodes. Compared to the classical LSTM formulas recalled in Chapter 3, the parts that differ are the sums over $\mathrm{Pr}(n)$ and $\mathrm{Ne}(n)$.
The forward propagation of a Mul-previous node
Input gates
$$a^n_\varsigma = \sum_{i=1}^{I} w_{i\varsigma} x^n_i + \sum_{h=1}^{H} w_{h\varsigma} \sum_{p=1}^{|\mathrm{Pr}(n)|} b^p_h + \sum_{c=1}^{C} w_{c\varsigma} \sum_{p=1}^{|\mathrm{Pr}(n)|} s^p_c \quad (6.1)$$
$$b^n_\varsigma = f(a^n_\varsigma) \quad (6.2)$$
Forget gates
$$a^n_\phi = \sum_{i=1}^{I} w_{i\phi} x^n_i + \sum_{h=1}^{H} w_{h\phi} \sum_{p=1}^{|\mathrm{Pr}(n)|} b^p_h + \sum_{c=1}^{C} w_{c\phi} \sum_{p=1}^{|\mathrm{Pr}(n)|} s^p_c \quad (6.3)$$
$$b^n_\phi = f(a^n_\phi) \quad (6.4)$$
Cells
$$a^n_c = \sum_{i=1}^{I} w_{ic} x^n_i + \sum_{h=1}^{H} w_{hc} \sum_{p=1}^{|\mathrm{Pr}(n)|} b^p_h \quad (6.5)$$
$$s^n_c = b^n_\phi \sum_{p=1}^{|\mathrm{Pr}(n)|} s^p_c + b^n_\varsigma\, g(a^n_c) \quad (6.6)$$
Output gates
$$a^n_\omega = \sum_{i=1}^{I} w_{i\omega} x^n_i + \sum_{h=1}^{H} w_{h\omega} \sum_{p=1}^{|\mathrm{Pr}(n)|} b^p_h + \sum_{c=1}^{C} w_{c\omega} s^n_c \quad (6.7)$$
$$b^n_\omega = f(a^n_\omega) \quad (6.8)$$
Cell outputs
$$b^n_c = b^n_\omega\, h(s^n_c) \quad (6.9)$$
The error back-propagation of a Mul-next node
We define
$$\epsilon^n_c = \frac{\partial L}{\partial b^n_c}, \qquad \epsilon^n_s = \frac{\partial L}{\partial s^n_c}, \qquad \delta^n_i = \frac{\partial L}{\partial a^n_i} \quad (6.10)$$
Then
$$\epsilon^n_c = \sum_{k=1}^{K} w_{ck}\, \delta^n_k + \sum_{g=1}^{G} w_{cg} \sum_{e=1}^{|\mathrm{Ne}(n)|} \delta^e_g \quad (6.11)$$
Output gates
$$\delta^n_\omega = f'(a^n_\omega) \sum_{c=1}^{C} h(s^n_c)\, \epsilon^n_c \quad (6.12)$$
States
$$\epsilon^n_s = b^n_\omega\, h'(s^n_c)\, \epsilon^n_c + \sum_{e=1}^{|\mathrm{Ne}(n)|} b^e_\phi \sum_{e=1}^{|\mathrm{Ne}(n)|} \epsilon^e_s + w_{c\varsigma} \sum_{e=1}^{|\mathrm{Ne}(n)|} \delta^e_\varsigma + w_{c\phi} \sum_{e=1}^{|\mathrm{Ne}(n)|} \delta^e_\phi + w_{c\omega}\, \delta^n_\omega \quad (6.13)$$
Cells
$$\delta^n_c = b^n_\varsigma\, g'(a^n_c)\, \epsilon^n_s \quad (6.14)$$
Forget gates
$$\delta^n_\phi = f'(a^n_\phi) \sum_{c=1}^{C} \sum_{p=1}^{|\mathrm{Pr}(n)|} s^p_c\, \epsilon^n_s \quad (6.15)$$
Input gates
$$\delta^n_\varsigma = f'(a^n_\varsigma) \sum_{c=1}^{C} g(a^n_c)\, \epsilon^n_s \quad (6.16)$$
The framework
We now apply the proposed tree-based BLSTM model to online mathematical expression recognition. This section provides a general view of the recognition system (Figure 6.4). Similar to the framework proposed in Chapter 5, we first derive an intermediate graph from the raw input. Then, instead of 1-D paths, we derive trees from the graph, which are labeled by the tree-based BLSTM model in a next step. In the end, these labeled trees are merged to build a stroke label graph.
Figure 6.4 - Illustration of the proposal that uses tree-based BLSTM to interpret 2-D handwritten ME (pipeline: input → intermediate graph G → derive trees from graph G → label trees with tree-based BLSTM → merge labeled trees → output).
Tree-based BLSTM for online mathematical expression recognition
In this section, each step illustrated in Figure 6.4 will be elaborated. The input data is available as a sequence of strokes S from which we would like to obtain the final LG graph describing unambiguously the ME. Let $S = (s_0, ..., s_{n-1})$, where we assume $s_i$ has been written before $s_j$ for $i < j$.
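Before detailing each step, the pipeline of Figure 6.4 can be summarized by the following structural sketch; the function names are hypothetical, and each step function is only a placeholder for the subsection that describes it.

```python
# Structural sketch of the recognition pipeline of Figure 6.4 (hypothetical names).
# derive_graph, derive_trees, label_tree and merge_trees stand for the steps
# described in the following subsections; `network` is a trained tree-based BLSTM.
def recognize_expression(strokes, network, derive_graph, derive_trees, label_tree, merge_trees):
    graph = derive_graph(strokes)                 # temporal + spatial edges
    trees = derive_trees(graph)                   # Tree-Time, Tree-Left-R1, Tree-0-R1, ...
    labeled = [label_tree(network, tree) for tree in trees]
    return merge_trees(labeled)                   # post-processed stroke label graph (SLG)
```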
Derivation of an intermediate graph G In a first step, we will derive an intermediate graph G where each node is a stroke and edges are added according to temporal or spatial relationships between strokes. In fact, we already introduced a graph representation model and evaluated it in Chapter 5. The evaluation results showed that around 6.5% relationships are missed compared to the ground truth graph. In this Section, we are aiming to improve the graph model to reduce the quantity of missed relationships. Similarly, we provide several definitions related to the graph building first. Definition 6.1. A stroke s i is considered visible from stroke s j if the straight line between between their closest points does not cross any other stroke s k . For example, s 1 and s 3 can see each other because the straignt line between their closest points does not cross stroke s 2 or s 4 as shown in Figure 6.5. This definition is the same as the one used in [START_REF] Muñoz | Mathematical Expression Recognition based on Probabilistic Grammars[END_REF]. Compared to Definition 5.2 where we replaced the stroke with its bounding box to reduce computation, the current one is more accurate. Figure 6.5 -Illustration of visibility between a pair of strokes. s 1 and s 3 are visible to each other. Definition 6.2. For each stroke s i , we define 5 regions (R1, R2, R3, R4, R5 shown in Figure 6.6) of it. The center of the bounding box of stroke s i is taken as the reference point (0, 0). R1 R2 R3 R4 R5 (0,0) Figure 6.6 -Five regions for a stroke s i . Point (0, 0) is the center of bounding box of s i . The angle range of R1 region is [-π 8 , π 8 ]; R2 : [ π 8 , 3 * π 8 ]; R3 : [ 3 * π 8 , 7 * π 8 ]; R4 : [-7 * π 8 , -3 * π 8 ]; R5 : [-3 * π 8 , -π 8 ]. The purpose of defining these 5 regions is to look for the Right, Supscript, Above, Below and Subscript relationships between strokes. If the center of bounding box of s j is located in one of five regions of stroke s i , for example R1 region, we say s j is in the R1 direction of s i . In Definition 5.3, the angle of each region is π 4 . Here, a wider searching range is defined for both R3 and R4 regions. That is because in some expressions like a+b+c d+e+f , a larger searching range means more possibilities to catch the Above relationship from '-' to 'a' and the Below relationship from '-' to 'd'. Definition 6.3. Let G be a directed graph in which each node corresponds to a stroke and edges are added according to the following criteria in succession. We defined for each stroke s i (i from 0 to n-2): • the set of crossing future strokes S cro (i) = {s cro1 , s cro2 , ...} from {s i+1 , ..., s n-1 }. For stroke s i (i from 0 to n-1): • the set S vis (i) of the visible leftmost (considering the center of bounding box only) strokes in five directions respectively. Edges from s i to the S cro (i) S vis (i)will be added to G. Then, we check if the edge from s i to s i+1 ( i from 0 to n-2) exists in G. If not, this edge is added to G to ensure that the path covering the sequence of strokes in the time order is included in G. Each edge is tagged depending on the specific criterion we used to find it before. Consequently, we have at most 7 types of edges (Crossing, R1, R2, R3, R4, R5 and T ime) in the graph. For those edges from s i to the S cro (i) ∩ S vis (i), the type Crossing is assigned. Figure 6.7 illustrates the process of deriving graph from raw input step by step using the example of f a = b f . 
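As an illustration of Definition 6.2, a minimal sketch of the region test is given below (hypothetical helper name; the visibility test of Definition 6.1 and the edge-adding loop of Definition 6.3 are assumed to be implemented elsewhere). The angle convention assumes y increasing upwards; with ink coordinates where y grows downwards, the upper and lower regions are mirrored.

```python
import math

def region_of(center_i, center_j):
    """Return which of the five regions of stroke s_i contains the bounding-box
    center of stroke s_j, or None if the angle falls outside the five regions."""
    dx, dy = center_j[0] - center_i[0], center_j[1] - center_i[1]
    a = math.atan2(dy, dx)          # angle of s_j as seen from the center of s_i
    pi = math.pi
    if -pi / 8 <= a <= pi / 8:
        return "R1"                 # right
    if pi / 8 < a <= 3 * pi / 8:
        return "R2"                 # upper right
    if 3 * pi / 8 < a <= 7 * pi / 8:
        return "R3"                 # above (widened range)
    if -7 * pi / 8 <= a < -3 * pi / 8:
        return "R4"                 # below (widened range)
    if -3 * pi / 8 <= a < -pi / 8:
        return "R5"                 # lower right
    return None                     # behind s_i: no spatial edge from this rule
```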
First according to 10 strokes in the raw input (Figure 6.7a), we create 10 nodes, one for each stroke (Figure 6.7b); for each stroke, look for its crossing stroke or strokes and add the corresponding edges labeled with Crossing between nodes (Figure 6.7c); proceeding to next step, for each stroke, look for its the visible rightmost strokes in five directions respectively and add the corresponding edges labeled as one of R1, R2, R3, R4, R5 between nodes if the edges do not exist in the graph (Figure 6.7d); finally, check if the edge from s i to s i+1 ( i from 0 to n -2) exists in G and if not, add this edge to G labeled as T ime to ensure that the path covering the sequence of strokes in the time order is included in G (Figure 6.7e). Graph evaluation With the same method adopted in Section 6.4.2 and 4.3.2, we evaluate the new proposed graph representation model. Specifically, provided the ground truth labels of the nodes and edges in the graph, we would like to see evaluation results at symbol and expression levels. We reintroduce the evaluation criteria repeatably here as a kind reminder to readers: symbol segmentation ('Segments'), refers to a symbol that is correctly segmented whatever the label; symbol segmentation and recognition ('Seg+Class'), refers to a symbol that is segmented and classified correctly; spatial relationship classification ('Tree Rels.'), a correct spatial relationship between two symbols requires that both symbols are correctly segmented and with the right relationship label. Table 6.1 and Table 6.2 present the evaluation results on CROHME 2014 test set (provided the ground truth labels) at the symbol and expression level respectively. We re-show the evaluate results of time graph and the proposed graph in Chapter 5 as a reference to the new graph. Compared to the graph model proposed in Chapter 5, the new graph model stays at the same level with regards to the recall rate and precision rate on symbol segmentation and recognition task. When it comes to relationship classification task, the new graph presents a small improvement, about 0.5%. The new graph catch 93.99% relationships. Owing to the missed 6.00% relationships, around 30% expressions are not correctly recognized as presented in Table 6.2. Heretofore, we derive a graph from the raw input considering the the temporal and spatial information. Figure 6.8 illustrates the ME f a = b f written with 10 strokes and the derived graph G. We would like to label nodes and edges of G correctly in order to build a SLG finally. The solution proposed in this chapter is to derive trees from G, then recognize the trees using the tree-based BLSTM model. There exists different strategies to derive trees from G. In any of the cases, a start node should be selected first. We take the leftmost (considering the leftmost point in a stroke) stroke as the starter. For the example illustrated in Figure 6.8a, stroke s2 is the starter. From the starting node, we traverse the graph with the Depth-First Search algorithm. Each node should be visited only once. When there are more than one edge outputting from one node, the visiting order will follow (Crossing, R1, R3, R4, R2, R5, T ime). With this strategy, a tree is derived to which we give the name Tree-Left-R1 which is dedicated to catch R1 relationship. If the visiting order follow (Crossing, R2, R1, R3, R4, R5, T ime), another tree named Tree-Left-R2 would be derived to focus more on R2 relationship. 
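The traversal just described can be sketched as a priority-driven depth-first search; the names below are hypothetical and the snippet is not the exact implementation. Changing the priority list yields the other trees: promoting R2 right after Crossing gives Tree-Left-R2, and the variants for R3, R4 and R5 follow in the same way.

```python
# Sketch (hypothetical names) of the priority-driven depth-first traversal used to
# derive a tree from the graph G; each node is visited only once and outgoing
# edges are explored in the order given by `priority`.
def derive_tree(graph, start, priority=("Crossing", "R1", "R3", "R4", "R2", "R5", "Time")):
    """graph: dict node -> list of (edge_type, next_node); returns the tree edges."""
    rank = {t: i for i, t in enumerate(priority)}
    visited, tree_edges = {start}, []

    def visit(node):
        for edge_type, nxt in sorted(graph.get(node, []), key=lambda e: rank[e[0]]):
            if nxt not in visited:            # each node is visited only once
                visited.add(nxt)
                tree_edges.append((node, nxt, edge_type))
                visit(nxt)

    visit(start)
    return tree_edges
```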
Likewise, tree Tree-Left-R3, Tree-Left-R4, Tree-Left-R5 are derived respectively to emphasize R3, R4, R5 relationships. The Crossing edge is always on the top of list, and it is because we assume that a pair of crossing strokes belong to a single symbol. In Figure 6.8b, Tree-Left-R1 is depicted in red with the root in s2. Note that in this case, all the nodes are accessible from the start node s2. However, as G is a directed graph, some nodes are not reachable from one starter in some cases. Therefore, we consider deriving trees from different starters. Besides the leftmost stroke, it is interesting to derive trees from the first input stroke s0 since sometimes users start writing an expression from its root. Note that in some cases, the leftmost stroke and stroke s0 could be the same one. We replace the left-most stroke with stroke s0 and keep the same strategy to derive the trees. These new trees are named as Tree-0-R1, Tree-0-R2, Tree-0-R3, Tree-0-R4, Tree-0-R5 respectively. Finally, if s0 is taken as the starting point and time order is considered first, a special tree is obtained which we call Tree-Time. Tree-Time is proposed with the aim of having a good cover of segmentation edges since users usually write a multi-stroke symbol continuously. As a matter of fact, it is a chain structure. Tree-Time is defined by s0 → s1 → s2 → s3 . . . → s9 for the expression in Figure 6.8. Table 6.3 offers an clear look at different types of the derived trees from the graph. The solution is to go from the previous trees defined at the stroke level down to a tree at the point level, points being the raw information that are recorded along the pen trajectory in the online signal. To be free of the interference of the different writing speed, an additional re-sampling process should be carried out with a fixed spatial step. In the considered trees, nodes, which represent strokes, are re-sampled with a fixed spatial step, and the same holds for edges by considering the straight lines in the air between the last point and the first point of a pair of strokes that are connected in the tree. This is illustrated in Figure 6.9, where the re-sampled points are displayed inside the nodes (on-paper points for node) and Figure 6.9 -A re-sampled tree. The small arrows between points provide the directions of information flows. With regard to the sequence of points inside one node or edge, most of small arrows are omitted. above the edges (in-air points for edge). Since this tree will be processed by the BLSTM network, we need for the training stage to assign it a corresponding ground-truth. We derive it from the SLG by using the corresponding symbol label of the strokes (nodes) for the on-paper points and the corresponding symbol or relationship label for the in-air points (edges) when this edge exists in the SLG. When an edge of the tree does not exist in the SLG, the label N oRelation noted '_' will be used. In this way, an edge in the graph which was originally denoted with a C, Ri (i = 1...5) or T relation will be assigned with one of the 7 labels: (Right, Above, Below, Inside, Superscript, Subscript, _) or a symbol label when the two strokes are belonging to the same symbol. Totally, for the ground truth, we have 108 classes(101 symbol classes + 6 relationships + N oRelation). The number of re-sampling points depends on the size of expression. For each node or edge, we resample with 10 × l/d points. 
Here, l refers to the length of a visible stroke or a straight line connecting 2 strokes and d refers to the average diagonal of the bounding boxes of all the strokes in an expression. Subsequently, for every point p(x, y) we compute 5 features [sinθ, cosθ, sinφ, cosφ, PenUD] which are already described in Section 4.2.2. Training process Figure 6.10 illustrates a tree-based BLSTM network with one hidden level. To provide a clear view, we only draw the full network on a short sequence (red) instead of a whole tree. Globally, the data structure Figure 6.10 -A tree-based BLSTM network with one hidden level. We only draw the full connection on one short sequence (red) for a clear view. we are dealing with is a tree; locally, it consists of several short sequences. For example, the tree presented in Figure 6.10 has 6 short sequences one of which is highlighted with red color. The system processes each node or edge (which is a short sequence in fact) separately but following the order with which the correct propagation of activation or errors could be ensured. The training process of a short sequence (the red one in Figure 6.10 for example) is similar to the classical BLSTM model except that some outside information should be taken into account. In the classical BLSTM case, the incoming activation or error of a short sequence is initialized as 0. Forward pass. Here, when proceeding with the forward pass from the input layer to the output layer, for the hidden layer (from root to leaves), we need to consider the coming information from the root direction and for the hidden layer (from leaves to root), we need to consider the coming information from the leaves direction. Obviously, no matter which kind of order for processing sequence we are following, it is not possible to have the information from both directions in one run. Thus another stage which we call precomputation is required. The pre-computation stage has two runs: (1) From the input layer to the hidden layer (from root to leaves), we process the short sequence consisting of the root point first and then the next sequences (Figure 6.11a). In this run, each sequence in the tree stores the activation from the root direction. (2) From the input layer to the hidden layer (from leaves to root), we process the short sequences consisting of the leaf point first and then the next sequences (Figure 6.11b). In this run, each sequence in the tree sums and stores the activation from the leaf direction. After pre-computation stage, the information from both directions are available to each sequence thus the forward pass from input to output is straightforward. Error propagation. The backward pass of tree-based BLSTM network has 2 parallel propagation paths: (1) one is from the output layer to hidden layer (from root to leaves), then to the input layer; (2) the other one is from the output layer to hidden layer (from leaves to root), then to the input layer. As these 2 paths of propagation are independent, no pre-stage is needed here. For propagation (1), we process the short sequences consisting of the leaf point first and then the next sequences. For propagation (2), we process the short sequence consisting of the root point first and then the next sequences. Note that when there are several hidden levels in the network, a pre-stage is required also for error propagation. Loss function. 
It is known that BLSTM with a CTC stage performs better when a "blank" label is introduced during training [START_REF] Bluche | Framewise and ctc training of neural networks for handwriting recognition[END_REF], so that decisions need to be committed only at some points in the input sequence. One characteristic of CTC is that it does not provide the alignment between the input and the output, only the overall sequence of labels. As we need to assign each stroke a label to build a SLG, a relatively precise alignment between input and output is preferred. A local CTC algorithm was proposed in Chapter 4 to constrain each label to its corresponding stroke while still taking advantage of the "blank" label, and it was verified experimentally to outperform frame-wise training. We succeeded in applying local CTC to a global sequence labeling task in Chapter 4; in this chapter, we use the local CTC training method for a tree labeling task. The theory behind the two types of tasks remains the same. The difference is that in Chapter 4 a global sequence consisting of several strokes is the entity being processed, whereas here we treat each short sequence (stroke) as a processing unit. Inside each short sequence, i.e. each node or edge, a local CTC loss function is easily computed from the output probabilities related to this short sequence. The total CTC loss function of a tree is defined as the sum of the local CTC loss functions over all the short sequences in this tree. Since each short sequence has one label, the possible labels of the points in one short sequence are shown in Figure 6.12.
Figure 6.12 - The possible labels of points in one short sequence.
The equations provided in Section 4.2.3 are for a global sequence of one or more strokes. In the remainder of this section, the equations we present are related to a short sequence (a single stroke). Given the tree input represented as $X$ consisting of $N$ short sequences, each short sequence is denoted $X_i$, $i = 1, ..., N$, with ground truth label $l_i$ and length $T_i$. $l'_i$ denotes the label sequence with blanks added to the beginning and the end of $l_i$, i.e. $l'_i = (blank, l_i, blank)$, of length 3. The forward variable $\alpha_i(t, u)$ denotes the summed probability of all length-$t$ paths that are mapped by $\mathcal{F}$ onto the length-$\lfloor u/2 \rfloor$ prefix of $l_i$, where $u$ ranges from 1 to 3 and $t$ from 1 to $T_i$. Given the above notations, the probability of $l_i$ can be expressed as the sum of the forward variables with and without the final blank at point $T_i$:
$$p(l_i | X_i) = \alpha_i(T_i, 3) + \alpha_i(T_i, 2) \quad (6.17)$$
$\alpha_i(t, u)$ can be computed recursively as follows:
$$\alpha_i(1, 1) = y^1_{blank} \quad (6.18)$$
$$\alpha_i(1, 2) = y^1_{l_i} \quad (6.19)$$
$$\alpha_i(1, 3) = 0 \quad (6.20)$$
$$\alpha_i(t, u) = y^t_{(l'_i)_u} \sum_{j=u-1}^{u} \alpha_i(t-1, j) \quad (6.21)$$
Note that
$$\alpha_i(T_i, 1) = 0 \quad (6.22)$$
$$\alpha_i(t, 0) = 0, \ \forall t \quad (6.23)$$
Figure 6.13 demonstrates the local CTC forward-backward algorithm limited to one stroke. Similarly, the backward variable $\beta_i(t, u)$ denotes the summed probability of all paths starting at $t + 1$ that complete $l_i$ when appended to any path contributing to $\alpha_i(t, u)$.
The formulas for the initialization and recursion of the backward variable are as follows:
$$\beta_i(T_i, 3) = 1 \quad (6.24)$$
$$\beta_i(T_i, 2) = 1 \quad (6.25)$$
$$\beta_i(T_i, 1) = 0 \quad (6.26)$$
$$\beta_i(t, u) = \sum_{j=u}^{u+1} \beta_i(t+1, j)\, y^{t+1}_{(l'_i)_j} \quad (6.27)$$
Note that
$$\beta_i(1, 3) = 0 \quad (6.28)$$
$$\beta_i(t, 4) = 0, \ \forall t \quad (6.29)$$
With the local CTC forward-backward algorithm, we can compute $\alpha_i(t, u)$ and $\beta_i(t, u)$ for each point $t$ and each allowed position $u$ at point $t$. The CTC loss function $L(X_i, l_i)$ is defined as the negative log probability of correctly labeling the short sequence $X_i$:
$$L(X_i, l_i) = -\ln p(l_i | X_i) \quad (6.30)$$
According to Equation 3.48, we can rewrite $L(X_i, l_i)$ as:
$$L(X_i, l_i) = -\ln \sum_{u=1}^{3} \alpha_i(t, u)\, \beta_i(t, u) \quad (6.31)$$
Then the errors are back-propagated to the output layer (Equation 3.49), to the hidden layer (Equation 3.50), and finally through the entire network. The weights of the network are updated after each entire tree structure has been processed. The CTC loss function of an entire tree structure is defined as the sum of the losses over all the short sequences in this tree:
$$L(X, l) = \sum_{i=1}^{N} L(X_i, l_i) \quad (6.32)$$
This quantity is used for evaluating the performance of the network, and can therefore serve as the metric to decide when to stop the training process. Recognition process As mentioned, the system treats each node or edge as a short sequence. A simple decoding method is adopted here, as in the previous chapter: we choose for each node or edge the label which has the highest cumulative probability over the short sequence. Suppose that $p_{ij}$ is the probability of outputting label $i$ at point $j$. The probability of outputting label $i$ is computed as $P_i = \sum_{j=1}^{s} p_{ij}$, where $s$ is the number of points in the short sequence. The label with the highest probability is assigned to this short sequence. Post process The several trees derived from one expression are merged to build a SLG after labeling. Besides the merging strategy, in this section we consider several structural constraints which were not used when building the SLG in the previous chapter. Generally, 5 steps are included in the post process: (1) Merge trees. Each node or edge belongs to at least one tree, and possibly to several trees. Hence, several recognition results can be available for a single node or edge. We deal with this in an intuitive and simple way, choosing the result with the highest probability. (2) Symbol segmentation. We look for the symbols using connected component analysis: a connected component where nodes and edges have the same label is a symbol. (3) Relationships. We solve two possible kinds of conflicts in this step. (a) Between two symbols, edges may exist in both directions. In each direction, we choose the label having the maximum probability; if the labels in the two directions are both one of (Right, Above, Below, Inside, Superscript, Subscript), as illustrated in Figure 6.14a, we again keep the one having the larger probability. (b) Another type of conflict is the case illustrated in Figure 6.14b where one symbol has two (or more) input relationships (among the 6 relationships). Observing the structure of SRTs, there is at most one input relationship for each node (symbol) in a SRT. Therefore, when one symbol has two (or more) input relationships, we keep the one having the maximum probability. (4) Make connected SRT.
As SRT should be a connected tree (this is a structural constraint, not a language specific constraint), there exist one root node and one or multiple leaf nodes inside each SRT. Each node has only one input edge, except the root node. After performing the first three steps, we still have the probability to output a SRT consisting several root nodes, in other words, being a forest instead of a tree. To address this type of error, we take a hard decision but quite simple: for each root r (except the one inputted earliest), add a Right edge to r from the leaf being the one nearest to r considering input time. We choose Right since it appears most in math expressions based on the statistics. (5) Add edges. According to the rule that all strokes in a symbol have the same input and output edges and that double-direction edges represent the segments, some missing edges can be completed automatically. Setup. We constructed the tree-based BLSTM recognition system with the RNNLIB library 1 . As described in Section 3.3.4, DBLSTM [START_REF] Graves | Hybrid speech recognition with deep bidirectional lstm[END_REF] can be created by stacking multiple BLSTM layers on top of each other in order to get higher level representation of the input data. Several types of configurations are included in this chapter: Networks (i), (ii), (iii) and (iv). The first one consists of one bidirectional hidden level (two opposite LSTM layers of 100 cells). This configuration has obtained good results in both handwritten text recognition [Graves et al., 2009] and handwritten math symbol classification [Álvaro et al., 2013, 2014a]. Network (ii) is a deep structure with two bidirectional hidden levels, each containing two opposite LSTM layers of 100 cells. Network (iii) and Network (iv) have 3 bidirectional hidden levels and 4 respectively. The setup about the input layer and output layer remains the same. The size of the input layer is 5 (5 features); the size of the output layer is 109 (101 symbol classes + 6 relationships + N oRelation + blank). Evaluation. With the Label Graph Evaluation library (LgEval) [START_REF] Mouchère | Icfhr 2014 competition on recognition of on-line handwritten mathematical expressions (crohme 2014)[END_REF], the recognition results can be evaluated on symbol level and on expression level. We introduce several evaluation criteria: symbol segmentation ('Segments'), refers to a symbol that is correctly segmented whatever the label is; symbol segmentation and recognition ('Seg+Class'), refers to a symbol that is segmented and classified correctly; spatial relationship classification ('Tree Rels.'), a correct spatial relationship between two symbols requires that both symbols are correctly segmented and with the correct relationship label. Experiment 1 In this experiment, we would like to see the effects of the depth of the network on the recognition results. Then according to the results, we choose the proper network configurations for the task. For each expression, only Tree-Time is derived to train the classifier. The evaluation results on symbol level and global expression level are presented in Table 6.4 and 6.5 respectively. From the tables, we can conclude that as the network turns to be deeper, the recognition rate first increases and then stays at a relatively stable level. There is a large increase from Network (i) to Network (ii), a slight increase from Network (ii) to Network (iii) and no improvement from Network (iii) to Network (iv). 
These results show that 3 bidirectional hidden levels in the network is a proper option for the task in this thesis. A network with depth larger than 3 brings no improvement but higher computational complexity. Thus, for the coming experiments we will not take into account Network (iv) any more. Experiment 2 In this section, we carry out the experiments by merging several trees. As a first try, we derive only 3 trees , Tree-Time, Tree-Left-R1 and Tree-0-R1 for each expression to train classifiers separately. With regards to each tree, we consider 3 network configurations, being Network (i), Network (ii), Network (iii). Thus, we have 9 classifiers totally in this section. After training, we use these 9 classifiers to label the relevant trees and finally merge them to build a valid SLG. We merge the 3 trees labeled by the corresponding 3 classifiers which have the same network configuration to obtain the systems (i, Merge3 ), (ii, Merge3 ), (iii, Merge3 ). Then we merge the 3 trees labeled by all these 9 classifiers to obtain the system Merge9. The evaluation results on symbol level and global expression level are presented in Table 6.6 and 6.7 respectively. We give both the individual tree recognition results and the merging results in each table. Tree-Time covers all the strokes of the input expression but can miss some relational edges between strokes; Tree-Left-R1 and Tree-0-R1 could catch some additional edges which are not covered by Tree-Time. The experiment results also verified this tendency. Compared to (iii, Tree-Time), the symbol segmentation and classification results of (iii, Merge3 ) stay at almost the same level while the recall rate of relationship classification is greatly improved (about 12%). The different recognition results of network (ii) are systematically increased when compared to (i) as the deep structure could get higher level representations of the input data. The performance of network (iii) is moderately improved when compared to (ii), just as same as the case in Experiment 1. When we consider merging all these 9 classifiers, we also get a slight improvement as shown by Merge 9. We compare the result of Merge 9 to the systems in CROHME 2014. With regard to the symbol classification and recognition rates, our system performs better than the second-ranked system in CROHME 2014. For relationship classification rate, our system reaches the level between the second-ranked and the third-ranked systems in CROHME 2014. The global expression recognition rate is 29.91%, ranking third in andMerge11 ). As can be seen, all trees have the similar recognition results except Tree-Time. And more trees do bring some effects on relationship classification task. Compared to (ii, Merge3 ), the results of relationship classification are slightly improved around 1% while symbol segmentation and recognition results are slightly reduced around 0.5%. Finally, at expression level, we see no significant changes as shown in Table 6.11. Error analysis In this section, we make a deep error analysis of the recognition result of (Merge 9 ) to better understand the system and to explore the directions for improving recognition rate in future. The Label Graph Evaluation library (LgEval) [START_REF] Mouchère | Icfhr 2014 competition on recognition of on-line handwritten mathematical expressions (crohme 2014)[END_REF]] evaluate the recognition system by comparing the output SLG of each expression with its ground truth SLG. 
Thus, node label confusion matrix and edge label confusion matrix are available to us with the library. Based on the two confusion matrices, we analyze the errors specifically below. Node label In table 6.12, we list the types of SLG node label error which has a high frequency on CROHME 2014 test set recognized by (Merge 9 ) system. The first column gives the outputted node labels by the classifier; the second column provide the ground truth node labels; the last column records the corresponding no. of occurrences. As can be seen from the table, the most frequent error (x → X, 46) belongs to the type of the lowercase-uppercase error. Moreover, (p → P , 24), (c → C, 16), (X → x, 16) and (y → Y , 14) also belong to the same type of lowercase-uppercase error. Another type of error which happens quite often in our experiment is the similar-look error, such as (x → ×, 26), (× → x, 10), (z → 2, 10), (q → 9, 10) and so on. Theoretically, two main types of node label error, being the lowercase-uppercase error and the similar-look error, could be eased when more training data is introduced. Thus, one of future work could be trying to collect new data as much as possible. Edge label Table 6.13 provides the edge (SLG) label errors of CROHME 2014 test set using (Merge 9 ). As can be seen, a large amount of errors come from the last row which represents the missing edges. 1858 edges with label Right are missed in our system, along with 929 segmentation edges. In addition, we can see the errors of high frequency in the sixth row which represents that five relationship (exclude Right) edges or segmentation edges or N oRelation (_) edges are mis-classified as Right edges. Among them, one of the reasons since we take a hard decision (add Right edges) in this step. Another possible reason is that, as Right relationship is the most frequent relation in math expressions, maybe the classifiers answer too often this frequent class. Now we will explore deeper the problem of the missing edges which appear in the last row of the Table 6.13. In fact, there exist three sources which result in the missing edges: (1) the edges are missed during the 2) Some edges in the derived graph are recognized by the system as N oRelation (_), which actually have a ground truth label of one of 6 relationship or symbol (segmentation edge). ( 3) When deriving multiple trees from the graph G, they do not well cover the graph completely. We have tried to ease the source 3 by deriving more trees, for example 11 trees in Experiment 3. However, the idea of using more trees did not work well in fact. Thus, a better strategy for deriving trees from the graph will be explored in future works. We reconsider the 2 test samples (a ≥ b and 44 -4 4 ) from CROHME 2014 test set recognized by system (Merge9 ). We provide the handwritten input, the derived graph from the raw input, the derived trees from the graph, along with the built SLG for each test sample (Figure 6.15 and Figure 6.16). These 2 samples were recognized correctly by system (Merge9 ). As shown, several extra edges appear owing to multiple trees and they were all recognized correctly as N oRelation. We remove these N oRelation edges to have a intuitive judgment on the recognition result (Figure 6.15f and 6.16f). In addition, we present a failed case in Figure 6.17. Just like the previous samples, we illustrate the handwritten input of 9 9+ √ 9 , the derived graph from the raw input, the derived trees from the graph, as well as the final SLG built. 
As can be seen, the structure of this expression was correctly recognized, only one error being the first symbol '9' of the denominator was recognized as '→'. This error belongs to the type of the similar-look error we have explained in error analysis section. Enlarging the training data set could be a solution to solve it. Also, it could be eased by introducing language model since 9 →+ √ 9 is not a valid expression from the point of language model. Discussion In this chapter, we extended the classical BLSTM to tree-based BLSTM and applied the new network model for recognizing online mathematical expressions. The new model has a tree topology, and possesses the ability to model directly the dependency among the tree-structured data. The proposed tree-based BLSTM system, requiring no high time complexity or manual work involved in the classical grammardriven systems, achieves competitive results in online mathematical expression recognition domain. Another major difference with the traditional approaches is that there is no explicit segmentation, recognition and layout extraction steps but a unique trainable system that produces directly a SLG describing a mathematical expression. With regard to the symbol segmentation and classification, the proposed system performs better than the second-ranked system in CROHME 2014 (the top ranked system used a much larger training data set which is not available to the public). For relationship recognition, we achieve better results than the third-ranked system. When considering the expression recognition rate with ≤ 3 errors, our result is 50.15%, close to the second-ranked system (50.20%). In future, several directions could be explored to extend the current work. ( 1) As we analyzed in Section 6.6, we could put efforts into collecting more training data to ease the lowercase-uppercase error and the similar-look error. ( 2) The current graph model misses still around 6% relationships on CROHME 2014 test set. Now, only one rule is used to define the visibility between a pair of strokes. In future, we will try to set several rules for defining the visibility between strokes. As long as a pair of strokes meet any one among these several rules, we decide that they could see each other. (3) A better strategy for deriving trees from the graph should be explored to get a better coverage of the graph. (4) As we cover more and more edges, the precision rate will decrease relevantly as presented in the previous experiments. Thus, one future direction could be developing a better training protocol to enforce the training of the class N oRelation. Then, a stronger post process step should be considered to improve the recognition rate. Conclusion and future works In this chapter, we first summarize the works of the thesis and list the main contributions made during the research process. Then, based on the current method and experiments results, we will propose several possible directions for future work. Conclusion We study the problem of online mathematical expression recognition in this thesis. Generally, ME recognition involves three tasks: symbol segmentation, symbol recognition and structural analysis [START_REF] Zanibbi | Recognition and retrieval of mathematical expressions[END_REF]. The state of the art solutions, considering the natural relationship between the three tasks, perform these 3 tasks at the same time by using grammar parsing techniques. Commonly, a complete grammar for math expressions consists of hundreds of production rules. 
These rules need to be designed manually and carefully for different data sets. Furthermore, the time complexity for grammar-driven parsing is usually exponential if no constraints are set to control it. Thus, to bypass the high time complexity and manual work of the classical grammar-driven systems, we proposed a new architecture for online mathematical expression recognition in this thesis. The backbone of our system is the framework of BLSTM recurrent networks with a CTC output layer, which achieved great success in sequence labeling task such as text and speech recognition thanks to the ability of learning long-term dependency and the efficient training algorithm. Mathematical expression recognition with a single path. Since BLSTM network with a CTC output layer is capable of processing sequence-structured data, as a first step to try, we proposed a trivial strategy where a BLSTM directly labelled the sequence of pen-down and pen-up strokes respecting the time order. These later added strokes (pen-up strokes) are used to represent the relationships between pairs of visible strokes by assigning them a ground truth label. In order to assign each stroke (visible or later added) a label in the recognition process, we extended the CTC training technique to local CTC, constraining the output labels into the corresponding strokes and at the same time benefiting from introducing an addition 'blank' class. At last, we built the 2-D expression from the outputted sequence of labels. The main contributions in this first proposal consist of: (1) We propose a new method to represent the relationship of a pair of visible strokes by linking the last point and the first point of them. With this method, a global sequence is generated and could be coped with BLSTM and CTC topology. ( 2 Mathematical expression recognition by merging multiple paths. In the above-mentioned simple proposal, we considered only the pairs of strokes which are successive in the time order. Obviously, a sequence-structured model is not able to cover all the relationships in 2-D expressions. Thus, we turned to a graph structure to model the relationships between strokes in mathematical expressions. Globally, the input of the recognition system is an handwritten expression which is a sequence of strokes; the output is the stroke label graph which consists of the information about the label of each stroke and the relationships between stroke pairs. Firstly, we derived an intermediate graph from the raw input using both the temporal and spatial information between strokes. In this intermediate graph, each node represents a stroke and edges are added according to temporal or spatial properties between strokes, which represent the relations of stroke pairs. Secondly, several 1-D paths were selected from the graph since the classifier model used is a 1-D sequence labeler. Next, we used the BLSTM classifier to label the selected 1-D paths. Finally, we merged these labeled paths to build a complete stroke label graph. Compared to the proposal with a single path, the solution by merging multiple paths presented improvements on recall rate of 'Tree Rels.' but at the same time decreases the precision rate of 'Tree Rels.' Thus, at the expression level, the recognition rate remained the same level as the solution with single path. One main contribution of this proposal is that multiple paths are used to represent a 2-D expression. 
However, even though several paths from one expression were considered in this system, the BLSTM model dealt with each path separately in essential. The classical BLSTM model could access information from past and future in a long range but the information outside the single sequence is of course not accessible to it. In fact, it is not the real case where human beings recognize the raw input using the entire contextual information. Mathematical expression recognition by merging multiple trees. As explained above, human beings interpret handwritten math expression by considering the global contextual information. In the system by merging multiple paths, each path was processed separately implying that only contextual information in the path could be visited. Thus, we developed a neural network model which could handle directly a structure not limited to a chain. We extended the chain-structured BLSTM to tree structure topology and applied this new network model for online math expression recognition. With this new neural network model, we could take into account the information in a tree instead of a single path at one time when dealing with one expression. Similar to the framework of the solution by merging multiple paths, we first derived an intermediate graph from the raw input. Then, instead of 1-D paths, we considered from the graph deriving trees which would be labeled by tree-based BLSTM model as a next step. In the end, these labeled trees were merged to build a stroke label graph. Compared to the proposal by merging multiple paths, the new recognition system was globally improved which was verified by experiments. One main contribution of this part is that we extend the chain-structured BLSTM to tree-based BLSTM, and provide the new topology with the ability of modeling dependency in a tree. We list the main contributions here: • One major difference with the traditional approaches is that there is no explicit segmentation, recognition and layout extraction steps but a unique trainable system that produces directly a SLG describing a mathematical expression. • We propose a new method to represent the relationship of a pair of visible strokes by linking the last point and the first point of them. • We extend the CTC training technique to local CTC. The new training technique proposed could improve the system performance globally compared to frame-wise training, as well constrain the output relatively. • We extend the chain-structured BLSTM to tree-based BLSTM, and provide the new topology with the ability of modeling dependency in a tree. • The proposed system, without using any grammar, achieves competitive results in online math expression recognition domain. Future works Based on the current method and error analysis, we summarize here several possible directions for future work. • Some work should be done with regards to improve the existing method, like improving the graph model, proposing a better strategy for deriving trees and developing a stronger post process stage. • Some efforts could be put into introducing language model into the graph. For example, as known an n-gram model is widely used in 1-D language processing like text and speech, how to take into account the statistical properties of n-grams in math expression recognition task is an interesting direction to explore for us. Actually a master project have been proposed already in this direction. 
• Another interesting work could be to extend BLSTM model to a DAG structure which will better cover the derived graph and therefore be able to handle more contextual information compared to the tree structure BLSTM. So we could leave the stage of deriving trees aside. • The current recognition system achieves competitive results without using any grammar knowledge. In future, we could apply graph grammar to improve the current recognition rate. • In this thesis, we extend the chain-structured BLSTM to a tree topology to let it model the dependency directly in a tree structure. Furthermore, we extend the CTC training technique to local CTC to constrain the output position relatively at the same time improve the system training efficiency compared to frame-wise training. These proposed algorithm are generic ones and we will apply them into other research fields in future. Finalement, on constate que les solutions actuelles sont quasi-systématiquement pilotées par une grammaire. Cela impose à la fois une tâche laborieuse pour construire ladite grammaire et un coût calculatoire élevé pour produire l'étape d'analyse. En contraste à ces approches, la solution que nous explorons se dispense de grammaire. C'est le parti pris de cette thèse, nous proposons de nouvelles architectures pour produire directement une interprétation des expressions mathématiques en tirant avantage des récents progrès dans les architectures des réseaux récurrents. Le graphe des étiquettes (LG). En descendant au niveau trait, il est possible de dériver du SRT un graphe de traits étiqueté (LG). Dans un LG, les noeuds représentent les traits tandis que les étiquettes sur les arcs encodent soit des informations de segmentation, soit des relations spatiales. Considérons l'expression simple « 2+2 » écrite en quatre traits dont deux traits pour le symbole '+' dont le tracé est présenté Figure 8.2a et le LG 8.2b. Comme on peut le voir, les noeuds du SLG sont étiquetés avec l'étiquette du symbole auxquels ils appartiennent. Un arc en pointillés porte une information de segmentation, cela indique que la paire de traits associés appartient au même symbole. Dans ce cas, l'arc porte l'étiquette du symbole. Sinon, un arc en trait plein définit une relation spatiale entre les symboles associés. Plus précisément, tous les traits d'un symbole sont connectés à tous les traits du symbole avec lequel une relation spatiale existe. Les relations spatiales possibles ont été définies par la compétition CROHME [START_REF] Mouchère | Advancing the state of the art for handwritten math recognition: the crohme competitions, 2011-2014[END_REF], elles sont au nombre de six : Droite, Au-dessus, En-dessous, A l'intérieur (cas des racines), Exposant et Indice. Réseaux Long Short-Term Memory Réseaux récurrents (RNNs). Les RNNs peuvent accéder à de l'information contextuelle et sont prédisposés à la tâche d'étiquetage des séquences [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF]. Dans la Figure 8.3, un réseau récurrent monodirectionnel est représenté en mode déplié. Chaque noeud à un instant donné représente une couche du réseau. La sortie du réseau au temps t i dépend non seulement de l'entrée au temps t i mais aussi de l'état au temps t i-1 . Les mêmes poids (w1, w2, w3) sont partagés à chaque pas temporel. LSTM. 
Les architectures classiques de type RNN présentent l'inconvénient de souffrir d'un oubli exponentiel, limitant l'utilisation d'un contexte réduit [START_REF] Hochreiter | Gradient flow in recurrent nets: the difficulty of learning long-term dependencies[END_REF]. Les réseaux de type Long short-term memory LSTM [START_REF] Hochreiter | Long short-term memory[END_REF]] sont capable de circonvenir à ce problème en utilisant un bloc mémoire capable de préserver l'état courant aussi longtemps que nécessaire. Un réseau LSTM est similaire à un réseau RNN, exception faite des unités de sommation des couches cachées qui sont remplacées par des blocs mémoires. Chaque bloc contient plusieurs cellules récurrentes dotées de trois unités de contrôle : les portes d'entrée, de sortie et d'oubli. Ces portes agissent par le biais de facteur multiplicatif pour interdire ou autoriser la prise en compte de l'information se propageant. BLSTM. Les réseaux LSTM traitent la séquence d'entrée de façon directionnelle du passé vers le futur. De façon complémentaire, les Bidirectional LSTM [START_REF] Graves | Framewise phoneme classification with bidirectional lstm networks[END_REF],sont composés de 2 couches séparées de type LSTM, chacune travaillant en sens inverse de l'autre (passé vers futur et futur vers présent). Les deux couches LSTM sont complètement connectées à la même couche de sortie. De cette façon, les contextes court terme et long terme dans chaque direction sont disponibles pour chaque instant de la couche de sortie. BLSTM profonds. Les DBLSTM [START_REF] Graves | Hybrid speech recognition with deep bidirectional lstm[END_REF] peuvent être construits en empilant plusieurs BLSTM l'un sur l'autre. Les sorties des 2 couches opposées sont concaténées et utilisées en tant qu'entrée pour un nouveau niveau. BLSTM à structure non linéaire. Les structures précédentes ne sont capables que de traiter des données en séquences. Les Multidimensional LSTM [START_REF] Graves | Supervised sequence labelling with recurrent neural networks[END_REF] quant à eux peuvent traiter des informations depuis n directions en introduisant n portes d'oubli dans chaque cellules mémoire. De plus, les travaux de [START_REF] Tai | Improved semantic representations from tree-structured long short-term memory networks[END_REF], ont étendu ces réseaux pour traiter des structures d'arbres, les topologies Child-sum Tree-LSTM et N-ary Tree-LSTM permettent d'incorporer dans une unité des informations en provenance de cellules filles multiples. Des approches similaires sont proposées dans [START_REF] Zhu | Long short-term memory over recursive structures[END_REF]. Enfin, une architecture LSTM pour graphe acyclique a été proposée pour de la composition sémantique [START_REF] Zhu | Dag-structured long short-term memory for semantic compositionality[END_REF]. Cette solution consiste d'abord à construire un graphe à partir des données d'entrée en utilisant à la fois les proximités temporelle et spatiale. Dans ce graphe, chaque trait est représenté par un noeud et les arcs sont rajoutés en fonction de propriétés spatio-temporelles des traits. Nous faisons l'hypothèse que des traits qui sont soit spatialement, soit temporellement proches, peuvent appartenir au même symbole ou peuvent partager une relation spatiale. A partir de ce graphe, plusieurs chemins vont être extraits et vont constituer des séquences qui vont chacune être traitées par l'étiqueteur de séquence qu'est le BLSTM. 
Then, a fusion step combines these independent results and builds a single SLG.
The CTC layer: Connectionist Temporal Classification
Proceeding in this way has the advantage of handling several paths, which increases the chances of recovering useful relations. However, each path is processed individually and independently of the others. The context taken into account is therefore limited to the current path alone, without being able to integrate information present on the other paths. This is a limitation compared with human visual analysis.
Recognition of MEs by merging trees
As mentioned above, we want to use the whole available context to recognise a handwritten mathematical expression. This requires going beyond the point of view that consists in simply merging traversals of individual paths. To reach this objective, a new network structure is proposed, able to process data that is not limited to chains. These BLSTM-type networks can handle trees and can therefore be used to recognise MEs. The advantage of this new type of structure is that it takes into account richer information: not a single path in which each node has a single successor, but trees representing an expression.
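To make this concrete, the sketch below derives one such tree from the stroke graph by a depth-first traversal that tries edge types in a fixed priority order, as detailed later with Table 6.3; the function name, the graph encoding as adjacency lists keyed by edge type and the example priority tuple are illustrative assumptions, not the exact procedure of the thesis.

def derive_tree(graph, root, priority=("Crossing", "R1", "R3", "R4", "R2", "R5", "Time")):
    """Depth-first derivation of one tree over the stroke graph.
    graph maps a stroke to a dict {edge_type: [neighbouring strokes]};
    the returned tree is a list of (parent, child, edge_type) edges."""
    visited = {root}
    tree_edges = []

    def visit(u):
        for edge_type in priority:
            for v in graph.get(u, {}).get(edge_type, []):
                if v not in visited:          # each stroke enters the tree only once
                    visited.add(v)
                    tree_edges.append((u, v, edge_type))
                    visit(v)                  # explore this child's subtree before its siblings

    visit(root)
    return tree_edges

Choosing different roots and different priority orders yields the different trees that are then labelled independently and merged.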
List of Tables
List of Figures
In the fuzzy r-CFG of [MacLean and Labahn, 2013], A0 belongs to the non-terminals and A1, ..., Ak belong to the terminals; r denotes a relation between the elements A1, ..., Ak. Five binary spatial relations are used; the arrows indicate a general writing direction, while one relation denotes containment (as in notations like √x, for instance).
Expressions can be divided into two categories: (1) linear (1-D) expressions, which consist of only Right relationships, such as 2+2 or a+b; (2) 2-D expressions, whose relationships are not only Right relationships, such as P eo, √36 or (a+b)/(c+d). There are 9817 expressions in total (8834 for training and 983 for test) in the CROHME 2014 data set. Among them, 2874 are linear expressions, accounting for around 30% of the whole. Furthermore, we define chain-SRT expressions as expressions whose symbol relation trees are essentially a chain structure. Chain-SRT expressions contain all the linear expressions and a part of the 2-D expressions, such as P eo and √36. Figure 4.2 illustrates this classification of expressions.
We introduce the local CTC training technique into the library and use the extended library to train several BLSTM models. Both frame-wise training and local CTC training are adopted in our experiments. For each training process, the network with the best classification error (frame-wise) or CTC error (local CTC) on the validation set is saved; this network is then evaluated on the test set. Maximum decoding (Eqn. 4.16) is used for the frame-wise trained networks. For local CTC, either maximum decoding or local CTC decoding (Eqn. 4.18) can be used.
Data set 1. We select the expressions which do not include any 2-D spatial relation, only the left-right relation, from the CROHME 2014 training and test data. 2609 expressions are available for training, about one third of the full training set, and 265 expressions for testing. In this case, there are 91 classes of symbols. Next, we split the training set into a new training set and a validation set, 90% for training and 10% for validation.
The output layer size is 94 (91 symbol classes + Right + NoRelation + blank). In left-right expressions, NoRelation is used each time a delayed stroke breaks the left-right time order.
Data set 2. The depth of expressions in this data set is limited to 1, which imposes that two sub-expressions linked by a spatial relationship (Above, Below, Inside, Superscript, Subscript) should both be left-right expressions. It adds to the previous linear expressions some more complex MEs. 5820 expressions are selected for training from the CROHME 2014 training set and 674 expressions for test from the CROHME 2014 test set. Again, we divide the 5820 training expressions into a new training set and a validation set, 90% for training and 10% for validation. The output layer size is 102 (94 symbol classes + 6 relationships + NoRelation + blank).
Data set 3. The complete data set from CROHME 2014: 8834 expressions for training and 983 expressions for test. As before, we divide the 8834 expressions into training (90%) and validation (10%). The output layer size is 109 (101 symbol classes + 6 relationships + NoRelation + blank).
Figure 5.1a presents an example of a minimum spanning tree (MST) at the stroke level, and Figure 5.1b an example of a Delaunay-triangulation-based graph applied to math expressions at the symbol level.
C : Crossing, T : T ime. Table 6 . 3 - 63 The different types of the derived trees. Type Root Traverse algorithm Visiting order Tree-Left-R1 the leftmost stroke Depth-First Search (Crossing, R1, R3, R4, R2, R5, T ime) Tree-Left-R2 the leftmost stroke Depth-First Search (Crossing, R2, R1, R3, R4, R5, T ime) Tree-Left-R3 the leftmost stroke Depth-First Search (Crossing, R3, R1, R4, R2, R5, T ime) Tree-Left-R4 the leftmost stroke Depth-First Search (Crossing, R4, R1, R3, R2, R5, T ime) Tree-Left-R5 the leftmost stroke Depth-First Search (Crossing, R5, R1, R3, R4, R2, T ime) Tree-0-R1 s0 Depth-First Search (Crossing, R1, R3, R4, R2, R5, T ime) Tree-0-R2 s0 Depth-First Search (Crossing, R2, R1, R3, R4, R5, T ime) Tree-0-R3 s0 Depth-First Search (Crossing, R3, R1, R4, R2, R5, T ime) Tree-0-R4 s0 Depth-First Search (Crossing, R4, R1, R3, R2, R5, T ime) Tree-0-R5 s0 Depth-First Search (Crossing, R5, R1, R3, R4, R2, T ime) Tree-Time s0 Depth-First Search only the time order 6.4.4 Feed the inputs of the Tree-based BLSTM In section 6.4.3, we derived trees from the intermediate graph. Nodes of the tree represent visible strokes and edges denote the relationships between pairs of strokes. We would like to label each node and edge correctly with the Tree-based BLSTM model, aiming to build a complete SLG finally. To realize this, the first step is to feed the derived tree into the Tree-based BLSTM model. Figure 6 . 6 Figure 6.11 -Illustration for the pre-computation stage of tree-based BLSTM. (a) From the input layer to the hidden layer (from root to leaves), (b) from the input layer to the hidden layer (from leaves to root). Figure 6 . 6 Figure 6.13 -CTC forward-backward algorithm in one stroke X i . Black circle represents label l i and white circle represents blank. Arrows signify allowed transitions. Forward variables are updated in the direction of the arrows, and backward variables are updated in the reverse direction. This figure is a local part (limited in one stroke) of Figure 4.8. 6.29) 108CHAPTER 6. MATHEMATICAL EXPRESSION RECOGNITION BY MERGING MULTIPLE TREES Figure 6 . 6 Figure 6.14 -Possible relationship conflicts existing in merging results. Figure 6 . 6 Figure 6.15 -(a) a ≥ b written with four strokes; (b) the derived graph; (b) Tree-Time; (c)Tree-Left-R1 (In this case, Tree-0-R1 is the same as Tree-Left-R1 ); (e) the built SLG of a ≥ b after merging several trees and performing other post process steps, all labels are correct; (f) the built SLG with N oRelation edges removed. Figure 6 .Figure 6 . 66 Figure 6.16 -(a) 44 -4 4 written with six strokes; (b) the derived graph; (b) Tree-Time; (c)Tree-Left-R1 (In this case, Tree-0-R1 is the same as Tree-Left-R1 ); 120CHAPTER 6 .Figure 6 .Figure 6 . 666 Figure 6.17 -(a) 9 9+ √ 9 written with 7 strokes; (b) the derived graph; (b) Tree-Time; ) We extend the CTC training technique to local CTC. The new training technique proposed could improve the system performance globally compared to frame-wise training, as well constrain the output relatively. The limitation of this simple proposal is that it takes into account only a pair of visible strokes successive in the input time, and therefore miss some CHAPTER 7. CONCLUSION AND FUTURE WORKS relationships for 2-D mathematical expressions. 8. 2 2 Figure 8.1 -L'arbre des relations entre symboles (SRT) pour (a) a+b c et (b) a + b c ,'R'définit une relation à droite. Figure 8.2 -(a) « 2 + 2 » écrit en quatre traits ; (b) le graphe SLG de « 2 + 2 ». 
Les quatre traits sont repérés s1, s2, s3 et s4, respectant l'ordre chronologique. (ver.) et (hor.) ont été ajoutés pour distinguer le trait horizontal et vertical du '+'. 'R' représente la relation Droite. Figure 8 . 3 - 83 Figure 8.3 -Un réseau récurrent monodirectionnel déplié. L 'utilisation des réseaux récurrents (RNN) trouve tout son intérêt pour les tâches d'étiquetage de séquences où le contexte joue un rôle important. L'entrainement de ces réseaux nécessite la définition d'une fonction de coût qui repose classiquement sur la connaissance des étiquettes désirées (vérité terrain) pour chaque instant des sorties. Cela impose de disposer d'une base d'apprentissage dont chacune des séquences soit complétement étiquetée au niveau de tous les points constituant la trame du signal. Cela représente un travail très fastidieux pour assigner ainsi une étiquette à chacun de ces points. L'usage d'une couche CTC permet de contourner cette difficulté. Il suffit de connaitre la séquence d'étiquettes d'un point de vue global, sans qu'un alignement complet avec le signal d'entrée ne soit nécessaire. Grace à l'utilisation d'une étiquette additionnelle « blank », le CTC autorise le réseau à ne fournir des décisions qu'en quelques instants bien spécifiques, tout en permettant une reconstruction complète de la séquence. 8. 3 Figure 8 . 6 - 386 Figure 8.4 -Illustration de la méthode basée sur un seul chemin. Figure 8.7 -Reconnaissance par fusion d'arbres. .14 Achitecture of the recognition system proposed in[START_REF] Julca-Aguilar | Recognition of Online Handwritten Mathematical Expressions using Contextual Information[END_REF]. Extracted from [Julca-Aguilar, 2016] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.15 Network architecture of WYGIWYS. Extracted from . . . . . . . . . 2.13 Geometric features for classifying the spatial relationship between regions B and C. Extracted from [Álvaro et al., 2016] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Table 2 . 2 1 -Illustration of the terminology related to recall and precision. relevant non relevant segmented true positive (tp) false positive (fp) not segmented false negative (fn) true negative (tn) Table 4 . 4 1 -The symbol level evaluation results on CROHME 2014 test set (provided the ground truth labels on the time path). Data set Segments (%) Seg + Class (%) Tree Rels. (%) Rec. Prec. Rec. Prec. Rec. Prec. 1 99.73 99.46 99.73 99.46 95.78 99.40 2 99.75 99.49 99.73 99.48 80.33 99.39 3 99.73 99.45 99.72 99.44 75.54 99.27 Table 4.2 -The expression level evaluation results on CROHME 2014 test set (provided the ground truth labels on the time path). Data set correct (%) <= 1 error <= 2 errors <= 3 errors 1 86.79 87.55 91.32 93.96 2 44.21 51.63 61.87 68.69 3 34.11 40.94 50.51 58.25 Table 4 . 4 3 -The symbol level evaluation results on CROHME 2014 test set, including the experiment results in this work and CROHME 2014 participant results. Data set, features Segments (%) Seg + Class (%) Tree Rels. (%) Rec. Prec. Rec. Prec. Rec. Prec. 1, 5 90.11 80.75 78.91 70.71 79.87 73.66 2, 5 91.88 84.47 82.42 75.77 64.75 71.96 3, 5 93.26 86.86 84.40 78.61 61.85 75.06 system CROHME 2014 participant results III 98.42 98.13 93.91 93.63 94.26 94.01 I 93.31 90.72 86.59 84.18 84.23 81.96 VII 89.43 86.13 76.53 73.71 71.77 71.65 V 88.23 84.20 78.45 74.87 61.38 72.70 IV 85.52 86.09 76.64 77.15 70.78 71.51 VI 83.05 85.36 69.72 71.66 66.83 74.81 II 76.63 80.28 66.97 70.16 60.31 63.74 Table 4 . 
4 4 shows the recognition rates at the global expression level with no error, and with at most one Table 4.4 -The expression level evaluation results on CROHME 2014 test set, including the experiment results in this work and CROHME 2014 participant results. Data set, features correct (%) <= 1 error <= 2 errors <= 3 errors 1, 5 25.28 40.75 49.06 52.08 2, 5 12.76 25.07 31.16 36.20 3, 5 12.63 21.28 27.70 31.98 system CROHME 2014 participant results III 62.68 72.31 75.15 76.88 I 37.22 44.22 47.26 50.20 VII 26.06 33.87 38.54 39.96 VI 25.66 33.16 35.90 37.32 IV 18.97 28.19 32.35 33.37 V 18.97 26.37 30.83 32.96 II 15.01 22.31 26.57 27.69 to three errors in the labels of SLG. This metric is very strict. For example one label error can happen only on one stroke symbol or in the relationship between two one-stroke symbols; a labeling error on a 2-stroke symbol leads to 4 errors (2 nodes labels and 2 edges labels). As can be seen, the expression recognition rates are decreasing as the data sets are getting more and more complex from Data set 1 to 3. On Data set 1 of only linear expressions, the ME recognition rate is 25.28%. The recognition rate with no error on CROHME 2014 test set is 12.63%. The best one and worst one reported by CROHME 2014 are 62.68% and 15.01%. When looking at the recognition rate having less than three errors, four participants ranked between 27% and 37%, while our result is 31.98%. Table 4.5, with the first 2 networks, we can conclude that local CTC training can improve Table 4.5 -The symbol level evaluation results (mean values) on CROHME 2014 test set with different training and decoding methods, features. Feat. Train Decode Segments (%) Seg + Class (%) Tree Rels. (%) Rec. Prec. Rec. Prec. Rec. Prec. 5 frame-wise maximum 92.71 85.88 83.76 77.59 59.84 73.71 5 local CTC maximum 93.21 86.73 84.11 78.26 61.75 74.51 5 local CTC local CTC 93.2 86.71 84.11 78.25 61.71 74.67 7 local CTC maximum 93 86.43 84 78.06 61.73 74.06 Table 4.6 -The standard derivations of the symbol level evaluation results on CROHME 2014 test set with local CTC training and maximum decoding method, 5 local features. Feat. Train Decode Segments (%) Seg + Class (%) Tree Rels. (%) Rec. Prec. Rec. Prec. Rec. Prec. 5 local CTC maximum 0.12 0.26 0.26 0.32 0.29 0.83 Consequently, in the coming chapters we will use local CTC training method instead of frame-wise training, maximum decoding instead of local CTC decoding, 5 local features instead of 7 features for all the experiments aimed at making an effective and efficient system. the system performance globally compared to frame-wise training. Furthermore, the proposed local CTC training method is able to accelerate the convergence process and therefore reduce the training time significantly. Comparing the results from the second and third systems, the conclusion is straightforward that local CTC decoding does not help the recognition process but cost more computation. Maximum decoding is a better choice in this work. In the fourth system, we test the effect of contextual features on BLSTM networks. The results stay at the same level with system trained with only local features. In addition, to have a look at the stability of our recognition system, we provide in Table 4 .6 the standard derivations of the symbol level evaluation results on CROHME 2014 test set with local CTC training and maximum decoding methods, 5 local features. As shown, the standard derivations are quite low, indicating that our system has a high stability. 
BAR) of the convex hull is calculated. If the Visibility Angle Range (VAR) = overlap of BAR and UAR is nonzero, s i could see s j . Then UAR is updated with UAR -VAR. The model called LOS CH symmetric (CH: the convex hull is used to compute the block angle range of each stroke; symmetric: for a edge from s i to s j , there would be the reverse edge from s j to s i ) has a recall 99.9% of and a precision of 29.7% (CROHME 2014 test set, CPP distance is used). In fact, each graph model has its strong points and also limitations. Hu choose LOS CH symmetric as the graph representation in his work as a high recall and a reasonable precision is required. (CROHME 2014 test set, CPP distance is used). The recall and the precision of DT graph are 97.3% and 39.1% respectively (CROHME 2014 test set, AC distance is used). In LOS graph model, the bounding box center is taken as eye of a stroke. Each stroke s i has an Unblocked Angle Range (UAR) which is initialized as [0, 2π]. For any other stroke s j , the Block Angle Range ( Table 5 . 5 1 -The symbol level evaluation results on CROHME 2014 test set (provided the ground truth labels of the nodes and edges of the built graph). Model Segments (%) Seg + Class (%) Tree Rels. (%) Rec. Prec. Rec. Prec. Rec. Prec. time graph 99.73 99.45 99.72 99.44 75.54 99.27 new graph 100.00 99.99 99.99 99.98 93.48 99.95 Table 5 . 5 2 -The expression level evaluation results on CROHME 2014 test set (provided the ground truth labels of the nodes and edges of the built graph). Model correct (%) <= 1 error <= 2 errors <= 3 errors time graph 34.11 40.94 50.51 58.25 new graph 67.65 76.70 85.76 90.74 Table 5 . 5 3 -Illustration of the used classifiers in the different experiments depending of the type of path. exp. Path Type Time Random 1 CLS T Table 5 . 5 5 -The expression level evaluation results on CROHME 2014 test set, including the experiment results in this work and CROHME 2014 participant results exp. correct (%) ≤ 1 error ≤ 2 errors ≤ 3 errors 1 11.80 19.33 26.55 31.43 2 8.55 16.89 23.40 29.91 3 13.02 22.48 30.21 35.71 4 11.19 19.13 26.04 31.13 5 13.02 21.77 30.82 36.52 system CROHME 2014 participant results III 62.68 72.31 75.15 76.88 I 37.22 44.22 47.26 50.20 VII 26.06 33.87 38.54 39.96 VI 25.66 33.16 35.90 37.32 IV 18.97 28.19 32.35 33.37 V 18.97 26.37 30.83 32.96 II 15.01 22.31 26.57 27.69 Table 6 . 6 1 -The symbol level evaluation results on CROHME 2014 test set (provided the ground truth labels). Model Segments (%) Seg + Class (%) Tree Rels. (%) Rec. Prec. Rec. Prec. Rec. Prec. time graph 99.73 99.45 99.72 99.44 75.54 99.27 graph (Chapter 5) 100.00 99.99 99.99 99.98 93.48 99.95 new graph 99.97 99.93 99.96 99.92 93.99 99.86 Table 6.2 -The expression level evaluation results on CROHME 2014 test set (provided the ground truth labels). Model correct (%) <= 1 error <= 2 errors <= 3 errors time graph 34.11 40.94 50.51 58.25 graph (Chapter 5) 67.65 76.70 85.76 90.74 new graph 69.89 77.21 85.96 90.54 6.4.3 Derivation of trees from G The complete data set from CROHME 2014 is used, 8834 expressions for training and 983 expressions for test. We extract randomly 10% of the 8834 expressions of the training set as a validation set. To get more recent comparison with the state of the art, we have also use the last CROHME 2016 data set to evaluate the best configuration. The training data set remains the same as CROHME 2014. 1147 expressions are included in CROHME 2016 test data set. 6.5 Experiments Data sets. Table 6 . 
6 4 -The symbol level evaluation results on CROHME 2014 test set with Tree-Time only. Network, model Segments (%) Seg + Class (%) Tree Rels. (%) Rec. Prec. Rec. Prec. Rec. Prec. i, Tree-Time 92.93 84.82 84.12 76.78 60.70 76.19 ii, Tree-Time 95.10 90.47 87.53 83.27 65.06 83.18 iii, Tree-Time 95.43 91.13 88.26 84.28 65.45 83.57 iv, Tree-Time 95.57 91.21 87.81 83.80 65.98 82.85 Table 6.5 -The expression level evaluation results on CROHME 2014 test set with Tree-Time only. Network, model correct (%) ≤ 1 error ≤ 2 errors ≤ 3 errors i, Tree-Time 12.41 20.24 26.14 30.93 ii, Tree-Time 16.09 25.46 32.28 37.27 iii, Tree-Time 16.80 25.56 32.89 38.09 iv, Tree-Time 16.19 25.97 33.20 38.09 Table 6 . 6 6 -The symbol level evaluation results on CROHME 2014 test set with 3 trees, along with CROHME 2014 participant results. Network, model Segments (%) Seg + Class (%) Tree Rels. (%) Rec. Prec. Rec. Prec. Rec. Prec. i, Tree-Time 92.93 84.82 84.12 76.78 60.70 76.19 i, Tree-Left-R1 84.82 72.49 72.80 62.21 44.34 57.78 i, Tree-0-R1 85.31 72.88 74.17 63.37 42.92 60.08 i, Merge3 93.53 87.20 86.10 80.28 71.16 66.13 ii, Tree-Time 95.10 90.47 87.53 83.27 65.06 83.18 ii, Tree-Left-R1 86.71 75.64 76.85 67.03 48.14 61.91 ii, Tree-0-R1 87.52 76.66 77.00 67.45 48.14 63.04 ii, Merge3 95.01 90.05 88.38 83.76 76.20 72.28 iii, Tree-Time 95.43 91.13 88.26 84.28 65.45 83.57 iii, Tree-Left-R1 88.03 78.13 78.56 69.72 50.31 65.87 iii, Tree-0-R1 87.41 77.02 77.63 68.40 48.23 64.28 iii, Merge3 95.25 90.70 88.90 84.65 77.33 73.72 Merge 9 95.52 91.31 89.55 85.60 78.08 74.64 system CROHME 2014 participant results III 98.42 98.13 93.91 93.63 94.26 94.01 I 93.31 90.72 86.59 84.18 84.23 81.96 VII 89.43 86.13 76.53 73.71 71.77 71.65 V 88.23 84.20 78.45 74.87 61.38 72.70 IV 85.52 86.09 76.64 77.15 70.78 71.51 VI 83.05 85.36 69.72 71.66 66.83 74.81 II 76.63 80.28 66.97 70.16 60.31 63.74 Table 6 . 6 12 -Illustration of node (SLG) label errors of (Merge 9 ) on CROHME 2014 test set. We only list the cases that occur ≥ 10 times. output label ground truth label no. of occurrences x X 46 x × 26 p P 24 , 1 19 c C 16 y Y 14 + t 14 . . . . 13 X x 16 a x 14 1 | 11 - 1 10 × x 10 z 2 10 q 9 10 Table 6 . 6 13 -Illustration of edge (SLG) label errors of (Merge 9 ) on CROHME 2014 test set. The first column represents the output labels; the first row offers the ground truth labels; other cells in this table provide the corresponding no. of occurrences. '*' represents segmentation edges, grouping two nodes into a symbol. The label of segmentation edge is a symbol (For convenient representation, we do not give the specific symbol types, but a overall label '*'.). We evaluated the graph model in Section 6.4.2 where around 6% relationships were missed. One of the future works could be searching for a better graph representation model to catch the 6% relationships. ( * Above Below Inside Right Sub Sup _ * 208 0 0 17 1 1 29 Above 8 1 21 10 Below 2 1 1 114 7 Inside 5 1 1 9 Right 344 65 22 40 152 112 1600 Sub 4 6 3 44 1 7 Sup 1 3 35 3 31 _ 929 300 80 109 1858 189 235 graph representation stage. Mathematical markup language (MathML) version 3.0, https://www.w3.org/Math/. Graves A. RNNLIB: A recurrent neural network library for sequence learning problems. http://sourceforge.net/projects/rnnl/. Graves A. RNNLIB: A recurrent neural network library for sequence learning problems. http://sourceforge.net/projects/rnnl/. The weights are manually optimized. We tested several different weight assignments, and then choose the best one among them. 
Figure 8.5 -Introduction des traits « en l'air ». associée à un trait sur l'un des points de ce trait, limitant ainsi la flexibilité de l'étiquette « blank » à l'intérieur dudit trait. Ceci va permettre de reconstruire le graphe SLG en ayant une et une seule étiquette par noeud et par arc. Acknowledgments spatial relationship classification ('Tree Rels.'). A correct spatial relationship between two symbols requires that both symbols are correctly segmented and with the right relationship label. As presented, the results for 'Segments' and 'Seg+Class' do not show a big difference among exp. (1 2 3 4). It can be explained by the fact that time path is enough to give good results and random paths contributes little. With regard to 'Tree Rels.', 'Rec.' of exp. (2 3 4) is improved compared to exp.1 because random paths catch some ground truth edges which are missed in time path; but 'Prec.' rate declines which means that random paths also cover some edges which are not in ground truth LG. Unfortunately, these extra edges are not labeled as N oRelation. Among (1 2 3 4) experiments, exp.3 outperforms others for all the items. Thus, it is a better strategy to use CLS T for labeling time path and use CLS R for random path. Our results are comparable to the results of CROHME 2014 because the same training and testing data sets are used. The second part of Table 5.4 gives the symbol level evaluation results of the participants in CROHME 2014 sorting by the recall rate for correct symbol segmentation. The best 'Rec.' of 'Segments' and 'Seg+Class' reported by CROHME 2014 are 98.42% and 93.91% respectively. Ours are 92.77% and 85.17%, both ranked 3 out of 8 systems (7 participants in CROHME 2014 ). Our solution presents competitive results on symbol recognition task and segmentation task. Table 5.5 shows the recognition rates at the global expression level with no error, and with at most one to three errors in the labels of LG. Among (1 2 3 4) experiments, exp.3 outperforms others for all the items. Compared to exp.1 where only time path is considered, we see an increase on recall rate of 'Tree Rels.' but meanwhile a decrease on precision rate of 'Tree Rels.' in exp.3. As only 6 random paths is used in it, we would like to see if more random paths could bring any changes. We carry out another experiment, exp.5, where 10 random paths is used to train classifier CLS R . The evaluation results on symbol level and expression level are provided respectively in Table 5.4 andTable 5.5. As shown, when we consider more random paths, the recall rate of 'Tree Rels.' keeps on increasing but precision rate of 'Tree Rels.' is decreasing. Thus, at the expression level, the recognition rate remains the same level as the experiment with 6 random paths. To illustrate these results, we reconsider the 2 test samples (a ≥ b and 44 -4 4 ) recognized with the system of exp.3. In last chapter where we just use the single time path, a ≥ b was correctly recognized and for 44 -4 4 , the Right relationship from the minus symbol to fraction bar was omitted in the modeling all the participated systems. When we compute the recognition rate with ≤ 3 errors, our result is 50.15%, very close to the second-ranked system (50.20%). The top ranked system is from My Script company and they use a much larger training data set which is not available to the public. 
Furthermore, as we know, all the top 4 systems in the CROHME 2014 competition are grammar driven solutions which need a large amount of manual work and a high computational complexity. There is no grammar considered in our system. To have more uptodate comparisons, we also evaluate the system of Merge 9 on CROHME 2016 test data set. As can be seen in Table 6.8, compared to other participated systems in CROHME 2016, our system is still competitive on symbol segmentation and classification task. For relationship recognition task, there is room for improvement. The results at expression level is presented in Table 6.9. The global expression recognition rate is 27.03%. Experiment 3 In Experiment 2, we consider merging 3 trees to build a SLG describing an math expression. Compared to the results of symbol segmentation and recognition, the relationship classification results are not particularly prominent. We thought one of the possible reasons could be that only 3 trees can not well cover the graph G, in other words, some edges in the graph G are not used in the already derived 3 trees. Thus, in this experiment, we will test all the 11 trees illustrated in Table 6.3 to see if more trees could improve the results of relationship classification task. Taking into consideration that we see a small increase in recognition results, but a much increase in time complexity from Network (ii) to Network (iii) in previous experiments, we use Network (ii) as a cost-effective choice to train 11 different classifiers in this section. The evaluation results on symbol level and global expression level are presented in Table 6.10 and 6.11 respectively. In each table, we provide in detail the individual tree recognition results and the merging Publications of author Journals Thèse de Doctorat Ting ZHANG Nouvelles architectures pour la reconnaissance des expressions mathématiques manuscrites New Architectures for Handwritten Mathematical Expressions Recognition Résumé Véritable challenge scientifique, la reconnaissance d'expressions mathématiques manuscrites est un champ très attractif de la reconnaissance des formes débouchant sur des applications pratiques innovantes. En effet, le grand nombre de symboles (plus de 100) utilisés ainsi que la structure en 2 dimensions des expressions augmentent la difficulté de leur reconnaissance. Dans cette thèse, nous nous intéressons à la reconnaissance des expressions mathématiques manuscrites en-ligne en utilisant de façon innovante les réseaux de neurones récurrents profonds BLSTM avec CTC pour construire un système d'analyse basé sur la construction de graphes. Nous avons donc étendu la structure linéaire des BLSTM à des structures d'arbres (Tree-Based BLSTM) permettant de couvrir les 2 dimensions du langage. Nous avons aussi proposé d'ajouter des contraintes de localisation dans la couche CTC pour adapter les décisions du réseau à l'échelle des traits de l'écriture, permettant une modélisation et une évaluation robustes. Le système proposé construit un graphe à partir des traits du tracé à reconnaître et de leurs relations spatiales. Plusieurs arbres sont dérivés de ce graphe puis étiquetés par notre Tree-Based BLSTM. Les arbres obtenus sont ensuite fusionnés pour construire un SLG (graphe étiqueté de traits) modélisant une expression 2D. 
Une différence majeure par rapport aux systèmes traditionnels est l'absence des étapes explicites de segmentation et reconnaissance des symboles isolés puis d'analyse de leurs relations spatiales, notre approche produit directement un graphe SLG. Notre système sans grammaire obtient des résultats comparables aux systèmes spécialisés de l'état de l'art. Abstract As an appealing topic in pattern recognition, handwritten mathematical expression recognition exhibits a big research challenge and underpins many practical applications. Both a large set of symbols (more than 100) and 2-D structures increase the difficulty of this recognition problem. In this thesis, we focus on online handwritten mathematical expression recognition using BLSTM and CTC topology, and finally build a graph-driven recognition system, bypassing the high time complexity and manual work in the classical grammar-driven systems. To allow the 2-D structured language to be handled by the sequence classifier, we extend the chain-structured BLSTM to an original Tree-based BLSTM, which could label a tree structured data. The CTC layer is adapted with local constraints, to align the outputs and at the same time benefit from introducing the additional 'blank' class. The proposed system addresses the recognition task as a graph building problem. The input expression is a sequence of strokes, and then an intermediate graph is derived considering temporal and spatial relations among strokes. Next, several trees are derived from the graph and labeled with Tree-based BLSTM. The last step is to merge these labeled trees to build an admissible stroke label graph (SLG) modeling 2-D formulas uniquely. One major difference with the traditional approaches is that there is no explicit segmentation, recognition and layout extraction steps but a unique trainable system that produces directly a SLG describing a mathematical expression. The proposed system, without any grammar, achieves competitive results in online math expression recognition domain.
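The last step mentioned in the abstract, merging the labelled trees into a single SLG, can be pictured with the following schematic voting rule; the data layout, the weighting and the function name are assumptions made for illustration, and the actual system additionally resolves relationship conflicts and enforces segmentation consistency.

from collections import defaultdict

def merge_labelled_trees(labelled_trees, weights=None):
    """labelled_trees: one dict per tree, mapping a node or edge of the stroke
    label graph to a dict {label: probability}; weights optionally give some
    trees more influence. Returns one label per node/edge by weighted voting."""
    if weights is None:
        weights = [1.0] * len(labelled_trees)
    scores = defaultdict(lambda: defaultdict(float))
    for tree, w in zip(labelled_trees, weights):
        for item, hypotheses in tree.items():
            for label, prob in hypotheses.items():
                scores[item][label] += w * prob
    # keep, for every node and edge, the label with the highest accumulated score
    return {item: max(cand, key=cand.get) for item, cand in scores.items()}

Edges that end up labelled NoRelation are finally removed to obtain the admissible SLG.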
M Denis Poncelet Je Remercie Également M Andrée Voilley Carole Jeandel Carole Perroud Myriam Michel Sylvie Wolff Lactococcus lactis subsp. lactis ATCC 11454 in microcapsules based on biopolymers" Keywords: biopolymer, bioactive films, microbeads, starch, hydroxypropylmethylcellulose, mechanical properties Je tiens en premier lieu à remercier le Professeur Stéphane Desobry, pour avoir accepté de m'accueillir au sein de son équipe de recherche et m'avoir encadré pendant la durée de cette thèse. Je pense avoir appris à son contact, je lui suis reconnaissante pour le temps qu'il m'a consacré et pour toutes les opportunités qu'il m'a données au cours de cette thèse. Ensuite, j'adresse tout particulièrement ma reconnaissance à ma co-directrice de thèse, le Dr Laura Sanchez-gonzalez pour ses conseils, commentaires, aides, et pour son précieux engagement dans l'amélioration du travail. J'ai sincèrement apprécié de travailler avec elle et je suis reconnaissante pour le temps qu'elle m'a consacré. J'aimerais aussi lui exprimer ma gratitude pour son implication et sa disponibilité. Je tiens à remercier très chaleureusement les membres du programme Erasmus Mundus de m'avoir accordé la bourse pour financer ma thèse, et en particulier Delphine pour sa présence, sa contribution, ses aides et ses chaleureux conseils. List of Abbreviations List of tables List of Figures Pour éviter d'incorporer les LAB directement dans les aliments ou elles se développeraient très rapidement, une voie consiste à les placer dans le matériau d'emballage et de permettre la migration de la nisine active vers l'aliment selon le concept de l'emballage actif. Cela est envisageable par le biais de LAB piégés dans des microbilles et incorporés dans des films de biopolymères. Les films de polymères bioactifs ainsi aurait également pour effet de ralentir la croissance bactérienne tout en permettant la synthèse de nisine pendant un stockage prolongé. à-vis de Listeria spp qui est un contaminant majeur des aliments réfrigérés. L. lactis peut produire de la nisine. L'inclusion de cette bactérie dans des microbilles afin de les protéger d'un environnement hostile et contrôler la libération de nisine commence à être étudiée au niveau international. Il existe différents polymères utilisés dans l'encapsulation (principalement des polysaccharides et protéines) et de nombreuses techniques d'encapsulation. La sélection d'une technique compatible avec la stabilisation et le maintien en vie de bactéries lactiques est assez complexe. La méthode la plus utilisée pour la production de microcapsules contenant des probiotiques est l'extrusion, en raison de la simplicité de son fonctionnement, son faible coût et les conditions de formation appropriées vis-à-vis de la viabilité des bactéries. LITTERATURE REVIEW Litterature review Chapter I: Litterature Review Introduction The encapsulation process aims to entrap a specific component within a matrix (proteins, polysaccharides, etc.). 
There are many examples of components that need to be encapsulated for use in the food industry, such as flavors to control aroma release, antimicrobials to protect against spoilage, antioxidants to increase nutritional value and delay chemical degradation, vitamins to increase their bioavailability, or probiotics (such as lactic acid bacteria) to improve food value.
Capsules roles and encapsulation objectives
Probiotics are encapsulated to protect the cells against a harsh environment, control their release, avoid alteration in the stomach and improve LAB viability in products [START_REF] Burgain | Encapsulation of probiotic living cells: From laboratory scale to industrial applications[END_REF] ; [START_REF] Nedovic | An overview of encapsulation technologies for food applications[END_REF]. For this purpose, particles with diameters of a few µm to a few mm are produced. Various names are used for the substance that forms the capsule: coating, membrane, shell, carrier material, wall material, external phase or matrix. The matrix used to prepare microcapsules in food processes should be food grade and able to protect the LAB entrapped in it [START_REF] Nedovic | An overview of encapsulation technologies for food applications[END_REF].
The different structures of capsules
Microencapsulation typically occurs in three phases. The first phase is the integration of the bioactive component into a matrix, which can be liquid or solid. In the case of a liquid core, integration is a dissolution or a dispersion in the matrix; if the core is solid, incorporation is an agglomeration or an adsorption. The second phase disperses the liquid, or dissolves the powder, in the matrix. The last phase consists in stabilizing the capsules by a chemical (polymerization), physicochemical (gelation) or physical (evaporation, solidification, coalescence) process [START_REF] Passos | Innovation in food engineering: New techniques and products[END_REF]. Microcapsules can be ranked in four categories (Fig. 1): (i) matrix-core/shell microcapsules, produced by gelling biopolymer drops in a solution containing a bivalent ion followed by treatment of the surface with a polycation ("multi-step technique") [START_REF] Murua | In vitro characterization and in vivo functionality of erythropoietin-secreting cells immobilized in alginate-poly-L-lysine-alginate microcapsules[END_REF]; [START_REF] Willaert | Applications of cell immobilisation biotechnology[END_REF]; (ii) liquid-core/shell microcapsules, produced by dropping a cell suspension containing bivalent ions into a biopolymer solution ("one-step technique"); (iii) cells-core/shell microcapsules (coating); (iv) hydrogel microcapsules in which the cells are embedded in the hydrogel. Many attempts have been made over the years to improve microencapsulation techniques, such as proper biomaterial characterization and purification and improvements in microcapsule production procedures [START_REF] Wilson | Layer-by-layer assembly of a conformal nanothin PEG coating for intraportal islet transplantation[END_REF].
Application to cell protection and release
The capsule prevents cell release and increases mechanical and chemical stability [START_REF] Overgaard | Immobilization of hybridoma cells in chitosan alginate beads[END_REF].
The capsules are often obtained by a coating technique (negatively charged polymers such as alginate coated with positively charged polymers such as chitosan) that enhances the stability of the gel [START_REF] Smidsrød | Alginate as immobilization matrix for cells[END_REF] and provides a barrier to cell release [START_REF] Dumitriu | Polysaccharides in medicinal applications[END_REF], [START_REF] Gugliuzza | Smart Membranes and Sensors: Synthesis, Characterization, and Applications[END_REF], [START_REF] Zhou | Spectrophotometric quantification of lactic bacteria in alginate and control of cell release with chitosan coating[END_REF].
Figure 1: Liquid core, hydrogel bead, matrix beads, coating.
[START_REF] Tanaka | A novel immobilization method for prevention of cell leakage from the gel matrix[END_REF] reported the coating of gel capsules by a cell-free alginate gel layer. Cross-linking with cationic polymers can be used to improve the stability of microcapsules [START_REF] Kanekanian | Encapsulation Technologies and Delivery Systems for Food Ingredients and Nutraceuticals[END_REF]. [START_REF] Kolot | Immobilized microbial systems: principles, techniques, and industrial applications[END_REF]; [START_REF] Garti | Encapsulation technologies and delivery systems for food ingredients and nutraceuticals[END_REF]; (Kwak, 2014) developed a membrane around the beads to minimize cell release and produce stronger microcapsules. The reaction of the bifunctional reagent with the chitosan membrane results in bridge formation linking the chitosan molecules; the length of the bridge depends on the type of cross-linking agent [START_REF] Hyndman | Microencapsulation of Lactococcus lactis within cross-linked gelatin membranes[END_REF]. For dry capsules, the incorporation of cryoprotectants such as glycerol enhances the survival of encapsulated cells after lyophilization and rehydration [START_REF] Lakkis | Encapsulation and controlled release technologies in food systems[END_REF]. (Aziz Homayouni, Azizi, Javadi, Mahdipour, & Ejtahed, 2012) reported that the survival of bifidobacteria increased significantly under these conditions. [START_REF] Sheu | Improving survival of culture bacteria in frozen desserts by microentrapment[END_REF] reported Lb. delbrueckii ssp. bulgaricus survival of up to 90%, because these agents reduce ice crystal formation by binding water. Capsules containing glycerol also exhibited a 43% decrease in size, due to the higher alginate concentration per unit volume when glycerol binds water.
Polymers used for encapsulation
The selection of a biopolymer or combination of biopolymers depends on many factors: the desired physicochemical and functional properties of the particles (e.g., size, charge, polarity, loading capacity, permeability, degradability, and release profile), the properties of the biopolymers (e.g., charge, polarity, and solubility), and the nature of any enclosed active ingredient (e.g. charge, polarity, solubility, and stability) [START_REF] Joye | Biopolymer-based nanoparticles and microparticles: Fabrication, characterization, and application[END_REF]. Several matrices are used to offer a wide range of properties adapted to the entrapped bacteria.
Carbohydrates
Alginate
Alginate hydrogels are extensively used in microcapsules with probiotics because of their simplicity, non-toxicity, biocompatibility and low cost [START_REF] Rowley | Alginate hydrogels as synthetic extracellular matrix materials[END_REF], [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF].
Alginate is a linear heteropolysaccharide extracted from different types of algae, with two structural units consisting of D-mannuronic (M) and L-guluronic (G) acids (Fig. 2). Depending on the source, the composition and sequence of D-mannuronic and L-guluronic acids vary widely and influence the functional properties of the material. G-units have a buckled shape while M-units tend to form an extended ribbon. Two G-units aligned side by side form a cavity of specific dimensions, which is able to bind divalent cations selectively. To prepare alginate capsules, sodium alginate droplets fall into a solution containing multivalent cations (usually Ca2+ in the form of CaCl2). The droplets form gel spheres instantaneously, entrapping the cells in a three-dimensional structure owing to polymer cross-linking by exchange of sodium ions from the guluronic acids with divalent cations (Ca2+, Sr2+, or Ba2+). This results in a chain-chain association described by the "egg-box model". There are some disadvantages to using alginate to form capsules: alginate capsules are very porous and sensitive to acidic environments [START_REF] Gouin | Microencapsulation: industrial appraisal of existing technologies and trends[END_REF], [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF], which is not compatible with bacteria preservation or with the resistance of the microparticles under stomach conditions.
κ-Carrageenan
Carrageenan is a sulfated polysaccharide extracted from marine macroalgae, commonly used in the food industry. There are three types of carrageenan (κ-, ι- and λ-) commonly used in foods, which exhibit different gelation properties: κ-carrageenan forms rigid, brittle gels; ι-carrageenan produces softer, elastic and cohesive gels; while λ-carrageenan does not form gels (Fig. 3). These differences can be attributed to differences in sulfate groups and anhydro-bridges [START_REF] Burey | Hydrocolloid gel particles: formation, characterization, and application[END_REF]. Carrageenans can be used to form microcapsules by different production techniques. Carrageenan forms gels through ionotropic gelation coupled with a cooling mechanism, which involves helix formation upon cooling and cross-linking in the presence of K+ ions (in the form of KCl) that induce gelation, stabilize the gel and prevent swelling. However, KCl has been reported to have an inhibitory effect on some lactic acid bacteria. As an alternative to K+, NH4+ ions have been recommended and produce stronger gel capsules [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF]. High concentrations of carrageenan (2-5%) require temperatures between 60 and 90°C for dissolution. Gelation is induced by temperature changes: probiotics are added to the polymer solution at 40-45°C and gelation occurs on cooling down to room temperature. Encapsulation of probiotic cells in κ-carrageenan beads keeps the bacteria in a viable state [START_REF] Dinakar | Growth and viability of Bifidobacterium bifidum in Cheddar cheese[END_REF], but the gels produced are brittle and not able to withstand stresses (Chen & Chen, 2007).
Cellulose
Cellulose is a structural polysaccharide from plants that becomes available for use in the food industry after various physical or chemical modifications.
Cellulose acetate phthalate (CAP)
This polymer is a derivative of cellulose (Fig. 4) that is used with drugs for controlled drug release in the intestine [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF]. CAP is insoluble in acid media (pH ≤ 5) but soluble when the pH is ≥ 6. It provides good protection for probiotic bacteria under simulated gastrointestinal (GI) conditions [START_REF] Favaro-Trindade | Microencapsulation of L. acidophilus (La-05) and B. lactis (Bb-12) and evaluation of their survival at the pH values of the stomach and in bile[END_REF], [START_REF] Burgain | Encapsulation of probiotic living cells: From laboratory scale to industrial applications[END_REF]. [START_REF] Rao | Survival of microencapsulated Bifidobacterium pseudolongum in simulated gastric and intestinal juices[END_REF] found that preparing an emulsion with starch and oil and adding CAP gave high probiotic viability in a simulated gastric environment.
Sodium carboxymethyl cellulose
Sodium carboxymethyl cellulose (NaCMC) is a hydrophilic polyanionic cellulose derivative that is commonly used as a food-grade biopolymer building block for assembling particles [START_REF] Tripathy | Designing carboxymethyl cellulose based layer-bylayer capsules as a carrier for protein delivery[END_REF]. It consists of linked glucopyranose residues with varying levels of carboxymethyl substitution (Fig. 5). Ethyl cellulose, by contrast, is not biodegradable and has been used as a hydrophobic polymer to produce hollow-shell particles and particles with a solid core [START_REF] Gunduz | Continuous generation of ethyl cellulose drug delivery nanocarriers from microbubbles[END_REF], [START_REF] Montes | Coprecipitation of amoxicillin and ethyl cellulose microparticles by supercritical antisolvent process[END_REF]. NaCMC can be used with drugs and probiotics because of its resistance to gastric acid and its solubility in the intestine [START_REF] Hubbe | Cellulosic nanocomposites: a review[END_REF].
Xanthan gum
Xanthan gum is an extracellular anionic polysaccharide produced by the microbial source Xanthomonas campestris. It is a complex polysaccharide consisting of a primary β-D-(1,4)-glucose backbone bearing a branching trisaccharide side chain comprising β-D-(1,2)-mannose attached to β-D-glucuronic acid and terminating in a β-D-mannose [START_REF] Elçin | Encapsulation of urease enzyme in xanthan-alginate spheres[END_REF], [START_REF] Goddard | Principles of polymer science and technology in cosmetics and personal care[END_REF] (Fig. 6). Xanthan is soluble in cold water and hydrates rapidly; it is considered to be essentially non-gelling. The viscosity gradually decreases with increasing shear stress and recovers thereafter, and it remains stable over a wide range of pH (2-12) and temperature. Hydrogen bonding and polymer chain entanglement form a network, which leads to the high viscosity. Two chains may align to form a double helix, providing a rather rigid configuration. The conversion between the ordered double-helical conformation and the single, more flexible extended chain may take place over hours between 40°C and 80°C [START_REF] Nedovic | An overview of encapsulation technologies for food applications[END_REF]. The extraordinary resistance to enzymatic degradation is attributed to the shielding of the backbone by the side chains.
Xanthan undergoes cryogelation [START_REF] Giannouli | Cryogelation of xanthan[END_REF].
Chitosan
Chitosan is a positively charged linear polysaccharide obtained by extracting chitin from crustacean shells and then deacetylating it. Chitosan is a hydrophilic, cationic and crystalline polymer that shows film-forming ability and gelation characteristics. It is composed of glucosamine units, which can polymerize by cross-linking in the presence of anions and polyanions (Fig. 7). Chitosan is preferably used as a coating rather than as a capsule material because it is not efficient at increasing cell viability by simple encapsulation [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF]. Chitosan capsules are effective with hydrophilic macromolecules through electrostatic interactions or hydrogen bonding. However, chitosan seems to have inhibitory effects on LAB (Groboillot et al., 1993; [START_REF] Green | Membrane formation by interfacial cross-linking of chitosan for microencapsulation of Lactococcus lactis[END_REF]).
Pectin
Pectin is a heteropolysaccharide that reinforces cellulose structures in plant cell walls. The backbone of pectin contains regions of galacturonic acid residues that can be methoxylated. Natural pectin typically has a degree of esterification of 70 to 80%, but this can be varied by changing the extraction and production conditions. The gelation and structure-forming characteristics of pectin are due to the degree of esterification (DE) and the arrangement of the methyl groups over the pectin molecule (Fig. 8). Gelation of high-methoxyl pectin (HMP) requires a pH of about 3 and a high content of soluble solids, while gelation of low-methoxyl pectin (LMP) requires the presence of a controlled amount of calcium ions and needs neither sugar nor acid [START_REF] Joye | Biopolymer-based nanoparticles and microparticles: Fabrication, characterization, and application[END_REF]. Pectin forms rigid gels by reacting with calcium salts or multivalent cations, which cross-link the galacturonic acids of the main polymer chains. The calcium pectinate "egg-box" structure is obtained by association of the carboxylic acid groups of the pectin molecules with the calcium ions [START_REF] Jain | Perspectives of biodegradable natural polysaccharides for site-specific drug delivery to the colon[END_REF]; (L. [START_REF] Liu | Pectin-based systems for colonspecific drug delivery via oral route[END_REF]); [START_REF] Tiwari | Preparation and characterization of satranidazole loaded calcium pectinate microbeads for colon specific delivery; Application of response surface methodology[END_REF]. Biopolymer particles can be produced by combining pectin with other polymers or cations. Its degradation rate can also be modified by chemical modification (de Vos, Faas, Spasojevic, & Sikkema, 2010). Pectin is degradable by bacteria; it can therefore be used as a gelling agent in food, in medicines and as a source of dietary fiber, because it remains intact in the stomach and the small intestine.
Dextran
Dextran (Fig. 9) is one of the polysaccharides that are promising targets for modification, because it has a large number of hydroxyl groups [START_REF] Suarez | Tunable protein release from acetalated dextran microparticles: a platform for delivery of protein therapeutics to the heart post-MI[END_REF]; (H. [START_REF] Wang | Amphiphilic dextran derivatives nanoparticles for the delivery of mitoxantrone[END_REF]).
Dextran is highly soluble in water but can be modified to change its solubility pattern, which has implications when forming biopolymer particles by antisolvent precipitation [START_REF] Broaders | Acetalated dextran is a chemically and biologically tunable material for particulate immunotherapy[END_REF]. Dextran is widely used as a biomaterial among polysaccharide polymers because the required chemical modifications of its hydroxyl groups are low cost [START_REF] Hu | Biodegradable amphiphilic polymer-drug conjugate micelles[END_REF]; (Kuen Yong Lee, Jeong, Kang, [START_REF] Lee | Electrospinning of polysaccharides for regenerative medicine[END_REF]). Dextran is rapidly degraded by dextranases produced in the gut.
Gellan gum
Gellan gum is a microbial polysaccharide derived from Pseudomonas elodea, which is constituted of a tetrasaccharide repeating unit composed of two glucose units, one glucuronic acid and one rhamnose (M.-J. Chen & Chen, 2007) (Fig. 10). Gellan is not easily degraded by enzymes and is stable over a wide pH range; it is therefore used in the food industry [START_REF] Nag | Microencapsulation of probiotic bacteria using pHinduced gelation of sodium caseinate and gellan gum[END_REF], [START_REF] Yang | Preparation and evaluation of chitosancalcium-gellan gum beads for controlled release of protein[END_REF]. Gellan gum particles are useful for colonic delivery of active compounds because they can be degraded by galactomannanases in colonic fluids [START_REF] Yang | Preparation and evaluation of chitosancalcium-gellan gum beads for controlled release of protein[END_REF]. For forming delivery systems, there are two types of gellan gum: the highly acetylated (native) and the deacetylated forms [START_REF] Singh | Effects of divalent cations on drug encapsulation efficiency of deacylated gellan gum[END_REF].
Starch
Starch, in its resistant form, is not digested by amylases in the small intestine [START_REF] Singh | Starch digestibility in food matrix: a review[END_REF]; [START_REF] Anal | Recent advances in microencapsulation of probiotics for industrial applications and targeted delivery[END_REF] and releases bacterial cells in the large intestine [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF]. It improves probiotic delivery to the intestine in a viable and metabolically active state, because it offers an ideal surface for the adherence of the probiotic cells [START_REF] Anal | Recent advances in microencapsulation of probiotics for industrial applications and targeted delivery[END_REF], [START_REF] Crittenden | Adhesion of bifidobacteria to granular starch and its implications in probiotic technologies[END_REF].
Proteins
Collagen
Collagen is the major component of mammalian connective tissue. It is found in high concentrations in tendon, skin, bone, cartilage and ligament. The collagen protein is composed of a triple helix, which generally consists of two identical chains (α1) and an additional chain that differs slightly in its chemical composition (α2) (Fig. 13). Owing to its biocompatibility, biodegradability, abundance in nature and natural ability to bind cells, it has been used for cell immobilization. Human-like collagen (HLC) is produced by recombinant Escherichia coli BL21 containing human-like collagen cDNA.
Collagen may be gelled using changes in pH, allowing cell encapsulation in a minimally traumatic manner [START_REF] Rosenblatt | Injectable collagen as a pH-sensitive hydrogel[END_REF], [START_REF] Senuma | Bioresorbable microspheres by spinning disk atomization as injectable cell carrier: from preparation to in vitro evaluation[END_REF]. It may be processed into fibers and macroporous scaffolds [START_REF] Chevallay | Collagen-based biomaterials as 3D scaffold for cell cultures: applications for tissue engineering and gene therapy[END_REF], [START_REF] Roche | Native and DPPA crosslinked collagen sponges seeded with fetal bovine epiphyseal chondrocytes used for cartilage tissue engineering[END_REF], while [START_REF] Su | Encapsulation of probiotic Bifidobacterium longum BIOMA 5920 with alginate-human-like collagen and evaluation of survival in simulated gastrointestinal conditions[END_REF] prepared microspheres using alginate and HLC by electrostatic droplet generation. The results showed that encapsulated probiotic bacteria tolerated simulated gastric juice better than free probiotic bacteria.
Figure 12: collagen triple helix (from Wikipedia)
Gelatin
Gelatin is a protein derived from collagen by partial hydrolysis. It contains between 300 and 4,000 amino acids (Fig. 13). Gelatin is soluble in most polar solvents and forms a solution of high viscosity in water. Gelatin gels form on cooling of solutions with concentrations above about 1 wt%. The firmness of the gels depends on the quality of the gelatin and on the pH; the gels are clear, elastic, transparent and thermo-reversible, dissolving again at 35-40°C (Zuidam & Nedovic, 2010). Gelatin has been used to prepare microcapsules of probiotic bacteria, alone or with other compounds. When the pH of gelatin is below its isoelectric point, the net charge of gelatin is positive and a strong interaction is formed with negatively charged materials [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF], [START_REF] Anal | Recent advances in microencapsulation of probiotics for industrial applications and targeted delivery[END_REF]. Higher concentrations of gelatin produce strong capsules that resist cracking and breaking. A mixture of gellan gum and gelatin has been used for the encapsulation of Lactococcus lactis ssp. cremoris [START_REF] Hyndman | Microencapsulation of Lactococcus lactis within cross-linked gelatin membranes[END_REF].
Figure 13: Gelatin structure
Milk proteins
Caseins
Caseins are the most prevalent phosphoproteins in milk and are extremely heat-stable proteins. They are obtained by destabilizing skim milk micelles by various processes. The main casein types are presented in Fig. 14. The products obtained are mineral acid casein, lactic acid casein and rennet casein. Microcapsules can be prepared using a milk protein solution and the enzyme rennet: aggregation of the casein occurs when the rennet enzyme cleaves the κ-casein molecule [START_REF] Heidebach | Microencapsulation of probiotic cells for food applications[END_REF]. Non-covalent cross-links are then gradually formed between chains to form a final gel above 18°C [START_REF] Bansal | Aggregation of rennet-altered casein micelles at low temperatures[END_REF]. Casein capsules are able to encapsulate probiotics without significant cell loss during the encapsulation process, and caseins can protect the bacteria in the capsules during incubation under simulated gastric conditions.
This technique is therefore suitable for probiotic applications in food.
Whey protein
Whey is a by-product of cheese or casein production. Whey proteins include α-lactalbumin, β-lactoglobulin, immunoglobulins and serum albumin, as well as various minor proteins (Fig. 15). Whey proteins can be gelled in two different ways: heat-induced gelation (heating above the thermal denaturation temperature under appropriate pH and ionic strength conditions) or cold-induced gelation (addition of calcium to a mixture, using either extrusion or phase-separation methods). Whey protein gels have been produced using various whey proteins, such as β-lactoglobulin [START_REF] Jones | Comparison of proteinpolysaccharide nanoparticle fabrication methods: Impact of biopolymer complexation before or after particle formation[END_REF], (Jones & McClements, 2011), and lactoferrin [START_REF] Bengoechea | Formation and characterization of lactoferrin/pectin electrostatic complexes: Impact of composition, pH and thermal treatment[END_REF].
Plant proteins
Proteins derived from plants provide some advantages over animal proteins for forming biopolymer particles: reduced risk of contamination and infection, lower cost, and usability in vegetarian food products [START_REF] Regier | Fabrication and characterization of DNA-loaded zein nanospheres[END_REF]; (L. [START_REF] Chen | Elaboration and Characterization of Soy/Zein Protein Microspheres for Controlled Nutraceutical Delivery[END_REF]). Among plant proteins, hydrophobic cereal proteins are widely used to produce biopolymer particles by various methods. These particles have been shown to be suitable for the encapsulation of active ingredients (A. Patel, Hu, Tiwari, & Velikov, 2010a), [START_REF] Ezpeleta | Gliadin nanoparticles for the controlled release of all-trans-retinoic acid[END_REF], [START_REF] Duclairoir | Alpha-tocopherol encapsulation and in vitro release from wheat gliadin nanoparticles[END_REF], because they are water-insoluble, biodegradable and biocompatible. Emulsifier-stabilized protein particles may be used to improve the chemical stability of encapsulated active ingredients [START_REF] Podaralla | Influence of Formulation Factors on the Preparation of Zein Nanoparticles[END_REF], (A. Patel et al., 2010a), (A. R. [START_REF] Patel | Colloidal approach to prepare colour blends from colourants with different solubility profiles[END_REF]), because they have good physical stability over a wide range of pH conditions (A. [START_REF] Patel | Synthesis and characterisation of zein-curcumin colloidal particles[END_REF]). Other plant proteins that have recently been shown to be suitable for producing biopolymer particles include pea proteins [START_REF] Pierucci | Comparison of alpha-tocopherol microparticles produced with different wall materials: pea protein a new interesting alternative[END_REF] and soy protein isolate [START_REF] Liu | Soy Protein Nanoparticle Aggregates as Pickering Stabilizers for Oil-in-Water Emulsions[END_REF]. Soy protein isolate (SPI) contains an abundant mixture of hydrophilic globular proteins and is a cheap, renewable resource. Chickpea proteins also show good functional attributes; two salt-soluble globulin-type proteins, legumin and vicilin, dominate (J. [START_REF] Wang | Entrapment, survival and release of Bifidobacterium adolescentis within chickpea protein-based microcapsules[END_REF]).
Chickpea protein coupled with alginate were designed to serve as a suitable probiotic carrier intended for food applications [START_REF] Klemmer | Pea protein-based capsules for probiotic and prebiotic delivery[END_REF]. These capsules were able to protect B. adolescentis within simulated gastric juice and simulated intestinal fluids. -Non toxicity. -Biocompatibility. -Low cost. -Alginate beads are sensitive to the acidic environment. -Not compatible for the resistance of the microparticles in the stomach conditions. -Very porous. + [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF] ( [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF] -Carrageenan -Softer, elastic and cohesive gels. -K+ ions to stabilize the gel and prevent swelling -Brittle gels ; not able to withstand stresses. -KCl has been reported to have an inhibitory effect on some lactic acid bacteria. - -Ethylcellulose is not biodegradable and has been used as a hydrophobic polymer to produce hollow-shell particles and particles with a solid core. b-Sodium carboxymethyl cellulose (NaCMC). -High hydrophilicity. [START_REF] Gunduz | Continuous generation of ethyl cellulose drug delivery nanocarriers from microbubbles[END_REF] (Montes, Gordillo, Pereyra, & de la Ossa, 2011) [START_REF] Hubbe | Cellulosic nanocomposites: a review[END_REF] Xanthan Gum -Stable over a wide range of pH (2)(3)[START_REF]Antilisterial activity[END_REF](5)(6)(7)(8)(9)(10)(11)(12) and temperature. -Resistance to enzymatic degradation. -Soluble in cold water, hydrates rapidly. It is considered to be basically non-gelling + [START_REF] Elçin | Encapsulation of urease enzyme in xanthan-alginate spheres[END_REF] (Goddard & Gruber, 1999) [START_REF] Nedovic | An overview of encapsulation technologies for food applications[END_REF]) [START_REF] Giannouli | Cryogelation of xanthan[END_REF] Chitosan -Useful for encapsulation of hydrophilic macromolecules. -Poor efficiency for increasing cell viability by encapsulation and it is preferable to use as a coat but not as a capsule. -Inhibitory effects on LAB. - [START_REF] Mortazavian | Survival of encapsulated probiotic bacteria in Iranian yogurt drink (Doogh) after the product exposure to simulated gastrointestinal conditions[END_REF]) (Groboillot et al., 1993) Pectin -Can be used to form biopolymer particles in combination with other polymers. -Is used as gelling agent in food, in medicines and as a source of dietary fiber. - -Is rapidly degraded by dextranases produced in the gut. -Soluble in water. -Degradation rapidly and acid-sensitivity + [START_REF] Suarez | Tunable protein release from acetalated dextran microparticles: a platform for delivery of protein therapeutics to the heart post-MI[END_REF] (H. [START_REF] Wang | Amphiphilic dextran derivatives nanoparticles for the delivery of mitoxantrone[END_REF]) [START_REF] Hu | Biodegradable amphiphilic polymer-drug conjugate micelles[END_REF] (Kuen Yong [START_REF] Lee | Electrospinning of polysaccharides for regenerative medicine[END_REF] Gellan gum -Not easily degraded by enzymes. -Stable over widely pH range. -Useful for colonic delivery of active compounds. + (M.-J. Chen & Chen, 2007) [START_REF] Yang | Preparation and evaluation of chitosancalcium-gellan gum beads for controlled release of protein[END_REF] (B. N. 
[START_REF] Singh | Effects of divalent cations on drug encapsulation efficiency of deacylated gellan gum[END_REF]. Starch -Good enteric delivery characteristic that is a better release of the bacterial cells in the large-intestine. -Ideal surface for the adherence of the probiotic cells to the starch granules. -Improving probiotic delivery in a viable and a metabolically active state to the intestine. -Poor solubility. -High surface tension. Gelatin -Clear, elastic, transparent, and thermo-reversible gels. -used for probiotic encapsulation. -High deformation of capsules. -Law the values of viscoelastic parameters. + (Zuidam & Nedovic, 2010) [START_REF] Krasaekoopt | Evaluation of encapsulation techniques of probiotics for yoghurt[END_REF]) [START_REF] Hyndman | Microencapsulation of Lactococcus lactis within cross-linked gelatin membranes[END_REF] Collagen -Used in cell immobilization due to its biocompatibility, biodegradability, abundance in nature, and natural ability to bind cells. -can form a high-water-content hydrogel composite. -High cost to purify, natural variability of isolated collagen. - -Proteins plants provide some advantages over animal proteins to form a biopolymers molecules since the risk of contamination and infection is reduced -Cheap. -Can be used in vegetarian products. -Water insoluble, biodegradable and biocompatible. -Good physical stability over a range of pH conditions. Chickpea Protein -Good functional attributes and nutritional importance. -Good protection to probiotic bacteria -+ [START_REF] Podaralla | Influence of Formulation Factors on the Preparation of Zein Nanoparticles[END_REF] (A. R. [START_REF] Patel | Colloidal approach to prepare colour blends from colourants with different solubility profiles[END_REF]. (A. Patel, Hu, Tiwari, & Velikov, 2010b) (F. [START_REF] Liu | Soy Protein Nanoparticle Aggregates as Pickering Stabilizers for Oil-in-Water Emulsions[END_REF] (J. [START_REF] Wang | Entrapment, survival and release of Bifidobacterium adolescentis within chickpea protein-based microcapsules[END_REF]) [START_REF] Klemmer | Pea protein-based capsules for probiotic and prebiotic delivery[END_REF] Microencapsulation methods There are several encapsulation techniques. Before selecting one of them, people must take into consideration the following point (N.J. Zuidam & Nedovic, 2010): i) What conditions that affect the viability of probiotics? (ii) Which processing conditions are used during food production? (iii)What will be the storage conditions of the food product containing the Microcapsules? (iv)Which particle size is needed to incorporate in the food product? [START_REF] Kailasapathy | Encapsulation technologies for functional foods and nutraceutical product development[END_REF]; [START_REF] Zhao | Measurement of particle diameter of Lactobacillus acidophilus microcapsule by spray drying and analysis on its microstructure[END_REF]. There are different parameters to optimize spray-drying such as air flow, feed rate, feed temperature, inlet air temperature, and outlet air temperature [START_REF] Vega | Invited review: spray-dried dairy and dairy-like emulsions-compositional considerations[END_REF]; [START_REF] O'riordan | Evaluation of microencapsulation of a Bifidobacterium strain with starch as an approach to prolonging viability during storage[END_REF]. 
The spray-drying process has many advantages: it can be operated on a continuous basis, it is rapid and relatively low cost, and it can be applied on a large scale suitable for industrial applications [START_REF] Brun-Graeppi | Cell microcarriers and microcapsules of stimuli-responsive polymers[END_REF]; [START_REF] Gouin | Microencapsulation: industrial appraisal of existing technologies and trends[END_REF]. The disadvantages of spray-drying are linked to the "high" temperature used for drying, which is not suitable for probiotic bacteria [START_REF] Favaro-Trindade | Microencapsulation of L. acidophilus (La-05) and B. lactis (Bb-12) and evaluation of their survival at the pH values of the stomach and in bile[END_REF]; [START_REF] Ananta | Cellular injuries and storage stability of spraydried Lactobacillus rhamnosus GG[END_REF]; (Oliveira, Moretti, Boschini, Baliero, Freitas, & Favaro-Trindade, 2007). The compatibility between the bacterial strain and the type of encapsulating polymer has to be controlled to allow bacterial survival during the spray-drying process as well as during storage [START_REF] Desmond | Improved survival of Lactobacillus paracasei NFBC 338 in spray-dried powders containing gum acacia[END_REF].
Freeze-drying
Freeze-drying has been widely used to produce probiotic powders. During the process, the solvent or the suspension medium is frozen and then sublimed [START_REF] Santivarangkna | Alternative drying processes for the industrial preservation of lactic acid starter cultures[END_REF], [START_REF] Solanki | Development of Microencapsulation Delivery System for Long-Term Preservation of Probiotics as Biotherapeutics Agent[END_REF]. The freeze-drying process is divided into three stages: freezing, primary drying and secondary drying. The freeze-drying processing conditions are milder than spray-drying, and higher probiotic survival rates are typically achieved [START_REF] Wang | Entrapment, survival and release of Bifidobacterium adolescentis within chickpea protein-based microcapsules[END_REF]. The disadvantage of freeze-drying is linked to ice crystal formation and the stress imposed by high osmolarity, which damage the cell membrane. To increase the viability of probiotics during dehydration, skim milk powder, whey protein, glucose, maltodextrin or trehalose are added to the drying media before freeze-drying to act as cryoprotectants [START_REF] Basholli-Salihu | Effect of lyoprotectants on β-glucosidase activity and viability of Bifidobacterium infantis after freeze-drying and storage in milk and low pH juices[END_REF]. Cryoprotectants reduce the osmotic difference between the internal and external environments by accumulating within the cells [START_REF] Kets | Effect of Compatible Solutes on Survival of Lactic Acid Bacteria Subjected to Drying[END_REF].
Spray freeze-drying
The spray freeze-drying technique combines processing steps that are common to freeze-drying and spray-drying. Probiotic cells are in a solution that is atomized into the cold vapor phase of a cryogenic liquid such as liquid nitrogen. The microcapsules formed by dispersion of the frozen droplets are then dried in a freeze dryer (Amin, Thakur, Jain, 2013); (H. [START_REF] Wang | Amphiphilic dextran derivatives nanoparticles for the delivery of mitoxantrone[END_REF]); [START_REF] De Vos | Encapsulation for preservation of functionality and targeted delivery of bioactive food components[END_REF]; (K.
[START_REF] Kailasapathy | Encapsulation technologies for functional foods and nutraceutical product development[END_REF]); [START_REF] Semyonov | Microencapsulation of Lactobacillus paracasei by spray freeze drying[END_REF]. The main advantages of spray freeze-drying are a controlled particle size and a higher specific surface area than spray-dried capsules. This technique nevertheless has some disadvantages, including high energy consumption, long processing times and a cost that is 30-50 times higher than spray-drying (Zuidam & Nedovic, 2010). In some studies, the use of polysaccharides contributed to reducing cell mobility in the glassy-state matrix, acting as a protective excipient and improving cell viability during freezing [START_REF] Semyonov | Microencapsulation of Lactobacillus paracasei by spray freeze drying[END_REF].
Spray-chilling / Spray-cooling
Spray-chilling, spray-cooling and spray-congealing are processes similar to spray-drying, but no water is evaporated and the air used is cold, which enables particle solidification. The microcapsules are quickly formed when the matrix containing the bioactive compound comes into contact with the cold air [START_REF] Champagne | Microencapsulation for the improved delivery of bioactive compounds into foods[END_REF]. Spray-chilling mainly uses a molten lipid matrix as carrier. Its disadvantages are a low encapsulation capacity and release of the core material during storage [START_REF] Sato | Polymorphism in Fats and Oils[END_REF]. Spray-chilling is a cheaper encapsulation technology that has potential for industrial-scale manufacture [START_REF] Gouin | Microencapsulation: industrial appraisal of existing technologies and trends[END_REF]. This technology can generate smaller beads, which is desirable in food processing. [START_REF] Pedroso | Protection of Bifidobacterium lactis and Lactobacillus acidophilus by microencapsulation using spray-chilling[END_REF] used spray-chilling to microencapsulate Bifidobacterium lactis and Lactobacillus acidophilus using palm and palm kernel fats as wall materials. The solid lipid microparticles provided effective protection for the probiotics against gastric and intestinal fluids.
Fluid bed
In the fluid bed process, a cell suspension is sprayed and dried on inert carriers. The advantages of the fluid bed are mainly the control over temperature and the lower cost. The disadvantage is that this technology is difficult to master over long durations. This method can be used to produce multilayer coatings with two different fats [START_REF] Champagne | The determination of viable counts in probiotic cultures microencapsulated by spray-coating[END_REF]. The fluid bed is one of the encapsulation technologies most widely applied commercially to probiotics; some companies have developed commercial products, such as Probiocap® and Duolac® [START_REF] Burgain | Encapsulation of probiotic living cells: From laboratory scale to industrial applications[END_REF].
Impinging aerosol technology
Impinging aerosol technology uses two separate aerosols: one with the microbial suspension in alginate solution and the other with calcium chloride. The alginate mixture is injected from the top of a cylinder while the calcium chloride is injected from the base, to produce alginate microcapsules [START_REF] Sohail | Survivability of probiotics encapsulated in alginate gel microbeads using a novel impinging aerosols method[END_REF].
The advantages of impinging aerosol technology are that it is suitable for encapsulating heat-labile and solvent-sensitive materials, it allows large-volume production, and the capsules can be spray- or freeze-dried at a later stage. Microcapsules with a diameter of 2 mm were obtained and offered high protection to L. rhamnosus GG in gastric acid and bile [START_REF] Sohail | Survivability of probiotics encapsulated in alginate gel microbeads using a novel impinging aerosols method[END_REF].
Electrospinning
This technique is a combination of two techniques, namely electrospraying and spinning. A high electric field is applied to a fluid coming out of the tip of a die that acts as one of the electrodes. This leads to droplet deformation and finally to the ejection of a charged jet from the tip towards the counter electrode, leading to the formation of continuous capsules (cylinders). The main advantage of the electrospinning technique is that the capsules are very thin ("a few nanometers") with large surface areas [START_REF] Agarwal | Use of electrospinning technique for biomedical applications[END_REF]. (López-Rubio, Sanchez, Wilkanowicz, Sanz, & Lagaron, 2012) compared two types of electrospun microcapsules, with probiotics encapsulated in a protein-based matrix (whey protein concentrate) and in a carbohydrate-based matrix (pullulan); the whey protein microcapsules gave greater cell viability than the pullulan structures.
Methods to produce humid capsules
Emulsification and ionic gelation
Emulsification is a technique to encapsulate live probiotics that uses different polysaccharides as encapsulating materials, such as alginate, κ-carrageenan, gellan gum, xanthan or pectin. For encapsulation in an emulsion, an emulsifier and/or a surfactant is needed. A solidifying agent is then added to the emulsion (Chen & Chen, 2007); [START_REF] Kailasapathy | Encapsulation technologies for functional foods and nutraceutical product development[END_REF]; [START_REF] De Vos | Encapsulation for preservation of functionality and targeted delivery of bioactive food components[END_REF]. This coupled technique gives a high survival rate of the bacteria but provides capsules of large sizes and variable shapes. The gel beads can be coated with a second polymer to provide better protection to the cells and improve organoleptic properties (K. [START_REF] Kailasapathy | Encapsulation technologies for functional foods and nutraceutical product development[END_REF]).
Emulsification and enzymatic gelation
This technique uses milk proteins, with the probiotics encapsulated by means of enzyme-induced gelation; milk proteins have good gelation properties and offer good protection for probiotics [START_REF] Heidebach | Microencapsulation of probiotic cells by means of rennet-gelation of milk proteins[END_REF], [START_REF] Heidebach | Microencapsulation of probiotic cells for food applications[END_REF]. This method produces spherical particles that are insoluble in water. [START_REF] Heidebach | Microencapsulation of probiotic cells by means of rennet-gelation of milk proteins[END_REF] detailed an example of rennet gelation to prepare microcapsules. This technique permitted the use of alginate, κ-carrageenan, gellan gum or xanthan for capsule coating, even though they are not allowed for use in dairy products in some countries [START_REF] Picot | Production of Multiphase Water-Insoluble Microcapsules for Cell Microencapsulation Using an Emulsification/Spray-drying Technology[END_REF].
Emulsification and interfacial polymerization
Interfacial polymerization is an alternative technique that is performed in one step. The technique requires the formation of an emulsion: the discontinuous phase contains an aqueous suspension of probiotic cells and the continuous phase is an organic solvent. To start the polymerization reaction, a biocompatible agent is added. The microcapsules obtained are thin and have a strong membrane (Kaila [START_REF] Kailasapathy | Microencapsulation of probiotic bacteria: technology and potential applications[END_REF]). Interfacial polymerization of microcapsules has been used to improve productivity in fermentation [START_REF] Yáñez-Fernández | Rheological characterization of dispersions and emulsions used in the preparation of microcapsules obtained by interfacial polymerization containing Lactobacillus sp[END_REF].
Extrusion
The extrusion technique is the most popular method for producing humid microcapsules [START_REF] Green | Membrane formation by interfacial cross-linking of chitosan for microencapsulation of Lactococcus lactis[END_REF], [START_REF] Koyama | Cultivation of yeast and plant cells entrapped in the lowviscous liquid-core of an alginate membrane capsule prepared using polyethylene glycol[END_REF], [START_REF] Özer | Effect of Microencapsulation on Viability of Lactobacillus acidophilus LA-5 and Bifidobacterium bifidum BB-12 During Kasar Cheese Ripening[END_REF]. It involves preparing a hydrocolloid solution, mixing it with the microbial cells, extruding the cell suspension through a needle and dropping it into a solution of a cross-linking agent [START_REF] Heidebach | Microencapsulation of probiotic cells for food applications[END_REF]. Gelation occurs by combination of the polymer with the cross-linking agent. The advantages of this technique are its operational simplicity, its low cost and operating conditions suitable for probiotic bacteria viability [START_REF] De Vos | Encapsulation for preservation of functionality and targeted delivery of bioactive food components[END_REF]. The main disadvantage of this technique is that the microcapsules are larger than 500 µm [START_REF] Reis | Review and current status of emulsion/dispersion technology using an internal gelation process for the design of alginate particles[END_REF]. In addition, rapid cross-linking between the droplets of polymer solution and the cross-linking agent leads to rapid hardening of the microcapsule surface, which delays the movement of cross-linking ions into the inner core [START_REF] Liu | Characterization of structure and diffusion behaviour of Ca-alginate beads prepared with external or internal calcium sources[END_REF].
Co-Extrusion
Co-extrusion technology is based on a laminar liquid jet that is broken into equally sized droplets by a vibrating nozzle (Prüsse, Bilancetti, Bučko, Bugarski, Bukowski, Gemeiner, Lewińska, Manojlovic, Massart, Nastruzzi, Nedovic, et al., 2008), [START_REF] Del Gaudio | Mechanisms of formation and disintegration of alginate beads obtained by prilling[END_REF]. The droplets are then gelled in a cross-linking solution. The diameter of the microcapsules is controlled by two main factors, which are the flow rate and the polymer solution viscosity [START_REF] Del Gaudio | Mechanisms of formation and disintegration of alginate beads obtained by prilling[END_REF].
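As a rough illustrative estimate (a back-of-the-envelope relation assumed here, not taken from the cited references), ideal laminar jet break-up detaches one droplet per vibration period, so the droplet volume equals the volumetric flow rate divided by the vibration frequency, from which the droplet diameter before gelation follows:
\[ V_{droplet} = \frac{Q}{f}, \qquad d = \left( \frac{6Q}{\pi f} \right)^{1/3} \]
For example, an assumed flow rate Q = 10 mL/h combined with a nozzle frequency f = 1 kHz would give droplets of roughly 0.17 mm in diameter; the final bead size may differ after cross-linking and shrinkage, and real devices deviate from this ideal break-up condition.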
[START_REF] Graff | Increased intestinal delivery of viable Saccharomyces boulardii by encapsulation in microspheres[END_REF] encapsulated Saccharomyces boulardii using a laminar jet break-up technique. The microcapsules were coated with a chitosan solution, which significantly reduced the degradation of the yeast cells in the gastrointestinal tract. [START_REF] Huang | Microfluidic device utilizing pneumatic micro-vibrators to generate alginate microbeads for microencapsulation of cells[END_REF] obtained microcapsules with two concentrations of alginate solution introduced separately into the inner and outer chambers of a coaxial nozzle. The polymer droplets were cross-linked in a calcium chloride solution. Adjusting the concentrations of the shell and core materials gave good control over the size of the alginate microspheres and over the release of the microbial cells from the microspheres.
Coacervation
This technique can be used to encapsulate flavor oils, preservatives and enzymes as well as microbial cells [START_REF] John | Bioencapsulation of microbial cells for targeted agricultural delivery[END_REF]; (Oliveira, Moretti, Boschini, Baliero, Freitas, Freitas, et al., 2007); (Oliveira, Moretti, Boschini, Baliero, Freitas, & Favaro-Trindade, 2007). It uses specific pH, temperature and solution composition to separate one or more incompatible polymers from an initial coating solution. The incompatible polymers are added to the coating polymer solution and the dispersion is stirred. Separation of the incompatible polymer and deposition of a dense coacervate phase surrounding the core material to form microcapsules occur as a result of changes in the physical parameters [START_REF] Gouin | Microencapsulation: industrial appraisal of existing technologies and trends[END_REF]; [START_REF] John | Bioencapsulation of microbial cells for targeted agricultural delivery[END_REF]; [START_REF] Nihant | Microencapsulation by coacervation of poly(lactide-co-glycolide) IV. Effect of the processing parameters on coacervation and encapsulation[END_REF]; (Oliveira, Moretti, Boschini, Baliero, Freitas, Freitas, et al., 2007); (Oliveira, Moretti, Boschini, Baliero, Freitas, & Favaro-Trindade, 2007). The most important factors for the coacervation technique are the volume of the dispersed phase, the ratio of incompatible polymer to coating polymer, the stirring rate of the dispersion and the core material to be encapsulated (N. [START_REF] Nihant | Microencapsulation by coacervation of poly(lactide-co-glycolide) IV. Effect of the processing parameters on coacervation and encapsulation[END_REF]). In the coacervation technique, the composition and viscosity of the polymer solutions in the supernatant phases affect the size distribution, surface morphology and internal porosity of the microcapsules (Nicole [START_REF] Nihant | Microencapsulation by Coacervation of Poly(lactide-co-glycolide). III. Characterization of the Final Microspheres[END_REF]), (N. [START_REF] Nihant | Microencapsulation by coacervation of poly(lactide-co-glycolide) IV. Effect of the processing parameters on coacervation and encapsulation[END_REF]). (Oliveira, Moretti, Boschini, Baliero, Freitas, Freitas, et al., 2007) used the coacervation technique to encapsulate B. lactis (BI 01) and L. acidophilus (LAC 4) in a casein/pectin complex. This technology showed a good encapsulation capacity and controlled release of the core material from the microcapsules triggered by mechanical stress, temperature and pH changes.
So, the coacervation technique is a favorable with probiotic bacteria (Oliveira, Moretti, Boschini, Baliero, Freitas, Freitas, et al., 2007). The coacervation method disadvantage is that it cannot be used for producing very small microspheres [START_REF] John | Bioencapsulation of microbial cells for targeted agricultural delivery[END_REF]. Table 3: Encapsulation methods Encapsulation methods Advantages Disadvantages Valid with probiotic bacteria Methods to produce dry capsules Spray Drying -It can be operated on a continuous basis. -It can be applied on a large scale. -It is suitable for industrial application. -It is rapid and relatively low cost. -The high temperature used in the process may not be suitable for encapsulating probiotic bacteria. - Freeze Drying -The freeze-drying processing condition is milder than spray-drying higher probiotic survival rates are typically achieved. -Freezing causes damage to the cell membrane because of crystal formation and imparts stress condition by high osmolarity. + Spray Freeze Drying -Controlled size. -Larger specific surface area than spray-dried capsules. -The use of high energy. -The long processing time and the cost which is 30-50 times expensive than spray-drying. + Spray chilling/ Spray cooling -cheapest encapsulation technology. -potential of industrial scale manufacture. -generates smaller beads. -The spray chilling mainly uses a molten lipid matrix as carrier. The micro particles that are produced can present some disadvantages, which include a low encapsulation capacity and the expulsion of core material during storage. + Fluid bed -Good temperature control. -Low cost. -Easy scale-up. -Technology difficult to control for longer duration. + Impinging aerosol technology -Suitable for encapsulating heat labile and solvent sensitive materials. -Large volume production capacity. + Electrospinning -Production of very thin capsules. -Large surface areas. + Methods to produce humid capsules Emulsification and ionic gelation -High survival rate of the bacteria. -Gel beads can be coated by second polymer that provides more protection for bacteria. -Large size ranges and shapes. + Emulsification and enzymatic gelation -Processing of water insoluble and spherical particles. -Use of coatings. -Coating microcapsules (enzymatic induced gelation) by alginate, Ƙcarrageenan, gellan-gum or xanthan that are not allowed in dairy products in some countries. + Extrusion -Simple operations. -Low cost. -Mild operational conditions ensuring high cell viability. -Inefficiency in producing microspheres smaller than 500 µm. -Less stable microspheres. + Co-Extrusion -size-controlled microspheres. + Litterature review 39 -Significant protection of the microorganisms in the gastrointestinal tract. -High production rate. Coacervation -Good encapsulation capacity. -Controlled liberation of core material from the microspheres by mechanical stress. -This method may not be used for producing small microspheres. + Techniques for capsules characterization 4.1. Microcapsules size, morphology and stability The particle size is often the most important characteristic of the capsules, and was measured by different type of microscopy or light scattering. Microscopy Optical and electron microscopies are used to measure the size of capsules, the surface topography, the thickness of membrane and, sometimes, the permeability of capsules membrane. 
Conventional Optical Microscopy This microscope is used to characterizing structures of capsules that are ≥ 0.2 µm as fixed by the wavelength of visible light, and the size of capsules can be measured (N.J. Zuidam & Nedovic, 2010). Confocal Laser Scanning Microscopy (CLSM) (CLSM) produce in-focus images of a fluorescent specimen by optical sectioning. CLSM provides a better spatial 3D image than electron microscopy and provides additional information such as the three-dimensional localization and quantification of the encapsulated phase. CLSM may allow definition of encapsulation rate without the need for any destruction, extraction and chemical assays [START_REF] Lamprecht | Characterization of microcapsules by confocal laser scanning microscopy: structure, capsule wall composition and encapsulation rate[END_REF]. CLSM could also provide the distribution of polymers and cross-linking ions [START_REF] Strand | Visualization of alginatepoly-L-lysine-alginate microcapsules by confocal laser scanning microscopy[END_REF]. [START_REF] Lamprecht | Characterization of microcapsules by confocal laser scanning microscopy: structure, capsule wall composition and encapsulation rate[END_REF]. Transmission Electron Microscopy (TEM) TEM is able of resolving structure with smaller dimension than optical microscopy [START_REF] Dash | Kinetic modeling on drug release from controlled drug delivery systems[END_REF]. TEM is used to measure the structures of very thin samples by passing electrons through them. TEM gives the morphology and shell thickness of encapsulates following fixation, dehydration, and sectioning [START_REF] Chen | Chitosan/beta-lactoglobulin core-shell nanoparticles as nutraceutical carriers[END_REF]. [START_REF] Chiu | Encapsulation of doxorubicin into thermosensitive liposomes via complexation with the transition metal manganese[END_REF] used TEM to analyze various liposome preparations (Fig. 17). [START_REF] Xu | Effect of molecular structure of chitosan on protein delivery properties of chitosan nanoparticles[END_REF] used TEM to examined diameter and spherical shape for capsules. Atomic force Microscopy Atomic force microscopy is used for observing the surface structure of particles. It used to obtained a resolution in the nanometer range and three-dimensional. Samples undergo relatively mild preparation procedures that reduce the risk of damaging or altering the sample properties prior to measurement [START_REF] Burey | Hydrocolloid gel particles: formation, characterization, and application[END_REF]. Particles produced using fluid gels typically have irregular shapes and can even have tail like structures (Frith, Garijo, Foster, & Norton, 2002) [START_REF] Williams | Microstructural origins of the rheology of fluid gels[END_REF]. (Burgain et al., 2013) (Fig. 19.20) used AFM to identify specific interactions between bacteria and whey proteins. Force measurements and topography images were made at room temperature and different pH. It was observed that many factors influence "Bacteria / dairy matrix" interactions, including the nature of proteins, the nature of strains and the pH of the media. Laser light scattering Laser light scattering also measures the size of microcapsules in the size range between 0.02-2.000 µm. 
Single particle optical sensing (SPOS) SPOS is used to measure particle size distribution (Onwulata, 2005); it is based on the magnitude of pulse generated by single particles passing through a small photo zone, illuminated by light from laser bulb, which can be correlated with the size of the particles [START_REF] Dodds | 13 -Techniques to analyse particle size of food powders[END_REF]. Focused beam reflectance measurement (FBRM) FBRM provides in situ/online characterization of non-spherical particles by measuring chord lengths of particles [START_REF] Li | Determination of non-spherical particle size distribution from chord length measurements. Part 2: Experimental validation[END_REF], [START_REF] Barrett | Characterizing the Metastable Zone Width and Solubility Curve Using Lasentec FBRM and PVM[END_REF]. In Fig 21, FBRM measurements were conducted in aqueous suspensions which were prepared with distilled water. In the image analysis experiments, the particles were evenly distributed on microscope slides and were measured in dry form [START_REF] Li | Determination of non-spherical particle size distribution from chord length measurements. Part 2: Experimental validation[END_REF]. Malven Zetasizer Malven Zetasizer characterized the electrical properties of biopolymer particles by ζ-potential. to evaluate the magnitude of the repulsion between capsules [START_REF] Legrand | Polymeric nanocapsules as drug delivery systems. A review[END_REF]. and predict the stability of particle suspensions to aggregation ( Jahanshahi & Babaei, 2008) ζ-Potential measurements are highly sensitive to pH and ionic strength. In addition, in systems consisting of a biopolymer mixture, it may be difficult to interpret the data since they will all contribute to the overall signal. Microcapsules composition, physical state and release To determine particle composition and distribution of active ingredients within particles, several techniques have been explored. FTIR spectroscopy FTIR gives information on the chemical structures and interactions between the matrix and the active compound (or bacteria). (Ben Messaoud et al., 2015a) used FTIR to investigated molecular interactions between alginate and thickening agents. (Ben messaoud et al., 2015b) studied the influence of the thickening agents for alginate capsules by modulated with anionic chitosan, xanthan gum and maltodextrin. The result showed that the release profile of cochineal red food dye changed considerably with the different thickening agents. After a one-day storage, capsules filled with chitosan avoided any molecular transport and 35% of the encapsulated red dye remained in the capsules filled with maltodextrin. X-ray photoelectron spectroscopy X-ray photoelectron spectroscopy was used for chemical analysis of particle surface composition, while elemental analysis has been used to study the overall composition of particles and evaluate if a certain compound was encapsulated within the particles. (Montes, Gordillo, Pereyra, & de la Ossa, 2011) used X-ray to determine the particle composition, size and shape. The powder samples were mounted on double sided adhesive and analyzed without any further treatment. Differential scanning calorimetry (DSC) DSC is used to detect the thermal changes in the sample during heating or cooling to show the presence of particular organization as crystals or detect interactions between biopolymers in the particles. 
(Ribeiro et al.) assessed the interactions between alginate and chitosan in capsule membranes (Yang et al.).

Rheological gel characterization
The rheometer investigates the influence of polymer composition on gel properties, such as the elastic modulus.

Spectrophotometer
The spectrophotometer is used to measure the relative amount of released active compound versus time. At scheduled time intervals, the amount released is determined from the solution absorbance at a 500 nm wavelength. At the end of the experiments, to determine the total mass initially encapsulated and the remaining amount, the capsules are destructured by sonication (Leick et al.).

Summary of the characterization techniques:

Particle size
- Malvern particle sizing (Mastersizer): particles in sizes ranging from 0.05 up to 900 µm.
- Single particle optical sensing (SPOS): particle size distribution (Onwulata, 2005).

Particle charge
- Malvern Zetasizer: electrical properties of biopolymer particles characterized by the ζ-potential (Jahanshahi & Babaei, 2008; Legrand et al.).

Particle morphology
- Conventional optical microscopy: structure and size of capsules (Zuidam & Nedovic, 2010b).
- Scanning Electron Microscopy (SEM): surface characteristics such as composition, shape and size (Jahanshahi & Babaei, 2008; Montes, Gordillo, Pereyra, & Martínez de la Ossa, 2011).
- Transmission Electron Microscopy (TEM): structure of very thin samples; morphology and shell thickness of encapsulates following fixation, dehydration and sectioning (Chen et al.).
- Atomic force microscopy (AFM): surface structure of particles, in three dimensions and with nanometer resolution (Burey et al.; Burgain et al., 2013).
- Confocal Laser Scanning Microscopy (CLSM): better spatial 3D image; additional information such as the three-dimensional location and quantification of the encapsulated phase; distribution of polymers and cross-linking ions (Lamprecht et al.).
- Focused beam reflectance measurement (FBRM): in situ/on-line characterization of non-spherical particles by measuring chord lengths (Li et al., 2005a; Barrett et al.).
- QICPIC: 2D and 3D views of the particles, from which several size and shape parameters (sphericity, convexity) are determined (Cellesi, Weber, Fussenegger, Hubbell, & Tirelli, 2004; Burgain et al., 2011).
Particle composition, physical state and release
- X-ray photoelectron spectroscopy: chemical analysis of the particle surface composition; particle size and shape; presence of compounds in the resulting precipitates.
- Spectrophotometer: determination of the total mass initially encapsulated (Leick et al.).

Particular interest of LAB encapsulation in alginate

Pure alginate capsules
The materials commonly used for the encapsulation of probiotic bacteria are polysaccharides originating from seaweed (κ-carrageenan, alginate), plants (starch, arabic gum) or bacteria (gellan, xanthan), and animal proteins (milk, gelatin). Alginates are extensively used at laboratory and industrial scale for encapsulation because they are cheap, readily available, biocompatible, and have low toxicity (Krasaekoopt et al.). There are various techniques to prepare alginate microcapsules. Dry techniques include freeze-drying (Shah et al.; Giulio et al.; Capela et al.; Ross et al.), spray-drying (K. Y. Lee et al.) and electrospraying (Laelorspoen et al.). Wet techniques, based on a liquid form, are also largely used: emulsification and ionic gelation (Hansen et al.; Mandal et al.; Allan-Wojtas et al.) and extrusion (Ivanova et al.; Chandramouli et al.; Sathyabama et al.; Corbo et al.; Muthukumarasamy et al.). There are, however, some disadvantages related to alginate microbeads. For example, the microbeads are very porous, which is a drawback when the aim is to protect the cells from their environment (Gouin).
Moreover, alginate microbeads are sensitive to acidic environments (Mortazavian et al.), which makes them poorly suited to stomach conditions. (Sousa et al.) showed that alginate microparticles were not able to protect the encapsulated probiotic cells stored at -20 °C for 60 days, especially from acid and particularly from bile salts. Nevertheless, the defects of alginate microparticles can be overcome by mixing alginates with other polymers, coating the capsules with another compound, or applying structural modifications using different additives (Krasaekoopt et al.).

Microparticle preparation

Microbeads (matrix) preparation
Alginate and pectin solutions (1 % (w/w)) were prepared with sterile physiological water (9 % sodium chloride, VWR, Belgium) or with sterile M17 broth supplemented with 0.5 % D(+) glucose. Preliminary studies indicated a positive effect of the addition of 0.5 % glucose on L. lactis growth and nisin production (data not shown). The L. lactis culture was regenerated by transferring a loopful of the stock culture into 10 mL of M17 broth and incubating at 30 ºC overnight. A 10 µL aliquot from the overnight culture was again transferred into 10 mL of M17 broth and grown at 30 ºC to the exponential or stationary phase of growth (6 and 48 h, respectively). L. lactis cells were collected by centrifugation (20 min, 4 °C, 5000 rpm) and diluted to obtain a target inoculum in the microbeads of 10^5 CFU.mg-1. Alginate/pectin hydrogel microspheres were made using the Encapsulator B-395 Pro (BÜCHI Labortechnik, Flawil, Switzerland). In this study five polymer ratios (A/P) were selected: 100/0; 75/25; 50/50; 25/75; 0/100. The encapsulation technology is based on the principle that a laminar flowing liquid jet breaks up into equal-sized droplets under a superimposed nozzle vibration. The vibration frequency determines the number of droplets produced and was adjusted to 1200 Hz to generate 1200 droplets per second. The flow rate was 3 mL.min-1. A 120 µm diameter nozzle was used for the preparation of the beads. Droplets fell into 250 mL of a sterile CaCl2 solution (100 mM) continuously stirred at 150 rpm to allow microbead formation. The beads were kept in the gelling bath for 15 minutes to complete the reticulation process and were then filtered and washed with buffer solution (9 % sodium chloride).

Microcapsules (core-membrane) preparation
SA, pure or with L. lactis, composed the membrane of the microcapsules. The SA solution (1.3 % (w/w)) was prepared with sterile physiological water (9 % sodium chloride).

Physico-chemical characterization of microparticles

Size
The mean size distribution of the capsules was measured using a laser light scattering particle size analyzer, Mastersizer S (Malvern Instruments Ltd, UK), equipped with a He-Ne laser (beam of light of 360 nm). The system was able to determine particles in sizes ranging from 0.05 up to 900 µm. Measurements were carried out in ten replicates for each system. Results were reported as the volume-weighted mean globule size D(4,3) in µm:

D(4,3) = Σ ni di^4 / Σ ni di^3     (1)

where ni is the number of particles and di the diameter of the particle (µm). A minimal computational sketch of this calculation is given below.
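The following is a minimal illustrative sketch of the D(4,3) calculation in Eq. (1), assuming that particle counts and diameters are available (for example, exported from the size analyser); the counts and size classes below are invented for the example, and the size-analyser software normally performs this computation internally.

```python
# Illustrative sketch: volume-weighted mean diameter D(4,3) as defined in Eq. (1).
# The particle counts n_i and diameters d_i (µm) below are hypothetical.

def d43(counts, diameters_um):
    """D(4,3) = sum(n_i * d_i^4) / sum(n_i * d_i^3)."""
    num = sum(n * d**4 for n, d in zip(counts, diameters_um))
    den = sum(n * d**3 for n, d in zip(counts, diameters_um))
    return num / den

def d32(counts, diameters_um):
    """Surface-weighted (Sauter) mean diameter D(3,2), shown for comparison."""
    num = sum(n * d**3 for n, d in zip(counts, diameters_um))
    den = sum(n * d**2 for n, d in zip(counts, diameters_um))
    return num / den

if __name__ == "__main__":
    d = [50, 120, 250, 400, 600]   # hypothetical size classes (µm)
    n = [500, 300, 150, 40, 5]     # hypothetical particle counts
    print(f"D(4,3) = {d43(n, d):.1f} µm, D(3,2) = {d32(n, d):.1f} µm")
```

Because large particles are raised to the fourth power in the numerator, even a small number of them shifts D(4,3) markedly, which is why this descriptor was preferred here.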
The D(4,3) was chosen instead of D(3,2) since it is very sensitive to the presence of small amounts of large particles.

Morphology
Microparticles were observed under an optical microscope (Olympus AX70, Japan) equipped with a camera (Olympus DP70). DP Controller software (version 2.1.1) was used for taking pictures. Microparticle shape was also determined using a QICPIC analyzer (Sympatec GmbH, Clausthal-Zellerfeld, Germany). The analyzer was directly connected to the reactor and made measurements every 5 min during 60 min. The liquid with the capsules was pumped into the reactor and passed through the measuring cell, and images were captured and recorded. The analysis of the results provided 2D and 3D views of the particles, from which shape parameters were determined. The diameter of a circle of equal projection area (EQPC) was calculated: it corresponds to the diameter of a circle with the same area as the 2D image of the particle. As differently shaped particles may have the same EQPC, other parameters were used to characterize the particles. The sphericity was defined as the ratio between the EQPC perimeter and the real particle perimeter. The convexity provides information about the roughness of the particle: a particle with smooth edges has a convexity value of 1, whereas a particle with irregular edges has a lower convexity (Burgain et al., 2011). All tests were run in triplicate.

Mechanical stability
To investigate the mechanical stability of the alginate microparticles, individual capsules were compressed between two parallel plates. A rotational rheometer, Malvern Kinexus Pro (Malvern Instruments, Orsay, France), with a plate-and-plate (20 mm) geometry was used. A force-gap test was used to compress the microparticles, placed in a droplet of water, from 500 to 5 µm with a linear compression speed of 10 µm.s-1. The gap and the imposed normal force were measured simultaneously at the upper plate. Three replicates were considered for each type of microcapsule.

FTIR spectroscopy
Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectra of freeze-dried microparticles were acquired using a Tensor 27 mid-FTIR Bruker spectrometer (Bruker, Karlsruhe, Germany) equipped with an ATR accessory (128 scans, 4 cm-1 resolution, wavenumber range 4000-550 cm-1) and a DTGS detector. Spectral manipulations were performed using OPUS software (Bruker, Karlsruhe, Germany). All tests were run in triplicate.

Encapsulation of L. lactis

L. lactis survival and nisin activity
L. lactis, free or encapsulated in the different microcapsules, was placed in physiological water for 10 days at 30 °C. Bacterial survival and nisin activity inside and outside the microparticles were studied periodically during the storage period. To analyze bacterial survival and nisin activity inside the microparticles, 1 g of capsules was placed in a 0.1 M citrate solution to dissolve the hydrogel microspheres by calcium chelation.

Bacterial survival
Serial dilutions were made from the dissolved microparticles and from the physiological water, and then poured onto M17 agar. Plates were incubated for 24 hours at 30 ºC before colonies were counted. All tests were run six times.

Nisin activity
Micrococcus flavus DSM 1790, sensitive to nisin, was used to evaluate nisin activity.
Two successive M. flavus cultures in TSBYE medium (TSB: Biomerieux, Marcy l'Étoile, France; YE: Biokar Diagnostics, Beauvais, France) were made from cryotubes stored at -80 °C. The optical density (OD) at 660 nm of the culture was measured, then a dilution in TSAYE medium (TSAYE; Bacteriological Agar Type A: Biokar Diagnostics, Beauvais, France; Tween 80: Merck, Hohenbrunn, Germany) was performed to obtain a final absorbance at 660 nm of 0.01. 12 mL of this medium were poured into plates, which were placed at 4 °C for 2 h to allow agar solidification. Then, wells were hollowed out in the agar using a Durham tube, and 25 µL of liquefied microcapsules or of physiological water containing L. lactis were deposited in the wells. In parallel, a negative control (M17) was performed. Plates were incubated overnight at 4 °C, and then for 24 h at 37 °C. The inhibition diameters were measured. Results were expressed in cm and converted into nisin concentration (mg.mL-1) using a standard curve obtained from a commercial solution of nisin (Sigma-Aldrich, St Louis, USA); see the illustrative sketch below. All tests were run in triplicate.

Antimicrobial activity of microparticles

Bacterial strain
The culture of Listeria monocytogenes CIP 82110 was regenerated by transferring a loopful of stock culture into 10 mL of TSB and incubating at 37 ºC overnight. A 10 µL aliquot from the overnight culture was again transferred into 10 mL of TSB and grown at 37 ºC to the end of the exponential phase of growth. Subsequently, this appropriately diluted culture was used to inoculate the synthetic media containing L. lactis, free or encapsulated, in order to obtain a target inoculum of 10^2 CFU.mL-1.

Antimicrobial activity
A synthetic medium, TSBYE broth, was inoculated with L. monocytogenes and with L. lactis, free or encapsulated in the exponential state in alginate-xanthan capsules enriched with M17 supplemented with 0.5 % glucose. The medium was stored at 30 °C for 7 days. L. monocytogenes and L. lactis counts were examined both immediately after the inoculation and periodically during the storage period. Serial dilutions were made and then poured onto PALCAM agar (Biokar Diagnostics, Beauvais, France) and M17 agar plates. Plates were incubated for 48 hours at 37 ºC (PALCAM agar) or 24 hours at 30 °C (M17 agar) before colonies were counted. All tests were run in triplicate.

Preparation of the bioactive films
The film-forming aqueous dispersions (FFD) contained 4 % (w/w) of HPMC or corn starch, and glycerol as plasticizer. The hydrocolloid:glycerol mass ratio was 1:0.25 in every case. Polymers were dissolved in distilled water (pH 6.5) under continuous stirring (400 rpm) at 25 °C. Lactococcus lactis subsp. lactis ATCC 11454 was used for the preparation of the bioactive films. The selection of the strain was based on its antimicrobial activity, in particular its ability to produce nisin, a bacteriocin. The microbial culture was regenerated according to the methodology described above. Lactic acid bacteria, free or encapsulated, were incorporated by adding the bacterial cell preparation into the FFD. The ratio was fixed in order to obtain a final concentration of 3 log CFU/cm2 in the dry film. The FFD were then placed under magnetic stirring for 5 minutes. A casting method was used to obtain the polysaccharide films without lactic acid bacteria and the bioactive films. FFD were poured onto framed and levelled PET Petri dishes (85 or 140 mm diameter) and were dried at 25 ºC and 40 % relative humidity for approximately 48 hours.
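As a complement to the nisin assay above, the sketch below illustrates how inhibition-zone diameters can be converted into nisin concentrations through a standard curve. The actual calibration data of this work are not reported here, so the points are invented and a linear relation between diameter and the logarithm of concentration is assumed, which is the usual form for agar-diffusion assays.

```python
# Illustrative sketch: converting inhibition-zone diameters (cm) into nisin
# concentration (mg.mL-1) via a standard curve. Calibration points are
# hypothetical; a log-linear diameter/concentration relation is assumed.
import numpy as np

calib_conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0])  # mg.mL-1 (assumed)
calib_diam = np.array([0.8, 1.3, 1.6, 2.1, 2.4])    # cm (assumed)

# Fit: diameter = slope * log10(concentration) + intercept
slope, intercept = np.polyfit(np.log10(calib_conc), calib_diam, 1)

def nisin_from_diameter(diameter_cm):
    """Invert the calibration to estimate the nisin concentration (mg.mL-1)."""
    return 10 ** ((diameter_cm - intercept) / slope)

if __name__ == "__main__":
    for d in (1.0, 1.4, 2.2):  # example inhibition diameters (cm)
        print(f"diameter {d:.1f} cm -> ~{nisin_from_diameter(d):.3f} mg.mL-1 nisin")
```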
Film thickness was controlled by pouring the amount of FFD that provides a surface density of solids in the dry films of 56 g/m2 in all cases. Dry films were peeled off the casting surface and preconditioned in desiccators at 5 ºC and 75 % relative humidity (RH) prior to testing. These values of temperature and RH were chosen to simulate the storage conditions of refrigerated coated products.

Film characterization

Moisture content and thickness
After equilibration, films were dried in triplicate at 60 °C for 24 h in a natural convection oven and for a further 24 h in a vacuum oven in order to determine their moisture content. Measurements of film thickness were carried out using an electronic digital micrometer (0-25 mm, 1 µm).

Water vapour permeability
Water vapour permeability (WVP) was measured on dry film discs, equilibrated at 75 % RH and 5 ºC, according to the gravimetric method described in the AFNOR NF H00-030 standard (1974). The dry film was sealed in a glass permeation cell containing silica gel, a desiccant. The glass permeation cells were 5.8 cm (i.d.) × 7.8 cm (o.d.) × 3.6 cm deep, with an exposed area of 26.42 cm2. The permeation cells were placed in a chamber at controlled temperature (5 °C) and RH (75 %) maintained by ventilation. The water vapour transport was determined from the weight gain of the cell. After 30 min, steady-state conditions were reached and weighings were made. To calculate the WVTR, the slopes of weight gain as a function of time in the steady-state period were determined by linear regression. For each type of film, WVP measurements were replicated three times and WVP was calculated according to Mc Hugh et al. (1993); a computational sketch of these calculations is given below.

Oxygen permeability
The oxygen permeability (OP) of the films was measured in triplicate using an oxygen permeation measurement system (Systech Illinois 8100 Oxygen Permeation Analyser, France) at 20 °C and 75 % RH (ASTM, 2005). A sample of the film was placed in a test cell and pneumatically clamped in place. Films were exposed to a pure nitrogen flow on one side and a pure oxygen flow on the other side. An oxygen sensor read the permeation through the barrier material, and the rate of permeation, or oxygen transmission rate, was calculated taking into account the amount of oxygen and the area of the sample. Oxygen permeability was calculated by dividing the oxygen transmission rate by the difference in oxygen partial pressure between the two sides of the film, and multiplying by the average film thickness.

Mechanical properties
A Lloyd Instruments universal testing machine (AMETEK, LRX, U.K.) was used to determine the tensile strength (TS), elastic modulus (EM), and elongation (E) of the films, according to ASTM standard method D882 (2001). EM, TS, and E were determined from the stress-Hencky strain curves, estimated from force-distance data obtained for the different films (2.5 cm wide and 10 cm long). At least six replicates were obtained for each formulation. Equilibrated film specimens were mounted in the film-extending grips of the testing machine and stretched at a deformation rate of 50 mm/min until breaking. The relative humidity of the environment was held constant at 53 % during the tests, which were performed at 25 °C.

Optical properties
The transparency of the films was determined through the surface reflectance spectra in a CM-5 spectrocolorimeter (Konica Minolta Co., Tokyo, Japan). Measurements were taken from three samples of each formulation, using both a white and a black background.
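A minimal sketch of the barrier-property calculations described above is given here, assuming the raw data (cell weight gain over time, oxygen transmission rate, film thickness) are available. The thickness, the water-vapour pressure difference across the film and the oxygen partial-pressure difference used in the example are assumptions, and the full correction of Mc Hugh et al. (1993) is not reproduced.

```python
# Illustrative sketch of the WVP and OP calculations; numerical inputs are hypothetical.
import numpy as np

def wvtr_from_weight_gain(time_h, mass_g, area_m2):
    """WVTR (g.h-1.m-2): slope of the steady-state weight-gain curve, obtained
    by linear regression, divided by the exposed film area."""
    slope_g_per_h, _ = np.polyfit(time_h, mass_g, 1)
    return slope_g_per_h / area_m2

def wvp(wvtr, thickness_m, delta_p_pa):
    """Water vapour permeability = WVTR * thickness / Δp (g.m-1.h-1.Pa-1)."""
    return wvtr * thickness_m / delta_p_pa

def oxygen_permeability(otr, thickness_m, delta_p_o2_pa):
    """Oxygen permeability = oxygen transmission rate * thickness / ΔpO2."""
    return otr * thickness_m / delta_p_o2_pa

if __name__ == "__main__":
    # Hypothetical steady-state weight gain of the permeation cell
    t = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])               # h
    m = np.array([0.012, 0.024, 0.035, 0.047, 0.059, 0.071])   # g
    area = 26.42e-4            # m2, exposed area given in the text
    thickness = 60e-6          # m, assumed average film thickness
    dp_water = 654.0           # Pa, assumed Δp at 5 °C between 75 % and 0 % RH
    wvtr = wvtr_from_weight_gain(t, m, area)
    print(f"WVTR = {wvtr:.2f} g.h-1.m-2")
    print(f"WVP  = {wvp(wvtr, thickness, dp_water):.3e} g.m-1.h-1.Pa-1")
    # Hypothetical OTR of 35 cm3.m-2.day-1 with pure O2 vs pure N2 (Δp ~ 1 atm)
    print(f"OP   = {oxygen_permeability(35.0, thickness, 101325.0):.3e} cm3.m-1.day-1.Pa-1")
```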
The transparency was determined by applying the Kubelka-Munk theory for multiple scattering to the reflection spectra. As each light flux passes through the layer, it is affected by the absorption coefficient (K) and the scattering coefficient (S). Transparency was calculated, as indicated by (Hutchings), from the reflectance of the sample layer on a white background of known reflectance and on an ideal black background, through the internal transmittance (Ti). The colour coordinates of the films, L*, C*ab (Equation 1) and h*ab (Equation 2), from the CIELAB colour space were determined, using the D65 illuminant and the 10º observer and taking into account R∞ (Equation 3), which corresponds to the reflectance of an infinitely thick layer of the material.

C*ab = sqrt(a*^2 + b*^2)     (Equation 1)
h*ab = arctg(b*/a*)          (Equation 2)
R∞ = a - b                   (Equation 3)

Finally, the whiteness index (WI) was calculated by applying Equation 4.

FTIR analysis
ATR-FTIR spectra of freeze-dried alginate-pectin microbeads without bacteria and of preconditioned polysaccharide films were recorded with a Tensor 27 mid-FTIR Bruker spectrometer (Bruker, Karlsruhe, Germany) equipped with an ATR accessory. 128 scans were used for both reference and samples, between 4000 and 400 cm-1 at 4 cm-1 resolution. Spectral manipulations were then achieved using OPUS software (Bruker, Karlsruhe, Germany). Raw absorbance spectra were smoothed using a 13-point smoothing function. After elastic baseline correction using 200 points, spectra were centred and normalized. All tests were run at least in triplicate.

Antimicrobial activity of the films against Listeria monocytogenes

Bacterial strain
The stock culture of Listeria monocytogenes CIP 82110 was regenerated by transferring a loopful into 10 mL of TSB and incubating at 37 ºC overnight. A 10 µL aliquot from the overnight culture was again transferred into 10 mL of TSB and grown at 37 ºC to the end of the exponential phase of growth. Subsequently, this appropriately diluted culture was used for the inoculation of the agar plates in order to obtain a target inoculum of 10^2 CFU/cm2.

Antimicrobial effectiveness of films
The methodology followed for the determination of the antimicrobial effectiveness of the films was adapted from (Kristo et al.). Aliquots of Tryptone Soy Agar (TSA, Biokar Diagnostics, Beauvais, France) (20 g) were poured into Petri dishes. After the culture medium solidified, a properly diluted overnight culture of L. monocytogenes was inoculated on the surface, and the different films (containing L. lactis or not), of the same diameter as the Petri dishes, were placed onto the inoculated surfaces. Plates were then covered with Parafilm to avoid dehydration and stored at 5 ºC for 12 days. L. monocytogenes and L. lactis counts on TSA plates were examined both immediately after the inoculation and periodically during the storage period. The agar was removed aseptically from the Petri dishes and placed in a sterile plastic bag with 100 mL of tryptone soy water (Biokar Diagnostics, Beauvais, France). The bag was homogenized for 2 minutes in a Stomacher 400 blender (Interscience, Saint-Nom-la-Bretèche, France). Serial dilutions were made and then poured onto M17 agar and PALCAM agar. Plates were incubated for 48 hours at 37 ºC before colonies were counted.
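Returning to the optical properties, the sketch below illustrates Equations (1)-(3). The Kubelka-Munk internal-transmittance expression included with them is the form commonly used alongside these equations (Hutchings, 1999) and is an assumption here, since only Equations (1)-(3) are written out above; the reflectance values are invented.

```python
# Illustrative sketch of the colour calculations (Eqs. 1-3) and of the commonly
# associated Kubelka-Munk internal transmittance. R, R0 and Rg are the reflectances
# of the film on the white background, of the film on the black background, and of
# the white background itself (fractions of 1); all values below are hypothetical.
import math

def chroma(a_star, b_star):
    """C*ab = sqrt(a*^2 + b*^2)  (Eq. 1)."""
    return math.hypot(a_star, b_star)

def hue(a_star, b_star):
    """h*ab = arctg(b*/a*), quadrant-aware, in degrees  (Eq. 2)."""
    return math.degrees(math.atan2(b_star, a_star))

def kubelka_munk(R, R0, Rg):
    """Return (Ti, R_inf) with R_inf = a - b (Eq. 3) and Ti = sqrt((a - R0)^2 - b^2)."""
    a = 0.5 * (R + (R0 - R + Rg) / (R0 * Rg))
    b = math.sqrt(a**2 - 1)
    Ti = math.sqrt((a - R0) ** 2 - b**2)
    return Ti, a - b

if __name__ == "__main__":
    print(f"C*ab = {chroma(-1.2, 4.5):.2f}, h*ab = {hue(-1.2, 4.5):.1f} deg")
    Ti, R_inf = kubelka_munk(R=0.80, R0=0.15, Rg=0.85)
    print(f"Ti = {Ti:.3f}, R_inf = {R_inf:.3f}")
```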
All tests were run in duplicate.

Statistical analysis
A statistical analysis of the data was performed through a one-way analysis of variance.

(9 % sodium chloride) or with sterile M17 broth supplemented with 0.5 % D(+) glucose. 10^5 CFU.mg-1 of L. lactis were inoculated into the microspheres. The physico-chemical characterization of the microspheres consisted in following the diameter, sphericity and convexity of the beads from 0 to 7 days at 30 °C. All microspheres were produced under the same conditions, but the bead size was shown to depend on the viscosity of the injected fluid. After 7 days, the A/P beads 100/0, 75/25 and 50/50 were observed to be more stable than the other beads. The survival and growth of L. lactis in the microspheres over 7 days at 30 °C were measured. L. lactis survival was evaluated with respect to three factors: the physiological state of L. lactis at the time of encapsulation, the internal composition of the microspheres (physiological water or M17 enriched with 0.5 % glucose), and the A/P ratio. The L. lactis population increased more rapidly with glucose-enriched M17 than with the physiological water used for polymer dissolution. The polymer ratio and the physiological state of the bacteria did not appear to have a significant influence. It was observed that alginate/pectin beads provide better protection of L. lactis against environmental factors than beads made with pure alginate or pure pectin. The A/P ratio 75/25 gave the best results for maintaining a stable population within the beads, owing to the more stable mechanical properties of these microspheres. Nisin production and activity were determined for the different A/P ratios in physiological water at 30 °C for 7 days. The results show that several factors act significantly, such as the physiological state and the composition of the matrix in terms of polymers and nutrients. The best results were obtained when L. lactis was encapsulated in exponential phase with A/P 75/25 in the presence of glucose-enriched M17. This activity was measured via the inhibition of L. monocytogenes growth throughout the storage period. The reduction of the L. monocytogenes population was greater with encapsulated bacteria than with free cells. In conclusion, the best results were overall obtained with microspheres (ratio 75/25) enriched with M17 supplemented with 0.5 % glucose and containing L. lactis in exponential phase.

Introduction
The interest in the application of lactic acid bacteria (LAB) for the prevention of food spoilage and foodborne pathogen growth has increased in the last twenty years (Scannell et al., 2000). Many studies have shown that LAB can reduce the presence of Listeria monocytogenes in meat and seafood (Budde et al., 2003; Jacobsen et al., 2003; Tahiri et al., 2009) or inhibit other foodborne pathogens such as Escherichia coli, Pseudomonas aeruginosa, Salmonella Typhimurium, Salmonella Enteritidis and Staphylococcus aureus (Trias et al., 2008). Several mechanisms, such as lactic acid production, competition for nutrients or production of antimicrobial compounds, explain the inhibition of spoilage or pathogenic microorganisms by LAB. Among LAB, L. lactis subsp. lactis is particularly used for food preservation because of its ability to produce a bacteriocin, nisin, to control spoilage and pathogenic bacteria.
However, possible interactions between food components and LAB decrease their effectiveness. The immobilization of LAB by encapsulation using natural polymers such as proteins or polysaccharides appears to be an interesting strategy to protect the strain and modulate nisin release. Encapsulation of bacteria in calcium alginate beads is one of the most studied systems for probiotic immobilization and protection (Léonard et al., 2014; Madziva et al., 2005; Polk et al., 1994; Smrdel et al., 2008). Some studies focus on the interest of designing composite systems by associating several biopolymers, such as pectin and alginate, to control the release of active components (Jaya et al., 2008). The authors reported that an increase in pectin reduced the gel barrier and increased the percentage of drug released. Moreover, the morphology of alginate-pectin microcapsules showed a porous microstructure, which also facilitates the release of active components.

Sodium alginate is a water-soluble anionic polysaccharide, mainly found in the cell walls of brown algae, which can also be isolated from Pseudomonas bacteria (Pawar and Edgar, 2012). This natural polymer possesses several attractive properties such as good biocompatibility, wide availability, low cost, and a simple gelling procedure under mild conditions. Alginate composition is variable and consists of homopolymeric blocks alternating 1,4-linked β-D-mannuronic acid (M) and α-L-guluronic acid (G) residues. The physical properties of alginate depend on its composition, sequence and molecular weight. Gel formation is driven by interactions between G-blocks, which associate to form firmly held junctions through divalent cations. In addition to G-blocks, MG blocks also participate by forming weaker junctions. Pectin is one of the main structural water-soluble polysaccharides of plant cell walls. It is commonly used in the food industry as a gelling and stabilizing agent. Basically, pectins are polymers of (1-4)-linked, partially methyl-esterified α-D-galacturonic acid (Synytsya et al., 2003). Pectin gelation is driven by the interaction between the polygalacturonate chains and divalent cations and is described by the egg-box model, in which the divalent cations are thought to be held in the interstices of adjacent helical polysaccharide chains (Braccini and Pérez, 2001). Therefore, the objectives of the present study were (a) to develop novel alginate-pectin hydrogel microspheres for the microencapsulation of L. lactis subsp. lactis, a lactic acid bacterium, by dripping using the vibrating technology; (b) to analyze the physicochemical properties of the composite microbeads; (c) to evaluate the effect of the polymer ratio and of the physiological state of the encapsulated bacteria (exponential or stationary phase) on microbial survival, nisin release and antilisterial activity; and (d) to determine whether a nutritional enrichment of the hydrogel matrix by addition of synthetic medium (M17) supplemented with 0.5 % glucose can improve the results.

Results and Discussion

Physico-chemical characterization of microbeads

Shape and size
Microscopic images of the A/P composite microbeads are presented in Fig. 22. Microbeads were fairly regular and spherical. Bead diameter, sphericity and convexity at day 0 and after 7 days at 30 °C are reported in Table 7. In general, the size and shape of microspheres depend on the intrinsic properties of the injected polymer solutions, such as viscosity, density and surface tension (Chan et al., 2009).
Initially, little difference was observed in terms of microbead size and sphericity, but convexity increased clearly with the pectin content in the matrix. After 7 days, significant differences were observed in size and convexity. Convexity provides information about the roughness of the particle. Convexity values are between 0 and 1: a particle with smooth edges has a convexity value of one, whereas a particle with irregular edges has a lower convexity (Burgain et al., 2011). After 7 days at 30 °C, sphericity also decreased with pectin content. A sphericity value of "1" corresponds to a perfect sphere, and a particle with a sphericity close to "0" is highly irregular. Therefore, sphericity is a good way to describe particle shape deviation. Alginate contributed largely to bead sphericity, as observed by (Sandoval-Castilla et al.). As observed in the present work, Díaz-Rojas et al. (2004) reported that the use of both polymers, pectin and alginate, in composite matrix beads induces a loss of sphericity as the proportion of pectin in the matrix increases, as a consequence of the weaker mechanical stability of the calcium-pectinate network compared to that of calcium-alginate. Particularly significant changes occurred for the pure pectin microbeads. A swelling phenomenon was responsible for the changes in size, as no aggregation occurred. The 100 % pectin microbeads were distorted and became more irregular. From the results in Table 7, it was concluded that the beads A/P 100/0, 75/25 and 50/50 were more stable than the beads with higher pectin content.

Mechanical stability
The functionality of the microcapsules is closely related to their chemical and mechanical stability. In fact, microspheres are sensitive to deformations that may lead to their rupture or to an undesirable early release of their contents. In order to evaluate the mechanical stability of the prepared systems, microbeads were compressed between two parallel plates. As shown in Fig. 23, the normal force (N) versus the gap distance (mm) is plotted as double-logarithmic compression curves (Degen et al., 2011). As expected, for all the systems the normal force increases with decreasing gap, because the bead becomes more and more compressed. However, some differences in the force evolution were observed. In fact, the initial force at 0.2 mm differed between the systems, which could be related to the bead size. Some curves showed a region of force diminution at small gaps (<0.005 mm), which could be related to potential bead rupture. Nevertheless, due to the low speed used in these experiments, microbead break-up was not clearly observed. For comparison purposes, at an intermediate gap (0.01 mm) the beads 25/75 and 75/25 showed better stability than the other systems, where a force of 0.5 N is needed to maintain the beads at the corresponding gap in Fig. 23. The differences in mechanical stability could be related to a potential synergy between alginate and pectin (Walkenström et al., 2003). This synergism is attributed to a heterogeneous association of the G blocks of alginate and the methyl-ester regions of pectin (Oakenfull et al., 1990).
On the other hand, molecular modelling showed that the G blocks and methyl-esterified polygalacturonic acid ribbons could pack together in parallel twofold crystalline arrays (Thom et al., 1982). Practically all the force/gap curves showed a region where the force decreased at small gaps (<0.005 mm), which could be related to potential bead rupture in a compressed state. For alginate beads, the start of hydrogel breakdown is associated with the rupture of a single polymer chain (Zhang et al., 2007). It is interesting to note that the bursting zone was less obvious for the composite alginate/pectin beads, which are stabilized by alginate G and MG junctions and by pectin polygalacturonic junctions. The formation of an IPN (Inter-Penetrating Network) could lead to stiffer microbeads.

FTIR spectroscopy
Fig. 24a shows the FTIR spectra of freeze-dried alginate, pectin and alginate/pectin mixtures, and Fig. 24b the spectra of the corresponding freeze-dried microbeads. As shown in Fig. 24a, the alginate spectrum (100/0) displayed two vibrations in the infrared spectrum due to the carboxylate group: an antisymmetric stretch at 1600 cm-1 and a symmetric stretch at 1412 cm-1 (Sartori et al., 1997). The pectin spectrum (0/100) also displayed a peak between 1720 and 1760 cm-1. The region between 1000 and 1140 cm-1 corresponds to the stretching vibrations of (C-OH) side groups and to the (C-O-C) glycosidic bond vibration (Kamnev et al., 1998). The absorption bands between 1100 and 1200 cm-1 arise from ether (R-O-R) and cyclic C-C bonds in the ring structure of pectin molecules. The regions between 1590 and 1600 cm-1 are due to aromatic ring stretching. The region between 1600 and 1800 cm-1 is of special interest and is usually used to compare pectin samples (Synytsya et al., 2003). This spectral region reveals the existence of two bands, at 1620-1650 and 1720-1760 cm-1, from free and esterified carboxyl groups, respectively. The full assignment of the infrared bands of alginate and pectin is presented in Table 8. As expected, the mixed alginate/pectin solutions (75/25, 50/50 and 25/75) displayed the typical bands of the two corresponding biopolymers without significant shifts. However, some differences in band intensities were noticed. In particular, the 75/25 mixture showed a significant increase of the bands corresponding to the carboxylate groups of pectin and alginate (1600 and 1400 cm-1) and to the glycosidic pectin vibrations (1000-1300 cm-1). The IR spectra of the calcium alginate beads (100/0) (Fig. 24b) displayed the same characteristic peaks as sodium alginate, with a greater intensity of the bands corresponding to the carboxylate groups (1600 and 1412 cm-1) (Pereira et al., 2003). The infrared spectrum of the calcium pectinate microbeads (0/100) showed an intensity increase of the band corresponding to the carboxylate groups (1620 cm-1) and the appearance of a narrow peak at 1400 cm-1, due to the interactions between the galacturonic residues and divalent cations (Braccini and Pérez, 2001). The alginate/pectin blend beads showed the same typical bands of alginate and pectin, with some differences as a function of the alginate/pectin ratio, especially in the wavenumber range between 1000 and 1200 cm-1. In fact, the FTIR analysis of the 75/25 beads showed the same two bands (1020 and 1060 cm-1) as alginate 100/0. The increase of the pectin amount resulted in the appearance of a shoulder at 1145 cm-1 for 50/50 and then in a well-defined peak for the 25/75 system.
Moreover, the pH values of the studied mixtures 100/0, 0/100, 75/25, 50/50 and 25/75 were 7.04, 3.30, 4.34, 3.94 and 3.60, respectively. Alginate and pectin can form synergistic mixed gels at low pH values (but >4) in the absence of calcium, with relatively slow gelation kinetics (>200 min) (Walkenström et al., 2003). In our case the alginate/pectin mixture was not allowed to stand and the formation of a composite gel was not observed; however, we can suppose the possible formation of cooperative bonds between alginate and pectin in the mixtures before the encapsulation step, which could explain the increase of the intensity of the characteristic bands and therefore the better physico-chemical properties. Otherwise, the freeze-drying step of the hydrogel microspheres before the FTIR study could also initiate or reinforce the potential interactions between the two polymers.

L. lactis survival
The suitability of the different matrices for L. lactis survival and growth inside and outside the hydrogel microspheres was studied in physiological water for 7 days at 30 °C (Fig. 25 and 26). Bacterial viability changed significantly with three factors: the A/P ratio, the physiological state of L. lactis during bead formation, and the internal composition of the microsphere (physiological water or M17 enriched with 0.5 % glucose). Non-encapsulated L. lactis was used as control (Fig. 27). The bacterial population decreased significantly over the storage period at 30 °C, independently of the physiological state of the strain at the beginning of the assay. The addition of nutrients within the microbeads led to significant differences in terms of L. lactis counts inside and outside the beads. As expected, the population decreased more rapidly with physiological water than with glucose-enriched M17 as the medium used for polymer dissolution, independently of the polymer ratio and of the L. lactis physiological state. From the 5th day of storage, the bacterial population decreased dramatically inside the microbeads: a reduction of approximately 50 % was observed when the strain was encapsulated in exponential phase. This decrease was less marked with the strain in stationary phase. Concerning the bacterial physiological state at the moment of encapsulation (stationary or exponential phase), L. lactis counts inside the microbeads after 7 days of storage were higher when LAB were encapsulated in exponential phase than in stationary phase. These results highlight the importance of two parameters, the physiological state and the presence of nutrients. These factors certainly impact cellular stress and therefore bacterial survival. Finally, the composition of the matrix (A/P ratio) significantly modified the bacterial population inside and outside the hydrogel microspheres. The use of pectin led to significant variations, especially when physiological water was used for polymer dissolution. Composite A/P hydrogel microspheres tend to present interesting properties compared with pure alginate or pectin beads. (Sandoval-Castilla et al.) observed that calcium-alginate-pectin bead matrices provided Lactobacillus casei with better protection against adverse environmental factors than matrices made with pure alginate or pectin. The lack of alginate in the beads reduced the protective effect, suggesting that both polymers, alginate and pectin, form a structured trapping matrix that is more resistant, especially to acids.
In this study, the 75/25 microbeads gave the best results for maintaining microbial counts within the beads. This could be related to their mechanical properties (Fig. 23) or to a potential reduction of the pore size of the composite system, which would increase the retention of the encapsulated bacteria. The best mechanical properties were found for A/P 75/25, for which the beads were more stable. With this matrix composition, LAB are certainly better retained in the microbeads. Outside the hydrogel microspheres, L. lactis counts changed significantly with the polymer ratio: the bacterial population was higher for the 0/100 and 25/75 microbeads. Alginate has more linear and organized chains than pectin, and the reticulation links were more efficient with calcium ions. This higher cross-linking for alginate increased the cohesive forces between chains (Silva et al., 2009) and hindered bacterial release. In addition, a greater swelling can occur with pectin, also increasing the release of LAB. Previous studies reported a greater swelling of pectin films, compared to alginate films, as a lower cross-linking extent allowed more water absorption (Silva et al., 2009; Sriamornsak and Kennedy, 2008).

Nisin activity
Nisin activity inside and outside the microbeads was determined for the different A/P ratios in physiological water at 30 °C for 7 days (Tables 9 and 10). As observed for bacterial survival, the physiological state of the strain as well as the matrix composition (polymer ratios and addition of nutrients) were key factors. When the bacteria were encapsulated in the stationary state, a concentration of active nisin was detected inside the microbeads at day 0, because nisin had been produced before the encapsulation step. However, when L. lactis was encapsulated during the exponential phase of the growth curve, no nisin was detected initially in the beads, but after 1 day active nisin was present. Nisin production occurs during bacterial growth, and the amount of peptide adsorbed on the cell surface is higher during the stationary phase than during the exponential phase of the bacterial growth curve. Therefore, the physiological state of the strain at the time of encapsulation impacts the initial concentration of active nisin in the microbeads. During the storage period, no nisin was detected after 3 days of storage inside microspheres prepared with physiological water, whatever their polymer composition. The enrichment of the microbead internal medium with glucose-enriched M17 improved these results. After a storage period of 7 days, a significant amount of antimicrobial peptide was detected, independently of the physiological state of the bacteria and of the polymer ratio used. Previous studies also reported changes in bacteriocin concentration with the composition of the nutrient broth (Parente and Ricciardi, 1999). The A/P ratio did not significantly affect the concentration of active nisin inside the microspheres. However, the release properties of the microbeads were modified by the A/P ratio. The mixed matrix (alginate-pectin) showed a better suitability for nisin production than pure alginate or pectin microbeads, due to intermediate diffusion properties and stability of the bead wall. After a storage period of 3 days at 30 °C, two ratios seemed interesting independently of the other factors (physiological state of the strain and addition of nutrients): 50/50 and 25/75.
In conclusion, the physiological state of the bacteria during the encapsulation process and the composition of the microbeads (A/P ratio, enrichment of the internal medium with nutrients) were determining factors for both bacterial viability and bacteriocin activity, which can be related to nutritional or cellular stress effects. Of the several matrices tested, A/P 75/25 with glucose-enriched M17 gave the best results when L. lactis was encapsulated in the exponential state.

[Tables 9 and 10: nisin activity (mg.mL-1) inside and outside the microbeads for the different A/P ratios (100/0, 75/25, 50/50, 25/75, 0/100) and internal media (physiological water or M17 enriched with 0.5 % glucose) during storage; the tabular layout was not preserved in this version.]
a, b, c: different letters in the same column indicate significant differences among samples (p < 0.05).
x, y, z: different letters in the same row indicate significant differences among times for a same sample (p < 0.05).

Antimicrobial activity
As discussed above, the best system to protect L. lactis and allow nisin release is the use of composite microbeads (A/P 75/25; internal medium: glucose-enriched M17) with LAB in the exponential state. The possible antilisterial effect of this system at 30 °C was determined in TSB medium and is shown in Figure 28a. Non-encapsulated L. lactis was used as control. The Listeria monocytogenes population increased from 2.8 to 7.9 log CFU.mL-1 at the end of the storage period. As expected, in the presence of L. lactis, free or encapsulated, a complete inhibition of L. monocytogenes growth was observed during the whole storage period. The mechanisms underlying these antimicrobial effects have not been studied, but they may result from a combination of several factors such as the production of organic acids, hydrogen peroxide, enzymes, lytic agents and other antimicrobial peptides, or bacteriocins (Alzamora et al.). The antimicrobial properties of L. lactis were not limited by the encapsulation system developed in this study. Moreover, from the 5th day of storage at 30 °C, the reduction of L. monocytogenes counts was higher with the microbeads. These data can certainly be explained by a difference in LAB viability. As shown in Fig. 28b, the L. lactis population grew immediately after the incorporation of the strain into the TSB medium. No differences between non-encapsulated bacteria and microbeads were observed until the 3rd day of storage. From the 5th day, L. lactis counts decreased to reach 8.8 and 6.9 log CFU.mL-1 for the encapsulated and free strain, respectively. A loss of elementary nutrients in the synthetic medium explains these results.

Figure 27: Survival of non-encapsulated Lactococcus lactis during a 7-day storage period in physiological water at 30 °C (stationary state in solid line and exponential state in dashed line). Mean values and standard deviations.

Conclusion
Microencapsulation of a nisin-producing LAB, L. lactis subsp. lactis, was performed in composite alginate/pectin hydrogel microspheres. The physical properties and the entrapment efficiency of the alginate/pectin beads are greatly affected by the biopolymer ratio used. The best mechanical properties were found for alginate/pectin 75/25; these beads were more stable and allowed the best release of nisin during the storage period. The preparation of an alginate/pectin inter-penetrating network resulted in a better control of the physicochemical properties of the composite microbeads and, potentially, of the hydrogel mesh size. The physiological state of the bacteria during the encapsulation process and the composition of the microbeads (A/P ratio, enrichment of the internal medium with nutrients) were determining factors for both bacterial viability and bacteriocin activity, which can be related to nutritional or cellular stress effects. The best results were obtained with composite microbeads (75/25) enriched with M17 supplemented with 0.5 % glucose.

References
- Budde, B. B., Hornbaek, T., Jacobsen, T., Barkholt, V., & Koch, A. G. (2003). Leuconostoc carnosum 4010 has the potential for use as a protective culture for vacuum-packed meats: culture isolation, bacteriocin identification, and meat application experiments. International Journal of Food Microbiology, 83(2), 171-184.
- Burgain, J., Gaiani, C., Linder, M., & Scher, J. (2011). Encapsulation of probiotic living cells: From laboratory scale to industrial applications. Journal of Food Engineering, 104(4), 467-483.
- Cellesi, F., Weber, W., Fussenegger, M., Hubbell, J. A., & Tirelli, N. (2004). Towards a fully synthetic substitute of alginate: Optimization of a thermal gelation/chemical crosslinking scheme ("tandem" gelation) for the production of beads and liquid-core capsules. Biotechnology and Bioengineering, 88(6), 740-749.
- Degen, P., Leick, S., Siedenbiedel, F., & Rehage, H. (2011). Magnetic switchable alginate beads. Colloid and Polymer Science, 290(2), 97-106.
- Díaz-Rojas, E. I., Pacheco-Aguilar, R., Lizardi, J., Argüelles-Monal, W., Valdez, M. A., Rinaudo, M., & Goycoolea, F. M. (2004). Linseed pectin: gelling properties and performance as an encapsulation matrix for shark liver oil. Food Hydrocolloids, 18
- Sandoval-Castilla, O., Lobato-Calleros, C., García-Galindo, H. S., Alvarez-Ramírez, J.

Introduction
The application of lactic acid bacteria (LAB) as a biopreservation strategy has attracted increasing interest in the last decades. LAB are considered as GRAS (Generally Recognized As Safe) and can inhibit the growth of different bacteria, yeasts and fungi through the production of organic acids, hydrogen peroxide, enzymes, defective phages, lytic agents and antimicrobial peptides, or bacteriocins (Alzamora et al.). Among the pathogens present in foodstuffs, Listeria monocytogenes remains one of the major problems, particularly in dairy products. Previous studies have already proved the antilisterial efficacy of LAB in model systems (Antwi et al., 2008), in dairy products (Liu et al., 2008), in seafood products (Concha-Meyer et al., 2011), as well as in meat products (Maragkoudakis et al., 2009). To guarantee food safety, the incorporation of LAB into food packaging appears to be an interesting novel approach, but some recent studies reported problems of LAB viability (Sánchez-González et al., 2013; Sánchez-González et al.).
The use of encapsulation techniques to protect LAB before their addition to bioactive films could be an interesting approach to limit this phenomenon. Indeed, microencapsulation methods permit the entrapment of microbial cells within particles based on different materials and their protection against unfavourable external conditions (Champagne et al.; Zuidam & Shimoni, 2010). Different factors such as the encapsulation method, the type and concentration of the materials used, the particle size and porosity, or the type of microparticle (bead, capsule, composite, coating layer, ...) affect the effectiveness of the bacterial protection (Ding & Shah, 2009). Several biopolymers have been studied for encapsulation: alginate, pectin, κ-carrageenan, xanthan gum, gellan gum, starch derivatives, cellulose acetate phthalate, casein, whey proteins and gelatin. Alginate has been widely used as a microencapsulation material, as it is non-toxic, biocompatible, and cheap (Jen et al.; Léonard et al.; Léonard et al., 2014). Sodium alginate (SA) composition is variable and consists of homopolymeric and heteropolymeric blocks alternating 1,4-linked β-D-mannuronic acid (M) and α-L-guluronic acid (G) residues (Pawar & Edgar, 2012). The physical properties of alginate depend on its composition, sequence and molecular weight (Pawar & Edgar, 2012). Gel formation is driven by interactions between G-blocks, which associate to form firmly held junctions through divalent cations. In addition to G-blocks, MG blocks also participate by forming weaker junctions (Pawar & Edgar, 2012). Some studies focus on the interest of designing composite systems by associating several biopolymers, such as alginate and xanthan gum, to control the release of active components (Wichchukit et al.). Xanthan gum (XG) is an extracellular anionic polysaccharide secreted by Xanthomonas campestris. It is a complex polysaccharide consisting of a primary chain of β-D-(1,4)-glucose backbone, which carries a branching trisaccharide side chain comprised of β-D-(1,2)-mannose attached to β-D-(1,4)-glucuronic acid and terminating in a β-D-mannose (Elçin; Goddard et al.). Recently, XG has been combined with SA in beads to preserve LAB viability and modulate release properties. This could be due to the molecular interaction between SA and XG, which leads to the formation of a complex matrix structure (Fareez et al., 2015; Pongjanyakul et al.). The aim of the present study was to develop novel SA-XG microspheres that can enhance the stability of L. lactis and the release of nisin during storage, for future food packaging applications. LAB are usually present inside beads or in the core of capsules.
One of the originalities of this study was to immobilize the bacteria in the SA membrane of the capsule and to use the core as a nutrient pool allowing gradual bacterial growth. The physico-chemical properties of the microcapsules were studied. The effect of the bacterial physiological state during the encapsulation step (exponential or stationary phase) and of a possible enrichment of the aqueous core with nutrients (M17 supplemented with 0.5 % glucose vs physiological water) on bacterial survival, nisin release and antilisterial activity was studied.

Results and Discussion

Physico-chemical characterization of microcapsules

Shape and size
A microscopic image of freshly prepared SA-XG microcapsules is presented in Fig. 29. The capsule was fairly spherical and the aqueous core was centred. Microcapsule size, sphericity and convexity at day 0 and after 7 days at 30 °C are reported in Table 11. For a given capsule production procedure (given nozzle diameter, vibration frequency and extrusion flow rate), the average diameter was shown to depend on the viscosity of the injected fluid (Cellesi et al., 2004). In this study the viscosity was set so as to obtain rather spherical particles. The sphericity and convexity results were in accordance with the microscopic observations and indicated that freshly prepared capsules were rather spherical, with an irregular surface. A sphericity value of "1" corresponds to a perfect sphere, and a particle with a sphericity close to "0" is highly irregular. Convexity provides information about the roughness of the particle. Convexity values are between 0 and 1: a particle with smooth edges has a convexity value of one, whereas a particle with irregular edges has a lower convexity (Burgain et al., 2011). All the measured parameters remained constant during the storage period; the microcapsules thus remained stable under the conditions tested in this study.

SA capsules are sensitive to deformations that may lead to their rupture or to an undesirable early release of their contents. In order to evaluate the mechanical stability of the prepared systems, individual microcapsules were compressed between two parallel plates. Fig. 30 reports the plot of the normal force (N) versus the displacement (µm) for SA-XG microcapsules prepared with physiological water and with M17 enriched with glucose. The microcapsules did not show a rupture point (a maximum peak force followed by a dramatic force decrease), even when the capsules were compressed down to a thickness of around 5 µm from an original diameter of 500 µm. The compression curves showed the same profiles, which means that the aqueous-core composition of the microcapsules did not influence their compression profile. Under compression, the alginate physical cross-links could restructure in a denser fashion, releasing the excess water and resulting in a volume reduction (Cellesi et al., 2004). The water expulsion could also be accentuated by the low speed used during the compression experiments (10 µm.s-1), which might be lower than the kinetics of water release from the squeezed microcapsules.

The FTIR spectra of freeze-dried SA-XG solutions and microcapsules are shown in Fig. 31. All spectra displayed a band between 3000 and 3700 cm-1 (OH stretching) followed by a small band (3000-2850 cm-1) due to CH stretching.
The FTIR spectra of freeze-dried SA-XG solutions and microcapsules are shown in Fig. 31. All spectra displayed a band between 3000 and 3700 cm-1 (O-H stretching) followed by a smaller band (3000-2850 cm-1) due to C-H stretching. The FTIR spectrum of SA (a) showed two characteristic peaks around 1595 and 1408 cm-1, corresponding to the asymmetric and symmetric stretching of COO-, respectively. The band at 1020 cm-1 is an antisymmetric stretch (C-O-C) given by the guluronic units (Pereira et al., 2003). The FTIR spectrum of xanthan gum (b) showed two carbonyl peaks: one at 1725 cm-1 corresponding to the acetate groups of an inner mannose unit, and one at 1600 cm-1, the characteristic band of the carboxylate of the pyruvate group and of glucuronic acid (Hamcerencu et al., 2007). The reticulation of alginate with calcium cations caused a decrease in the intensity of the COO- stretching peaks and of the 1031 cm-1 peak (c), indicating ionic bonding between calcium ions and the carboxyl groups of SA, and partial covalent bonding between calcium and the oxygen atoms of the ether groups, respectively. The incorporation of XG into the calcium alginate microcapsules (d) did not produce significant modifications. However, the SA-XG microcapsule spectra (d) showed a decrease in the intensity of the carboxylate bands. This observation could be related to potential hydrogen bonding between alginate carboxylate groups and XG hydroxyl groups [START_REF] Pongjanyakul [END_REF]. Otherwise, the freeze-drying step applied to the microcapsules before the FTIR study could also initiate or reinforce hydrogen bonding between the two polymers. As described in the literature, barrier capsules can be obtained by a coating technique in which negatively charged polymers, such as alginate, are coated with positively charged polymers, such as chitosan. Coating is used to enhance gel stability ([START_REF] Smidsrød [END_REF]; [START_REF] Kanekanian [END_REF]) and to provide a better barrier against cell release ([START_REF] Gugliuzza [END_REF]; [START_REF] Zhou [END_REF]). The reaction of the biofunctional molecule with the membrane results in bridge formation, the length of the bridge depending on the type of cross-linking agent [START_REF] Hyndman [END_REF]. Nevertheless, in the present study, the FTIR spectra of the SA-XG microcapsules did not exhibit the characteristic bands of XG, which could be related to potential ionic bond formation between XG and calcium ions. The low amount of XG compared with the SA concentration could explain this lack of apparent change in the FTIR spectra.

Encapsulation of L. lactis
L. lactis survival
The survival of L. lactis inside and outside the two types of microcapsules was studied in physiological water for 10 days at 30 °C (Fig. 32). The aim of this part was to determine whether a possible enrichment of the microcapsule core with nutrients (M17 supplemented with 0.5 % glucose) and the physiological state of the strain during microcapsule production were key factors in preserving the viability of L. lactis. Non-encapsulated L. lactis was used as a control (data not presented).
During the storage period, a significant decrease in bacterial counts was observed after 10 days at 30 °C for bacteria encapsulated in either exponential or stationary state, regardless of the physiological state of the strain at the beginning of the assay and of the core composition (Fig. 32a,b). The addition of nutrients within the microbeads led to only small differences in terms of L. lactis population. This is certainly due to a rapid release of some nutrients (small molecules) into the storage medium. Indeed, recent studies reported that the pore size of hydrogel beads may range from around 5 to 200 nm depending on their composition and preparation (Zeeb et al., 2015; Gombotz and Wee, 1998). The combination of a supply of nutrients in the core and the encapsulation of the LAB during exponential phase gave the best results in terms of viability (Fig. 32a); under these conditions L. lactis counts were maintained at 5 log CFU.mg-1 after 10 days at 30 °C. Concerning the bacterial counts outside the capsules, in physiological water, a significant increase was observed at the beginning of the storage period whatever the composition of the microcapsules, with a maximum level of 4.5 log CFU.mg-1 reached after 5 days of microcapsule storage at 30 °C. When the capsules were placed in physiological water, bacteria adsorbed on the surface or present in the alginate membrane were certainly released progressively into the external medium. From the 5th day of storage, the L. lactis population started to decrease significantly, reaching 3.5 log CFU.mg-1 at the end of the storage period. The choice of polymers and the structure of the microcapsules were key factors, but the results of this study also highlighted the importance of two additional parameters: the physiological state of the LAB during the encapsulation step and the possible enrichment of the capsules with nutrients.

Nisin activity
Nisin activity inside and outside the microcapsules was determined for the different systems in physiological water at 30 °C for 10 days (Table 12). As observed for bacterial survival, the bacterial physiological state as well as the addition of nutrients to the core were key factors in optimizing nisin activity and, consequently, the antimicrobial properties of the microcapsules. The maximum concentration of active nisin inside the capsules was detected after 1 day of storage at 30 °C, independently of the aqueous-core composition and of the bacterial physiological state. A significant decrease in the amount of active nisin was then observed. The addition of nutrients to the core reduced this phenomenon: a small concentration of active bacteriocin was still detected at the end of the storage period. Previous studies similarly reported changes in bacteriocin concentration with the composition of the nutrient broth, but for non-encapsulated bacteria (Parente & Ricciardi, 1999). The physiological state of L. lactis during the encapsulation step also appeared to be an important factor. When the LAB were encapsulated in stationary state, a small concentration of active nisin was detected inside the microcapsules at day 0, because nisin had already been produced by the bacteria before encapsulation. Conversely, when L. lactis was encapsulated in exponential phase, no nisin was initially detected in the capsules. The physiological state of the LAB at the time of encapsulation therefore determines the initial concentration of active nisin in the microcapsules. Concerning the release of the active compound, one day of storage was necessary before a significant concentration of active nisin was detected outside the microcapsules, in physiological water.
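The concentrations of active nisin reported above come from an activity assay whose exact protocol is not described in this excerpt. The sketch below assumes a common approach, an agar-diffusion assay against an indicator strain (e.g. Micrococcus flavus), in which inhibition-zone diameters are interpolated on a standard curve; the calibration values are purely hypothetical.

```python
# Minimal sketch (assumption): back-calculating active nisin from inhibition-zone
# diameters using a standard curve, zone diameter being roughly linear in log10(C).
import numpy as np

# Hypothetical standard curve: nisin standards (IU/mL) and zone diameters (mm)
std_conc = np.array([10, 50, 100, 500, 1000], dtype=float)
std_zone = np.array([8.0, 10.5, 11.8, 14.3, 15.5])

a, b = np.polyfit(np.log10(std_conc), std_zone, deg=1)   # d = a*log10(C) + b

def nisin_concentration(zone_mm, dilution_factor=1.0):
    """Estimate active nisin (same units as the standards) from a zone diameter."""
    return dilution_factor * 10 ** ((zone_mm - b) / a)

# Example: a capsule extract diluted 10-fold gives a 12.1 mm inhibition zone
print(f"{nisin_concentration(12.1, dilution_factor=10):.0f} IU/mL (estimated)")
```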
The amount of active nisin released then increased gradually, reaching a maximum after one or three days at 30 °C when L. lactis was encapsulated in stationary or exponential state, respectively. Nisin production occurs during bacterial growth. When the bacteria were encapsulated in stationary phase, cell growth was already completed: a high concentration of nisin had been released, but a significant part certainly remained adsorbed on the surface of the bacterial cells. This fraction was encapsulated together with the bacteria and was quickly released outside the microcapsules. After 10 days at 30 °C, active nisin was detected in the NaCl solution used for microcapsule storage for only one of the tested systems: capsules with an alginate membrane and an aqueous core based on nutrients (M17 enriched with 0.5 % glucose), prepared with bacteria in exponential state. To conclude this part, optimizing an encapsulation system requires taking into account two parameters that influence bacterial survival and bacteriocin production: the bacterial physiological state during the encapsulation process and the possible addition of nutrients to the system. These factors could be related to nutritional or cellular stress effects. Microcapsules with L. lactis in exponential state, encapsulated in an alginate membrane with an aqueous core based on xanthan gum and nutrients (M17 enriched with 0.5 % glucose), gave the best results.

Antimicrobial activity
As discussed above, the best system for preserving bacterial survival and allowing nisin release was the microcapsule with L. lactis in exponential state encapsulated in an SA membrane with an aqueous core based on XG and nutrients (M17 enriched with 0.5 % glucose). The antilisterial properties of this system were determined at 30 °C in a synthetic medium, TSB (Fig. 33a). Non-encapsulated L. lactis was used as a control. The Listeria monocytogenes population increased from 2.8 to 7.9 log CFU.mL-1 by the end of the storage period. A clear inhibition of L. monocytogenes growth was observed in the presence of L. lactis, whether free or encapsulated. The antimicrobial properties of L. lactis were therefore not limited by the encapsulation system developed in this study. Moreover, from the 3rd day of storage at 30 °C, the reduction of L. monocytogenes counts was greater with encapsulated L. lactis than with free L. lactis. These data can certainly be explained by a difference in LAB viability. As shown in Fig. 33b, the L. lactis population grew immediately after the incorporation of the strain into the TSB medium. No differences between non-encapsulated and encapsulated bacteria were observed until the 2nd day of storage. From the 3rd day, L. lactis counts decreased, reaching 8.6 and 6.6 log CFU.mL-1 at the end of the storage period for the encapsulated and free strain, respectively. A loss of elementary nutrients in the synthetic medium explains these results. Considering the effect of capsule size on antilisterial activity, smaller beads are more effective than larger beads owing to their higher surface-to-volume ratio, as previously observed by Anal, Stevens, and Remunan-Lopez (2006).
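The size effect mentioned above follows directly from geometry: for a sphere the surface-to-volume ratio is 6/d, so halving the bead diameter doubles the exchange surface per unit volume. A minimal illustration, with diameters chosen arbitrarily:

```python
# Minimal sketch: surface-to-volume ratio of spherical particles (6/d for a sphere).
import math

def surface_to_volume(diameter_um):
    r = diameter_um / 2
    return (4 * math.pi * r**2) / ((4 / 3) * math.pi * r**3)   # equals 6 / diameter

for d in (200, 400, 800):   # illustrative diameters in micrometres
    print(f"d = {d} um  ->  S/V = {surface_to_volume(d) * 1000:.1f} mm^-1")
```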
Introduction
Lactic acid bacteria (LAB) are traditionally used to provide taste and texture and to increase the nutritional value of fermented foods such as dairy products (yoghurt, cheese), meat products and some vegetables. However, a large amount of research has also focused on the great potential of LAB for food preservation. Studies have shown that LAB can inhibit the growth of different microorganisms, including bacteria, yeasts and fungi, through the production of organic acids, hydrogen peroxide, enzymes, defective phages, lytic agents and antimicrobial peptides, or bacteriocins [START_REF] Alzamora [END_REF]. In recent years, innovative bioactive films enriched with LAB have been developed ([START_REF] Gialamas [END_REF]; Sánchez-González et al., 2013; [START_REF] Sánchez-González [END_REF]). Among the biopolymers used as supports for LAB, cellulose derivatives appear as remarkable film-forming compounds. They are not only biodegradable, odorless and tasteless [START_REF] Krochta [END_REF], but they also exhibit good barrier properties against lipids, oxygen and carbon dioxide at low and intermediate relative humidity (Nispero-Carriedo, 1994). Hydroxypropyl methylcellulose (HPMC) has also been used for its good film-forming properties and mechanical resistance. A third interesting polysaccharide used in active packaging is starch; this biopolymer is a renewable resource, inexpensive compared with other compounds, and widely available [START_REF] Lourdin [END_REF]. However, one of the major problems encountered is the decrease of the film's antimicrobial activity over time due to LAB viability problems (Sánchez-González et al., 2013; [START_REF] Sánchez-González [END_REF]). To limit this problem and increase film effectiveness, encapsulation techniques appear as an interesting approach. Indeed, microencapsulation methods permit the entrapment of microbial cells within particles based on different materials and their protection against unfavorable external conditions ([START_REF] Champagne [END_REF]; Zuidam & Shimoni, 2010). Different factors, such as the encapsulation method, the type and concentration of the materials used, the particle size and porosity, and the type of microparticle, affect the effectiveness of the bacterial protection.

Water vapor permeability (WVP) values are known to depend on experimental conditions such as temperature, RH gradient, and the kind and amount of plasticizer. It was verified that biopolymer films are highly permeable to water vapor, which is coherent with the hydrophilic nature of polysaccharides [START_REF] Han [END_REF]. Under the present experimental conditions, significant differences in WVP values were observed between corn starch and HPMC films. This high WVP is of great interest, since it allows mass transport through the film and nisin activity for food safety applications. The optical properties of the films were evaluated through their color and transparency, since these properties have a direct impact on the appearance of the coated product. Film transparency was evaluated through the internal transmittance, Ti (0-1, theoretical range). An increase in Ti can be taken as an increase in transparency [START_REF] Hutchings [END_REF].
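The internal transmittance and colour values discussed in this section are usually derived from reflectance and CIELAB measurements. The sketch below assumes the Kubelka-Munk relation for Ti (Hutchings-type treatment) and the standard CIELAB whiteness index; the exact procedure followed in this work is not given in this excerpt and the numbers used are illustrative only.

```python
# Minimal sketch (assumption): internal transmittance (Kubelka-Munk) and whiteness
# index as commonly computed from reflectance / CIELAB data (e.g. Hutchings, 1999).
import math

def internal_transmittance(R, R0, Rg):
    """Ti from the film reflectance on a white background (R), on a black background
    (R0), and the reflectance of the white background (Rg), all as fractions (0-1)."""
    a = 0.5 * (R + (R0 - R + Rg) / (R0 * Rg))
    b = math.sqrt(a**2 - 1)
    return math.sqrt((a - R0)**2 - b**2)

def whiteness_index(L, a_star, b_star):
    """WI = 100 - sqrt((100 - L*)^2 + a*^2 + b*^2) from CIELAB coordinates."""
    return 100 - math.sqrt((100 - L)**2 + a_star**2 + b_star**2)

# Illustrative numbers only (not measured values from this study)
print(internal_transmittance(R=0.80, R0=0.15, Rg=0.90))
print(whiteness_index(L=81.0, a_star=-0.4, b_star=1.8))
```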
The spectral distribution of Ti (400-700 nm) is shown in Figure 1. The main impact was observed when both capsules and bacteria were added to the film, where the transparency slightly decreased. All systems remained highly transparent, independently of the capsule or bacteria content. These properties, confirmed by Table 2, would also facilitate the application of the films as packaging materials.

FTIR analysis
FTIR spectra of freeze-dried (A/P: 75/25) microbeads and of preconditioned starch and HPMC films before and after microbead incorporation are shown in Fig. 2. As discussed in our previous study, the FTIR spectrum of A/P microbeads displayed the typical bands of the alginate and pectin biopolymers (Fig. 2a). The band at 1590-1600 cm-1 is related to the antisymmetric stretch of C-O of the alginate and pectin carboxylate groups, and the peak at 1413 cm-1 to the symmetric stretch of COO- from the carboxylate groups; the band at 1030 cm-1 is an antisymmetric stretch (C-O-C) given by the guluronic units (Sartori et al., 1997). The small band at 1640-1650 cm-1 indicated the C-O of the HPMC pyranose molecules, or could be related to the O-H stretching of water molecules coupled to the structure of HPMC [START_REF] Klangmuang [END_REF]. The sharp band at 1050 cm-1 (C-O stretching) presented an evident shoulder at 1110 cm-1, attributed to a C-O-C asymmetric stretching vibration [START_REF] Akhtar [END_REF]. In the same sense, previous studies of nano-functionalized films showed that the incorporation of particles did not necessarily result in important FTIR spectra modifications [START_REF] García [END_REF].

Viability of lactic acid bacteria during storage of the films
The viability of free and encapsulated L. lactis added to HPMC and starch films was tested throughout a storage period of 12 days at 5 °C and 75 % RH. L. lactis microbial counts as a function of storage time are shown in Fig. 3a. As can be seen, the viability of encapsulated L. lactis was greater than that of the free bacteria in both polymer matrices. For free L. lactis, a significant reduction of the initial population was observed during the storage period, which indicates that free L. lactis was more sensitive to storage stresses. Comparing the two hydrocolloid matrices, starch appeared to be a more favorable environment for L. lactis survival. Regardless of the nature of the matrix, worse results were obtained with free L. lactis than with encapsulated L. lactis. Counts of free L. lactis were lower than 2 log CFU/cm2 in all films after 5 days of storage, which indicates the great sensitivity of this strain to the lack of nutrients and to the decrease in water content.

Antilisterial activity
The antimicrobial activity of the developed films against Listeria monocytogenes was tested in a synthetic non-selective medium (TSA) stored at 5 °C (Fig. 3b). Pure HPMC and starch films, and films with A/P capsules without L. lactis, were used as control samples. As shown in Fig. 3b, the L. monocytogenes population increased from 2.5 to 7.5 log CFU/cm2 by the end of the storage period. As expected, pure HPMC and starch films with A/P capsules without L. lactis were not effective against L. monocytogenes growth, since no significant differences in microbial growth were observed on the TSA plates.
All films containing bioactive cultures exhibited a significant antilisterial activity, since a reduction of the initial microbial population was observed in all cases during the storage period. The films with free and encapsulated L. lactis therefore showed bactericidal activity. After 3 days of storage, the best growth limitation was obtained with starch and HPMC films containing encapsulated L. lactis; all films with free and encapsulated L. lactis led to a reduction of microbial growth of approximately 3 logs with respect to the control. For the polysaccharide-based films, different results were obtained: the initial population remained constant during the first 7 days and a slight decrease was then observed, in agreement with [START_REF] Gialamas [END_REF] and [START_REF] Sánchez-González [END_REF]. In this sense, the viability of free L. lactis decreased significantly in comparison with encapsulated L. lactis, which limited L. monocytogenes inhibition.

Conclusion

Abstract
Nisin is an antimicrobial peptide produced by strains of Lactococcus lactis subsp. lactis, recognized as safe for food applications by the Joint Food and Agriculture Organization/World Health Organization (FAO/WHO). Nisin can be applied for shelf-life extension, biopreservation, control of fermentation flora, and potentially as a clinical antimicrobial. Entrapment of nisin-producing bacteria in calcium alginate beads is a promising way to immobilize cells in active films in order to extend food shelf-life. The present PhD work aimed to design biopolymeric active packaging entrapping bioprotective lactic acid bacteria (LAB) to control the growth of undesirable microorganisms in foods, particularly L. monocytogenes. First, the mechanical and chemical stability of the alginate beads was improved, and consequently the effectiveness of encapsulation was increased. As a first microsphere design, alginate/pectin (A/P) microbeads were prepared by an extrusion technique to encapsulate nisin-producing Lactococcus lactis subsp. lactis in different physiological states (exponential phase, stationary phase). The results showed that A/P composite beads were more efficient at improving bead properties than beads formulated with pure alginate or pectin: the association of alginate and pectin induced a synergistic effect which improved the mechanical properties of the microbeads. As a second microsphere design, aqueous-core microcapsules were prepared with an alginate hydrogel membrane and a xanthan gum core. Microcapsules with L. lactis in exponential state encapsulated in the alginate membrane, with an aqueous core based on xanthan gum and nutrients, gave the best results and exhibited an interesting antilisterial activity. These microparticles were then applied to food preservation and particularly to active food packaging. Novel bioactive films (HPMC, starch) entrapping active alginate/xanthan gum core-shell microcapsules and alginate/pectin hydrogel beads enriched with L. lactis were developed and tested.
Figure 1: The different types of capsules
Figure 2: Alginate structure
Figure 3: Carrageenan structure
Figure 4: Cellulose acetate phthalate (CAP) structure
Figure 5: Sodium carboxymethyl cellulose (NaCMC) structure
Figure 6: Xanthan gum structure
Figure 7: Chitosan structure
Figure 8: Pectin structure
Figure 9: Dextran structure
Figure 10: Gellan gum structure
Figure 11: Starch structure
Figure 12: Collagen triple helix (from Wikipedia)
Figure 13: Gelatin structure
Figure 14: Caseins structure
Figure 15: Whey protein structure

(v) What are the mechanisms of release? (vi) What are the cost constraints? Different technologies are used to produce microcapsules according to the answers to the above questions. They are presented below. Spray drying is one of the oldest processes and the most widely used microencapsulation technique in the food industry sector. It is an economical and flexible operation. The process involves the atomization of a suspension of microbial cells in a polymeric solution in a chamber supplied with hot air, which leads to solvent evaporation. The dried particles are then separated by a filter or cyclone (M.-J. Chen, Chen, & Kuo, 2007; de Vos et al., 2010; K. …).

Figure 16: Visualization of microcapsules containing a nile red stained oil phase by a light microscopy image (a) and by CLSM using the red fluorescence channel and transmitted light detection (b). The fluorescence signal allows the oil-containing and air-containing microcapsules to be unambiguously distinguished. Scale bar is shown in µm (Lamprecht et al., 2000).
Figure 17: (a) TEM of CS-βlg nanoparticles (N-native) in simulated gastric fluid with pepsin for 0.5 h, (b) in simulated intestinal fluid with pancreatin for 10 h, (c) then degraded by chitosanase and lysozyme for 4 h. (d) TEM of chitosan nanoparticles in simulated intestinal fluid with pancreatin for 10 h, (e) and degraded by chitosanase and lysozyme for 4 h.

4.1.1.4. Scanning Electron Microscopy (SEM)
SEM provides information on surface characteristics, such as composition, shape and size (Mohsen Jahanshahi & Babaei, 2008; [START_REF] Pierucci [END_REF]; [START_REF] Montes [END_REF]). The samples must be frozen, dried or fractured and subsequently coated with metal compounds, which alters the representativeness of the sample. As an example, Rahimnejad, Jahanshahi and Najafpour (2006) determined nanoparticle size and distribution by SEM (Fig. 18). The samples (protein (BSA) nanoparticles) were dipped into liquid nitrogen for 10 min and then freeze-dried. The sample was fixed on an aluminum stub and coated with 20 nm of gold-palladium. The nanoparticles were shown to be spherical, with sizes well below 100 nm.
Figure 18: Scanning electron microscopy of the outer surface of the BSA nanoparticles. a: Outer surface of the particle at a magnification of 30000. b: Outer surface of the particle at a magnification of 60000.
Figure 19: Deflection images of micellar casein at two pHs ((A) pH 6.8 and (B) pH 4.8) and whey proteins ((C) pH 4.8). Each image corresponds to 512 horizontal lines that describe the outward and return of the AFM cantilever tip (1024 scans are made on each image). The graphics below each image correspond to height profiles taken from a cross-section of the AFM images.
Figure 20: Height images of bacterial strains L. rhamnosus GG and L. rhamnosus GR-1. Each image corresponds to 512 horizontal lines that describe the outward and return of the AFM cantilever tip (1024 scans are made on each image); insets: 3D views of the bacterial strains.
Figure 21: Raw micrographs of samples: (a) ceramic beads; (b) plasma aluminium; and (c) zinc dust.

The elastic (G′) and viscous (G″) moduli of gels are investigated by dynamic mechanical analyses with a plate-plate geometry (20 mm) at 20 °C. Rheological frequency sweep tests are also performed. Dynamic strain sweep tests are measured at a frequency of 1 Hz to investigate the linear viscoelastic range. In general, the elastic modulus of a gel depends on the number of cross-links and on the length and stiffness of the chains between cross-links ([START_REF] Pongjanyakul [END_REF]; Montes, Gordillo, Pereyra, & de la Ossa, 2011).
Differential scanning calorimetry (DSC) – detects crystals of an encapsulated compound and interactions between biopolymers in the particles (Ribeiro et al., 2005; Yang et al., 2013).
Rheological gel characterization – rheological parameters of the initial preparation; mechanical parameters of the capsules (Pongjanyakul & Puttipipatkhachorn, 2007).
Spectrophotometry – the relative amount of released compounds.

[START_REF] Allan-Wojtas [END_REF] prepared calcium alginate microcapsules, with or without probiotic bacteria, using emulsification. The results showed large differences between the alginate matrix of microcapsules without and with bacteria: the presence of bacteria during gelation caused local changes to the gelation process and the occurrence of a "void space" phenomenon, as observed in fermented dairy products. [START_REF] Chandramouli [END_REF] studied the effect of capsule size, sodium alginate concentration and calcium chloride concentration on the viability of encapsulated bacteria. The viability of probiotic bacteria in the microcapsules increased with alginate capsule size and gel concentration, while no significant differences were observed when the concentration of calcium chloride increased. L. lactis culture was regenerated by transferring a loopful of the stock culture into 10 mL of M17 broth and incubating at 30 °C overnight. A 10 µL aliquot of the overnight culture was then transferred into 10 mL of M17 broth and grown at 30 °C to the exponential or stationary phase of growth (6 and 48 h, respectively).
L. lactis cells were collected by centrifugation (20 min, 4 °C, 5000 rpm), diluted and added to the SA solution to obtain a target inoculum in the microspheres of 10^5 CFU.mg-1. The core of the capsules was composed of XG (0.2 % (w/w)) dissolved in sterile physiological water (9 % sodium chloride) or in sterile M17 broth supplemented with 0.5 % D(+)-glucose. Preliminary studies indicated a positive effect of the addition of 0.5 % glucose on L. lactis growth and nisin production (data not shown). Microcapsules were made using an Encapsulator B-395 Pro (BÜCHI Labortechnik, Flawil, Switzerland). The Büchi technology is based on the principle that a laminar flowing liquid jet breaks up into equal-sized droplets under a superimposed nozzle vibration. The vibration frequency determines the number of droplets produced and was adjusted to 700 Hz to generate 700 droplets per second. Nozzle diameters of 200 µm for the membrane and 80 µm for the core were used for the preparation of the capsules. The droplets fell into a CaCl2 solution (200 mM) to allow microparticle formation. The capsules were kept in the gelling bath for 15 min to complete the reticulation process, and were then filtered and washed with buffer solution (9 % sodium chloride).
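With a vibrating-nozzle encapsulator, the jet breaks into one droplet per vibration cycle, so the expected droplet volume is roughly the liquid flow rate divided by the frequency. The sketch below illustrates this order-of-magnitude estimate; the flow rate is a hypothetical value, since only the 700 Hz frequency and the nozzle diameters are specified above.

```python
# Minimal sketch (assumption): expected droplet diameter from a vibrating-nozzle
# encapsulator, with droplet volume = (flow rate) / (vibration frequency).
import math

def droplet_diameter_um(flow_rate_ml_per_min, frequency_hz):
    q_um3_per_s = flow_rate_ml_per_min / 60 * 1e12   # 1 mL = 1e12 um^3
    v_droplet = q_um3_per_s / frequency_hz            # one droplet per cycle
    return (6 * v_droplet / math.pi) ** (1 / 3)

# Hypothetical total flow rate of 2 mL/min at the 700 Hz used in the study
print(f"{droplet_diameter_um(2.0, 700):.0f} um before gelation")
```

With these assumed numbers the estimate (~450 µm) is of the same order as the ~413-419 µm capsule diameters reported in Table 11, which is the kind of consistency check this relation is useful for.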
Figure 22: Microscopic images of an A/P (75/25) bead population (a) and a single microbead (b).
Figure 23: Force/gap curves of alginate, pectin and alginate/pectin microbeads.
Figure 24: FTIR spectra of alginate, pectin and alginate/pectin solutions (a) and of alginate, pectin and alginate/pectin composite microbeads (b).
Figure 25: Survival of Lactococcus lactis, encapsulated in exponential state, during a storage period of 7 days in physiological water at 30 °C (▲ 100/0, × 75/25, □ 50/50, 25/75, • 0/100): (a) L. lactis inside physiological-water microbeads; (b) L. lactis inside glucose-enriched M17 microbeads; (c) L. lactis outside physiological-water microbeads; (d) L. lactis outside glucose-enriched M17 microbeads. Mean values and standard deviations.
Figure 26: Survival of Lactococcus lactis, encapsulated in stationary state, during a storage period of 7 days in physiological water at 30 °C (▲ 100/0, × 75/25, □ 50/50, 25/75, • 0/100): (a) L. lactis inside physiological-water microbeads; (b) L. lactis inside glucose-enriched M17 microbeads; (c) L. lactis outside physiological-water microbeads; (d) L. lactis outside glucose-enriched M17 microbeads. Mean values and standard deviations.
Figure 28: Effect of Lactococcus lactis, free or encapsulated (75/25 microbeads), on the growth of Listeria monocytogenes (▲ L. lactis non-encapsulated, 75/25 microbeads, control in solid line) on TSB medium stored at 30 °C (a), and survival of L. lactis in contact with the culture medium (▲ L. lactis non-encapsulated, • 75/25 microbeads) (b). Mean values and standard deviations.

- Braccini, I., & Pérez, S. (2001). Molecular basis of Ca2+-induced gelation in alginates and pectins: the egg-box model revisited. Biomacromolecules, 2(4), 1089-1096.
- …, T., Budde, B., & Koch, A. (2003). Application of Leuconostoc carnosum for biopreservation of cooked meat products. Journal of Applied Microbiology, 95(2), 242-249.
- Jaya, S., Durance, T. D., & Wang, R. (2008). Effect of alginate-pectin composition on drug release characteristics of microcapsules. Journal of Microencapsulation, 26(2), 143-153.
- Léonard, L., Degraeve, P., Gharsallaoui, A., Saurel, R., & Oulahal, N. (2014). Design of biopolymeric matrices entrapping bioprotective lactic acid bacteria to control Listeria monocytogenes growth: Comparison of alginate and alginate-caseinate matrices entrapping Lactococcus lactis subsp. lactis cells. Food Control, 37, 200-209.
- Madziva, H., Kailasapathy, K., & Phillips, M. (2005). Alginate-pectin microcapsules as a potential for folic acid delivery in foods. Journal of Microencapsulation, 22(4), 343-351.
- Parente, E., & Ricciardi, A. (1999). Production, recovery and purification of bacteriocins from lactic acid bacteria. Applied Microbiology and Biotechnology, 52(5), 628-638.
- Pawar, S. N., & Edgar, K. J. (2012). Alginate derivatization: A review of chemistry, properties and applications. Biomaterials, 33(11), 3279-3305.
- Pereira, L., Sousa, A., Coelho, H., Amado, A. M., & Ribeiro-Claro, P. J. A. (2003). Use of FTIR, FT-Raman and 13C-NMR spectroscopy for identification of some seaweed phycocolloids. Biomolecular Engineering, 20(4-6), 223-228.
- Polk, A., Amsden, B., De Yao, K., Peng, T., & Goosen, M. F. A. (1994). Controlled release of albumin from chitosan-alginate microcapsules. Journal of Pharmaceutical Sciences, 83(2), 178-185.

Figure 29: Microscopic image of a single alginate-xanthan gum microcapsule.
Figure 30: Typical force-displacement curves of alginate-xanthan microcapsules prepared with M17 and with physiological water.
Figure 31: FTIR spectra of freeze-dried (a) alginate solution, (b) xanthan gum solution, (c) alginate microbeads and (d) alginate-xanthan microcapsules.

Physiological water permits only the control of osmotic pressure and the avoidance of cellular lysis. Immobilization of L. lactis in the SA membrane seems to preserve bacterial viability in physiological water: although the bacterial population inside the capsules decreased significantly only after 3 days at 30 °C, at the end of the storage period microbial counts remained above 3 log CFU.mg-1.

Figure 32: Survival of Lactococcus lactis inside (a, b) and outside (c, d) microcapsules during a storage period of 10 days in physiological water at 30 °C (▲ physiological water, • M17 supplemented with glucose); (a, c) L. lactis encapsulated in exponential state; (b, d) L. lactis encapsulated in stationary state. Mean values and standard deviations.
Figure 34: Spectral distribution of internal transmittance (Ti) of films equilibrated at 5 °C and 75 % RH.
Figure 35: FTIR spectra of A/P microbeads (a), starch and HPMC films with and without A/P microbead incorporation (b), magnification of the starch film spectra (c) and of the HPMC film spectra (d).
Figure 3: Effect of bioactive films on the survival of Lactococcus lactis in the film in contact with the culture medium (a) and on the growth of Listeria monocytogenes (b) on TSA medium stored at 5 °C. Mean values and standard deviations (□ HPMC, ∆ HPMC+B, ○ HPMC+C, ◊ HPMC+C+B, ■ ST, ▲ ST+B, • ST+C, ♦ ST+C+B; control in solid line). B = bacteria; C = capsules.

Table 1: Comparison of various carbohydrates used for bacteria encapsulation in the literature
Table 2: Comparison of various proteins used for bacteria encapsulation in the literature
Table 3: Encapsulation methods
Table 4: Particle characterization
Table 5: Pure alginate capsules
Table 6: Capsules produced with alginate mixed with other polymers
Table 7: Size, sphericity and convexity of microbeads with different ratios of polymers, at day 0 and after a storage period in physiological water of 7 days at 30 °C
Table 8: FTIR bands of alginate and pectin with assignments
Table 9: Concentrations of active nisin for 3 days at 30 °C inside and outside hydrogel microspheres containing L. lactis encapsulated in exponential state (physiological water and glucose-enriched M17 cores)
Table 10: Concentrations of active nisin for 3 days at 30 °C inside and outside hydrogel microspheres containing L. lactis encapsulated in stationary state (physiological water and glucose-enriched M17 cores)
Table 11: Size, sphericity and convexity of microcapsules at day 0 and after a storage period in physiological water of 10 days at 30 °C
Table 12: Concentrations of active nisin for 10 days at 30 °C inside and outside alginate-xanthan microcapsules containing L. lactis encapsulated in stationary or exponential state (physiological water and glucose-enriched M17 cores)

Table 1: Comparison of various carbohydrates used for bacteria encapsulation in the literature
Carbohydrate | Advantages / Disadvantages | Interest with probiotics | References
Alginate | simplicity | | (M.-J. Chen & Chen, 2007; Burey et al., 2008; Krasaekoopt et al., 2003)
Cellulose acetate phthalate (CAP) | insoluble in acid media (pH ≤ 5), soluble when pH ≥ 6; good protection for microorganisms; adding CAP with starch and oil improved the viability of probiotics | + | (Favaro-Trindade & Grosso, 2002; Burgain et al., 2011; Rao et al., 1989; Tripathy & Raichur, 2013)
Sodium carboxymethyl cellulose (NaCMC) | | |

Table 2: Comparison of various proteins used for bacteria encapsulation in the literature
(Fuentes-Zaragoza et al., 2010; J. Singh et al., 2010; Anal & Singh, 2007; Mortazavian et al., 2008; Crittenden et al., 2001)

Table 4: Particle characterization
Machines | Measurement | References
Static light scattering (SLS) | the intensity of scattered light waves as a function of scattering angle |
Dynamic light scattering (DLS) | size of particles from the direction and speed of particle movement due to Brownian motion |
Particle size | Confocal Laser Scanning Microscopy (CLSM) |

Table 5: Pure alginate capsules
Alginate % | Encapsulation method | Probiotic bacteria | Survival | Stability of capsules | Reference
3 % | freeze-dried | Lactobacillus acidophilus MJLA1 and Bifidobacterium spp. BDBB2 | + | + | (Shah & Ravula, 2000)
2, 3 and 4 % | spray drying | Bifidobacterium longum KCTC 3128 and HLC 3742 | + | + | Survival of Bifidobacterium longum immobilized in calcium alginate beads in simulated gastric conditions (K. Y. Lee & Heo, 2000)
10 % | | Bifidobacterium lactis and Lactobacillus acidophilus | − | − | (Trindade & Grosso, 2000)
1.8 % alginate solution | emulsification and ionic gelification | Bifidobacterium lactis Bb-12, B. adolescentis 15703, B. breve 15700, B. lactis Bb-12 and B. longum Bb-46 | + | − | (Hansen, Allan-Wojtas, Jin, & Paulson, 2002)

Capsules produced with alginate mixed with other polymers
The alginate capsules are very porous and allow diffusion of water in and out of the matrix, so alginate is not fully adapted to protecting probiotic bacteria from the external environment. Mixing or coating with other polymers is used to compensate for this defect. Cells encapsulated in denatured whey protein presented a better stability than those in undenatured whey protein in simulated acidic and bile conditions; this study indicated that the combination of a denatured whey protein isolate and a sodium alginate matrix was able to deliver probiotics with an improved survival rate and is suitable for controlled core-release applications. Lactobacillus casei was entrapped in different bead types made of sodium alginate and different ratios of amidated low-methoxyl pectin (A/P) by the extrusion technique. The beads made with A-P blends in 1:4 and 1:6 ratios provided better protection to Lb. casei under simulated gastric juice and bile salts. Finally, [START_REF] Albertini [END_REF] prepared beads by the extrusion method; nine formulations were developed using alginate as the main carrier, xanthan gum (XG) as a hydrophilic retardant polymer, and the cellulose derivative cellulose acetate phthalate (CAP) as a gastro-resistant polymer. The results showed that the combination of 0.5 % XG or 1 % CAP within a 3 % alginate solution increased the survival of the probiotic bacteria in acidic conditions from 63 % (freeze-dried bacteria) up to 76 %. From all the results presented above, it can be concluded that mixing alginate with other polymers in one matrix, or coating alginate capsules with other polymers, enhances the protective properties of the beads and provides more stable capsules. Vodnar & Socaciu (2014) encapsulated viable cells in chitosan-coated alginate beads; the microencapsulated L. casei and L. plantarum were resistant to simulated gastric conditions and bile solution. González-Forte, Bruno, & Martino (2014) used a starch coating on alginate to efficiently protect L. plantarum through a simulated gastrointestinal system (HCl pH 1-2). [START_REF] Chan [END_REF] demonstrated, as a novel encapsulation proposal, the coating of alginate microparticles with hydroxypropyl cellulose.
Results showed there was a significant improvement in the survival of the encapsulated cells when exposed to acidic media of pH 1.2 and 2. Martin, Lara-Villoslada, Ruiz, & Morales (2013) mixed alginate and starch and showed that the viability of the probiotic decreased by 3 log cell numbers in the formula with alginate only, and by 0.3 log in the formula with mixed alginate and starch. Moreover, the alginate/starch mixture allowed a suitable particle size to be obtained, and the viability of the probiotic was not modified after 45 days at 4 °C. [START_REF] Rajam [END_REF] mixed denatured and undenatured whey proteins with alginate to prepare microcapsules with Lactobacillus plantarum. Sandoval-Castilla, Lobato-Calleros, García-Galindo, Alvarez-Ramírez, & Vernon-Carter (2010) entrapped Lactobacillus casei in the alginate-pectin beads described above.

Table 6: Capsules produced with alginate mixed with other polymers
Polymer | Encapsulation method | Probiotic bacteria | Survival | Stability of capsules | Reference
Alginate 3 %, skim milk, polydextrose, soy fiber, yeast extract, chitosan, κ-carrageenan and whey (0.6 %) | spray drying | Bifidobacterium longum ATCC 15707, Bifidobacterium infantis ATCC 25962 and Bifidobacterium breve ATCC 15700 | + | + | (Yu, Yim, Lee, & Heo, 2001)
Alginate/skim milk | freeze drying | Acetobacter xylinum | + | + | (Jagannath, Raju, & Bawa, 2010)
Alginate and chitosan | bead coating | Lactococcus lactis ssp. lactis | + | + | (Klinkenberg, Lystad, Levine, & Dyrset, 2001)
| extrusion | Lactobacillus rhamnosus | + | | (Le-Tien, Millette, Mateescu, & Lacroix, 2004)
| extrusion | Lactobacillus helveticus | + | | (Göksungur, Gündüz, & Harsa, 2005)
| extrusion/coating | Lactobacillus acidophilus | + | | (K. Kailasapathy & Iyer, 2005)

Stock cultures of Lactococcus lactis subsp. lactis ATCC 11454, a nisin-producing strain, Micrococcus flavus DSM 1790 and Listeria monocytogenes CIP 82110 were kept frozen (-80 °C) in synthetic media enriched with 30 % glycerol (M17 broth for the LAB and TSB broth (Biokar Diagnostics, Beauvais, France) for the other strains).

1.2. Biofilm materials
Sodium alginate from brown algae (viscosity ≤ 0.02 Pa.s for a 1 % wt aqueous solution at 20 °C) and pectin from citrus peel (galacturonic acid ≥ 74 %, methoxy groups ≥ 6). Stock cultures of Lactococcus lactis subsp. lactis ATCC 11454 and Listeria monocytogenes CIP 82110 were kept frozen (-80 °C) in synthetic media enriched with 30 % glycerol (M17 broth for the LAB and Tryptone Soy Broth (TSB, Biokar Diagnostics, Beauvais, France) for the other strain).

Encapsulation of Lactococcus lactis subsp. lactis on alginate/pectin composite microbeads: effect of matrix composition on bacterial survival and nisin release
This chapter presents the encapsulation of Lactococcus lactis subsp. lactis in alginate/pectin microspheres. Different alginate/pectin (A/P) ratios were used (100/0, 75/25, 50/50, 25/75, 0/100), and microspheres were prepared by extrusion, encapsulating Lactococcus lactis subsp. lactis under four conditions: two different physiological states (exponential phase, stationary phase) and two different growth media (glucose-enriched M17, physiological water). The objective of this study was to prepare alginate/pectin microspheres and to evaluate the polymer ratio that best allows microbial survival, nisin release and antilisterial activity, compared between free and encapsulated L. lactis. Statistical analyses were performed with Statgraphics Plus for Windows 5.1.
Homogeneous sample groups were obtained by using the LSD test (95 % significance level).

Chapter III
Alginate and pectin solutions (1 % (w/w)) were prepared with physiological water.

Table 7: Size, sphericity and convexity of microbeads with different polymer ratios, at day 0 and after a storage period of 7 days in physiological water at 30 °C.
A/P | Size D[4,3] (µm), day 0 / day 7 | Sphericity, day 0 / day 7 | Convexity, day 0 / day 7
100/0 | 264 (13) ax / 255 (5) ax | 0.90 (0.01) ax / 0.93 (0.01) ax | 0.25 (0.01) ax / 0.23 (0.04) ax
75/25 | 274 (10) ax / 267 (3) bx | 0.90 (0.02) ax / 0.90 (0.01) ax | 0.27 (0.01) ax / 0.24 (0.05) ax
50/50 | 275 (2) ax / 268 (7) bx | 0.95 (0.01) bx / 0.90 (0.02) ay | 0.43 (0.01) bx / 0.35 (0.06) bx
25/75 | 272 (11) ax / 285 (6) cx | 0.94 (0.01) bx / 0.88 (0.01) ay | 0.50 (0.01) cx / 0.50 (0.01) cx

Table 8: FTIR bands of alginate and pectin with assignments.
Polymer | Wavenumber (cm-1) | Functional groups
Alginate | 1600 | COO- antisymmetric stretch
Alginate | 1412 | COO- symmetric stretch
Alginate | 1024 | C-O-C antisymmetric stretch
Pectin | 1740 | C=O stretching
Pectin | 1640-1610 | COO- antisymmetric stretching
Pectin | 1440 | COO- symmetric stretch
Pectin | 1380 | C-H bending
Pectin | 1240 | C-O, C-C ring stretching
Pectin | 1145 | C-O-C of glycosidic link/ring
Pectin | 1100 | C-O, C-C, C-C-H, O-C-H ring

2.2. Encapsulation of L. lactis

Table 9: Concentrations of active nisin for 3 days at 30 °C inside and outside hydrogel microspheres containing L. lactis encapsulated in exponential state. Two internal compositions of microbeads were tested: physiological water and M17 enriched with 0.5 % glucose.
Table 10: Concentrations of active nisin for 3 days at 30 °C inside and outside hydrogel microspheres containing L. lactis encapsulated in stationary state. Two internal compositions of microbeads were tested: physiological water and M17 enriched with 0.5 % glucose. a,b,c Different letters in the same column indicate significant differences among samples (p < 0.05).

Design of microcapsules containing Lactococcus lactis subsp. lactis in alginate shell and xanthan gum with nutrients core
This chapter presents the encapsulation of Lactococcus lactis subsp. lactis in particles composed of an alginate membrane and a xanthan core. The microbeads were prepared with alginate for the membrane and, in the core, xanthan gum in glucose-enriched M17 or in physiological water, combined with L. lactis in different physiological states. The objective of this study was to develop a new bead-preparation technique that would improve the survival of L. lactis, as well as its antimicrobial activity, compared with free L. lactis. Nisin activity was compared between encapsulated and free bacteria in exponential phase, in physiological water, for 10 days at 30 °C. Nisin activity was observed to be best for L. lactis encapsulated in exponential phase with a nutritive medium of M17 enriched with 0.5 % glucose over the 10 days. Indeed, the reduction of the L. monocytogenes population was shown to be higher with encapsulated L. lactis than with free L. lactis. In conclusion, the best results were obtained with microspheres having an alginate membrane and a xanthan core containing M17 medium with 0.5 % glucose, and with L. lactis encapsulated in exponential phase.
Chapter IV
The capsules were prepared with alginate for the membrane fraction (1.3 %) in physiological water and with 0.2 % xanthan gum in the core, in physiological water or in sterile M17 broth supplemented with 0.5 % D(+)-glucose. The bacterial inoculum in the microspheres was 10^5 CFU.mg-1. The physico-chemical characterization of the microspheres allowed the diameter, sphericity and convexity of the beads to be evaluated from day 0 to day 10 at 30 °C. All microspheres were produced under the same conditions. The microbead diameter, sphericity and convexity showed no significant differences at day 0; after 10 days the observation was the same, which means that this type of capsule is stable and offers good protection of the bacteria against external conditions. The survival and growth of L. lactis in the microspheres over 10 days at 30 °C changed significantly according to two factors: the internal composition of the microsphere (physiological water or M17 enriched with 0.5 % glucose) and the physiological state of L. lactis. The results show that the optimal conditions for the viability of the encapsulated bacteria are an exponential growth state and 0.5 % glucose in M17 medium with the xanthan gum.

- Sandoval-Castilla, Lobato-Calleros, García-Galindo, Alvarez-Ramírez, & Vernon-Carter, E. J. (2010). Textural properties of alginate-pectin beads and survivability of entrapped Lb. casei in simulated gastrointestinal conditions and in yoghurt. Food Research International, 43(1), 111-117.

Table 11: Size, sphericity and convexity of microcapsules at day 0 and after a storage period of 10 days in physiological water at 30 °C.
Time (days) | Size D[4,3] (µm) | Sphericity | Convexity
0 | 413 (10) a | 0.6 (0.2) a | 0.23 (0.02) a
10 | 419 (7) a | 0.6 (0.2) a | 0.27 (0.02) a
a,b,c Different letters indicate significant differences between samples within the same column (p < 0.05).

Table 12: Concentrations of active nisin for 10 days at 30 °C inside and outside alginate-xanthan microcapsules containing L. lactis encapsulated in stationary or exponential state. Two internal compositions of microcapsules were tested: physiological water and M17 enriched with 0.5 % glucose. Mean values and standard deviations. Different letters in the same row indicate significant differences over time for a given sample (p < 0.05). Rows are grouped by bacterial physiological state and core composition.

3. Conclusion
SA/XG core-shell microcapsules were developed for the immobilization of L. lactis in the membrane. The physiological state of the bacteria during the encapsulation process and the enrichment of the aqueous core with nutrients (M17 supplemented with 0.5 % glucose) were determining factors for both bacterial viability and bacteriocin activity. Microcapsules with L. lactis in exponential state encapsulated in an alginate membrane, with an aqueous core based on xanthan gum and nutrients (M17 enriched with 0.5 % glucose), gave the best results in terms of bacterial viability and nisin activity. These microcapsules allowed a complete inhibition of L. monocytogenes growth for 7 days at 30 °C. The antimicrobial properties of L. lactis were not limited by the encapsulation system developed in this study.
It could be interesting to design novel bioactive food packaging based on biopolymer films enriched with these SA/XG core-shell microcapsules.

Figure 33: Effect of Lactococcus lactis free or encapsulated in exponential state (alginate-xanthan microcapsules with internal medium prepared with glucose-enriched M17) on the growth of Listeria monocytogenes (L. lactis non-encapsulated, ▲ alginate-xanthan microcapsules, control in solid line) on TSB medium stored at 30 °C (a), and survival of L. lactis in contact with the culture medium (L. lactis non-encapsulated, ▲ alginate-xanthan microcapsules) (b). Mean values and standard deviations.

The microcapsules presented a plastic behavior, and no differences were observed in terms of mechanical properties among the studied systems. The aqueous-core composition of the microcapsules did not affect the stability of the SA network. The addition of XG caused a change in the matrix structure of the microcapsule membrane through the establishment of potential hydrogen bonding between XG hydroxyl groups and SA carboxylate groups, which certainly leads to better LAB protection.

- Concha-Meyer, A., Schöbitz, R., Brito, C., & Fuentes, R. (2011). Lactic acid bacteria in an alginate film inhibit Listeria monocytogenes growth on smoked salmon. Food Control, 22, 485-489.
- Burgain, J., Gaiani, C., Linder, M., & Scher, J. (2011). Encapsulation of probiotic living cells: From laboratory scale to industrial applications. Journal of Food Engineering, 104(4), 467-483.
- Champagne, C. P., & Kailasapathy, K. (2008). Encapsulation of probiotics. Delivery and Controlled Release of Bioactives in Foods and Nutraceuticals, 154, 344-369.
- Cellesi, F., Weber, W., Fussenegger, M., Hubbell, J. A., & Tirelli, N. (2004). Towards a fully synthetic substitute of alginate: Optimization of a thermal gelation/chemical cross-linking scheme ("tandem" gelation) for the production of beads and liquid-core beads. Biotechnology and Bioengineering, 88(6), 740-749.
- Ding, W. K., & Shah, N. P. (2009). An improved method of microencapsulation of probiotic bacteria for their stability in acidic and bile conditions during storage. Journal of Food Science, 74(2), M53-M61.
- Elçin, Y. M. (1995). Encapsulation of urease enzyme in xanthan-alginate spheres. Biomaterials, 16(15), 1157-1161.
- Goddard, E. D., & Gruber, J. V. (1999). Principles of Polymer Science and Technology in Cosmetics and Personal Care. CRC Press.
- Jen, A. C., Wake, M. C., & Mikos, A. G. (1996). Review: Hydrogels for cell immobilization. Biotechnology and Bioengineering, 50(4), 357-364.
- Parente, E., & Ricciardi, A. (1999). Production, recovery and purification of bacteriocins from lactic acid bacteria. Applied Microbiology and Biotechnology, 52(5), 628-638.
- Pawar, S. N., & Edgar, K. J. (2012). Alginate derivatization: A review of chemistry, properties and applications. Biomaterials, 33(11), 3279-3305.
- Sánchez-González, L., Saavedra-Quintero, J., & Chiralt, A. (2013). Physical properties and antilisterial activity of bioactive edible films containing Lactobacillus plantarum. Food Hydrocolloids, 33(1), 92-98.

Chapter V:

References
- Alzamora, S. M., Tapia, M. S., & López-Malo, A. (2000). Minimally processed fruits and vegetables: fundamental aspects and applications. Aspen Publishers, Inc.
- Antwi, M., Theys, T.E., Bernaerts, K., Van Impe, J.F., & Geeraerd, A.H. (2008). Validation of a model for growth of Lactococcus lactis and Listeria innocua in a structured gel system: Effect of monopotassium phosphate. International Journal of Food Microbiology, 125(3), 320-329.
- Léonard, L., Gharsallaoui, A., Ouaali, F., Degraeve, P., Waché, Y., Saurel, R., & Oulahal, N. (2013). Preferential localization of Lactococcus lactis cells entrapped in a caseinate/alginate phase separated system. Colloids and Surfaces B: Biointerfaces, 109, 266-272.
- Léonard, L., Degraeve, P., Gharsallaoui, A., Saurel, R., & Oulahal, N. (2014). Design of biopolymeric matrices entrapping bioprotective lactic acid bacteria to control Listeria monocytogenes growth: Comparison of alginate and alginate-caseinate matrices entrapping Lactococcus lactis subsp. lactis cells. Food Control, 37, 200-209.
- Liu, L., O'Conner, P., Cotter, P.D., Hill, C., & Ross, R.P. (2008). Controlling Listeria monocytogenes in Cottage cheese through heterologous production of enterocin A by Lactococcus lactis. Journal of Applied Microbiology, 104, 1059-1066.
- Maragkoudakis, P.A., Mountzouris, K.C., Psyrras, D., Cremonese, S., Fischer, J., Cantor, M.D., & Tsakalidou, E. (2009). Functional properties of novel protective lactic acid bacteria and application in raw chicken meat against Listeria monocytogenes and Salmonella enteritidis. International Journal of Food Microbiology, 130, 219-226.

Chapter V: Physical properties and antilisterial activity of bioactive films containing alginate/pectin composite microbeads with entrapped Lactococcus lactis subsp. lactis.

Table 1. Effect of the incorporation of lactic acid bacteria (Lactococcus lactis) entrapped in microcapsules (C) and microbeads (B) on film mechanical properties (Elongation, Tensile Strength, Elastic Modulus), water vapor permeability, oxygen permeability, moisture content and thickness of biopolymer films (HPMC and starch) equilibrated at 5 °C and 75% relative humidity. Mean values and standard deviations.
Film | E (%) | TS (MPa) | EM (MPa) | WVP (g.µm.m-2.day-1.kPa-1) | OP (cc.m.Pa-1.s-1) x 10^7 | Moisture content (g water.g film-1) | Thickness (µm)
HPMC | 57 (7) a | 24 (4) a | 524 (45) a | 2.15 (0.11) a | 46 (3) a | 0.158 (0.002) a | 159 (6) a
HPMC+C | 41 (7) b | 24 (3) a | 561 (26) a | 2.02 (0.11) a | 51 (8) a | 0.162 (0.002) a | 210 (4) b
HPMC+B | 58 (7) a | 25.5 (3) a | 473 (51) a | 2.25 (0.13) a | 43 (5) a | 0.153 (0.007) a | 153 (8) a
HPMC+C+B | 34 (6) bd | 18.0 (4) a | 447 (34) a | 2.22 (0.16) a | 55 (7) a | 0.154 (0.006) a | 205 (3) b
ST | 3.3 (0.2) c | 20 (2) a | 962 (59) b | 3.05 (0.11) b | 2.50 (0.14) b | 0.166 (0.016) b | 123 (7) c
ST+C | 3.6 (0.3) c | 12 (4) b | 615 (100) c | 2.51 (0.11) c | 2.07 (0.06) c | 0.242 (0.006) c | 124 (4) c
ST+B | 30.6 (1.8) d | 7.1 (0.5) c | 298 (58) d | 3.20 (0.11) b | 2.3 (0.2) b | 0.130 (0.002) d | 122 (4) c
ST+C+B | 31 (2) d | 6.0 (0.4) d | 280 (62) d | 2.42 (0.12) c | 1.91 (0.13) c | 0.273 (0.008) e | 125 (7) c
a,b,c,d Different letters in the same column indicate significant differences among formulations (p < 0.05).

Table 2. Lightness (L*), chroma (C*ab), hue (h*ab) and whiteness index (WI) of biopolymer films equilibrated at 5 °C and 75% relative humidity. Mean values and standard deviations.
Film | L* | C*ab | h*ab | WI
HPMC | 81 (3) a | 1.8 (0.6) a | 104 (3) a | 73 (3) a
HPMC+C | 79.1 (1.9) a | 2.3 (1.2) a | 99 (2) a | 72 (2) a
HPMC+B | 68.3 (0.6) b | 0.8 (0.4) a | 103.0 (1.5) a | 68.2 (0.6) b
HPMC+C+B | 63 (2) c | 1.2 (0.2) a | 103 (2) a | 63 (2) c
ST | 85.7 (1.3) d | 7.3 (0.7) b | 103.2 (1.8) a | 83.9 (0.9) d
ST+C | 73 (2) e | 6.8 (1.9) b | 102.4 (1.3) a | 80 (3) d
ST+B | 86 (3) d | 5.0 (1.2) b | 99.3 (1.9) a | 86 (2) d
ST+C+B | 73 (4) e | 5.2 (1.7) b | 101.8 (1.2) a | 82 (2) d
a,b,c,d,e Different letters in the same column indicate significant differences among formulations (p < 0.05).

Encapsulation of L. lactis in alginate, combined with entrapment in starch or HPMC films, showed a high efficiency for producing active packaging materials. The mechanical, transport and transparency properties remained suitable for packaging applications despite the incorporation of capsules. Moreover, the developed films with microcapsules presented an interesting antilisterial activity. These systems must be considered as a possible solution for active packaging in the near future.

The aim of this study was to develop aqueous-core microcapsules, to analyze the physico-chemical properties of these microcapsules, and to evaluate their impact on the survival of bacteria encapsulated in exponential or stationary phase. In addition, the release and antilisterial activity of the nisin produced were monitored over 10 days. The results showed that the composition of the liquid core of the microcapsules did not affect the stability of the alginate network and that the capsules withstood the storage conditions well. The addition of xanthan gum in the core made it possible, first, to control the viscosity and to obtain a well-formed spherical core in the center of the microcapsules, but it also reinforced the capsule structure through hydrogen bonding between xanthan gum and the hydroxyl groups of alginate. The microcapsules with L. lactis encapsulated in exponential phase in an alginate membrane and in a core based on M17 enriched with 0.5% glucose and 0.2% xanthan gum gave the best results. These capsules were finally incorporated into starch and HPMC films and an excellent anti-Listeria effect was demonstrated, which is promising for the preparation of active packaging. After some technology-transfer studies towards industry, the films enriched with microcapsules could be finalized for the preservation of moist foods with a short shelf life. These new bioactive films, based on biopolymers (starch, HPMC) and enriched with L. lactis incorporated and stabilized in a system composed of alginate/xanthan gum in the core and alginate/pectin for the membrane, could then be used.

Nisin is an antimicrobial peptide produced by the strain Lactococcus lactis subsp. lactis, authorized for food applications by the Joint Expert Committee on Food Additives of the Food and Agriculture Organization of the United Nations and the World Health Organization (FAO/WHO). Nisin can be applied, for example, for food preservation, biopreservation, control of fermentation flora, and potentially as a clinical antimicrobial agent. Entrapping nisin-producing bacteria in calcium alginate beads is thus a promising route to immobilize active cells and extend the shelf life of foods.
Summary (Résumé): This thesis work aimed at designing active biopolymer packaging containing bioprotective lactic acid bacteria (LAB) to control the growth of undesirable micro-organisms in foods, in particular L. monocytogenes. The mechanical and chemical stability of the alginate beads was first improved, and the encapsulation efficiency was increased. "Alginate/pectin (A/P)" capsules were prepared as the first microspheres, by an extrusion technique. The production of nisin by Lactococcus lactis subsp. lactis encapsulated in different physiological states (exponential phase, stationary phase) was studied. The results showed that the composite (A/P) beads had better properties than those formulated with pure alginate or pure pectin. The association of alginate and pectin induces a synergistic effect which improved the mechanical properties of the microbeads. The second part of the work concerned the development of liquid-core microcapsules with an alginate hydrogel membrane and a xanthan gum core. The results showed that these microcapsules containing L. lactis encapsulated during the exponential phase in an alginate matrix with a nutritive xanthan gum core gave the best results and present an interesting anti-Listeria activity. These microbeads were finally applied to food preservation and in particular to active food packaging. Films (HPMC, starch) were produced by entrapping the active "alginate/xanthan gum" beads enriched with L. lactis in packaging films and applied to food preservation.

Design of microcapsules containing Lactococcus lactis subsp. lactis in alginate shell and xanthan gum with nutrients core

Remerciements / Acknowledgements: Authors thank the European Commission for the Erasmus Mundus Grant to Mrs Bekhit (Erasmus Mundus External Window "ELEMENT" Program).

Chapter II: Material and methods

Abstract: Alginate/pectin hydrogel microspheres were prepared by extrusion based on a vibrating technology to encapsulate bacteriocin-producing lactic acid bacteria. Effects of both the alginate/pectin (A/P) biopolymer ratio and the physiological state of Lactococcus lactis subsp. lactis (exponential phase, stationary phase) were examined for nisin release properties, L. lactis survival and the physico-chemical properties of the beads. Results showed that A/P composites were more efficient at improving bead properties than those formulated with pure alginate or pectin. The association of alginate and pectin induces a synergistic effect which improves the mechanical properties of the microbeads. FTIR spectroscopy confirms possible interactions between alginate and pectin during interpenetrating network formation. The physiological state of the bacteria during the encapsulation process and the microbead composition (A/P ratio, enrichment of the internal medium with nutrients) were determining factors for both bacterial viability and bacteriocin release. Of the several matrices tested, A/P (75/25) with glucose-enriched M17 gave the best results when L. lactis was encapsulated in exponential state.
Keywords: hydrogel microspheres, biopolymers, lactic acid bacteria, physico-chemical properties, antilisterial activity. -Scannell, A. G. M., Hill, C., Ross, R. P., Marx, S., Hartmeier, W., & Arendt, E. K. (2000). -Synytsya, A., Čopı ḱová, J., Matějka, P., & Machovič, V. (2003). Fourier transform Raman and infrared spectroscopy of pectins. Carbohydrate Polymers, 54(1), 97-106. Development of bioactive -Tahiri, I., Desbiens, M., Kheadr, E., Lacroix, C., & Fliss, I. (2009). Comparison of different application strategies of divergicin M35 for inactivation of Listeria monocytogenes in cold-smoked wild salmon. Food Microbiology, 26(8), 783-793. -Trias, R., Bañeras, L., Badosa, E., & Montesinos, E. (2008). Bioprotection of Golden Delicious apples and Iceberg lettuce against foodborne bacterial pathogens by lactic acid bacteria. International Journal of Food Microbiology, 123(1-2), 50-60. -Walkenström, P., Kidman, S., Hermansson, A.-M., Rasmussen, P. B., & Hoegh, L. (2003). Microstructure and rheological behaviour of alginate/pectin mixed gels. Food Hydrocolloids, 17(5), 593-603. -Zhang, J., Daubert, C. R., & Allen Foegeding, E. (2007). A proposed strain-hardening mechanism for alginate gels. Journal of Food Engineering, 80(1), 157-165. Abstract Aqueous-core microcapsules with sodium alginate (SA) hydrogel in membrane and xanthan gum (XG) in core were prepared using ionotropic gelation method to encapsulate bacteriocinproducing lactic acid bacteria (LAB). In this study LAB were immobilized in microcapsule membrane. XG was applied to reinforce SA microcapsules. Molecular interaction between SA and XG in the microcapsules was investigated using FTIR spectroscopy. Microcapsules morphology and mechanical properties were examined. The impact of an enrichment with nutrients of core (M17 broth supplemented with 0.5% glucose vs physiological water) and physiological state of Lactococcus lactis during encapsulation step (exponential vs stationary state) was studied on L. lactis survival, nisin release. Furthermore, the antimicrobial effectiveness of the best system to preserve bacterial survival and permit nisin release was studied against Listeria monocytogenes. No differences were observed in terms of mechanical properties among studied systems. FTIR spectroscopy confirmed the establishment of possible hydrogen bonding between XG hydroxyls groups and SA carboxylate groups which could modify microcapsules release properties. Microcapsules with L. lactis in exponential state encapsulated in SA membrane and aqueous-core based on XG with nutrients gave the best results. At 30 °C, a complete inhibition of L. monocytogenes growth throughout the storage period was observed for these microcapsules.These microcapsules could be used for future applications in food preservation and particularly in food packaging. Novel bioactive films based on biopolymers and enriched with L.lactis in alginate/xanthan gum core-shell microcapsules could be designed. type and concentration of materials used, particle size and porosity or type of microparticles (bead, capsule, composite, coating layer..) were show to affect effectiveness of bacterial protection (Ding & Shah, 2009). Alginate has been widely used as microencapsulation material as it is non-toxic, biocompatible, and cheap [START_REF] Jen | Review: Hydrogels for cell immobilization[END_REF][START_REF] Léonard | Preferential localization of Lactococcus lactis cells entrapped in a caseinate/alginate phase separated system[END_REF]Léonard et al., 2014). 
Alginate consists in homopolymeric and heteropolymeric blocks alternating 1,4linked β-D-mannuronic acid (M) and α-L guluronic acid (G) residues in which the G units form crosslinks with divalent ions, to produce ''egg-box'' model gels (Pawar & Edgar, 2012). Studies have reported that alginate can form strong complexes with other natural polyelectrolytes such as pectin (also a polyuronate) by undergoing chain-chain association and forming hydrogels upon addition of divalent cations (e.g., Ca2+) [START_REF] Fang | Binding behaviour of calcium to polyuronates: Comparison of pectin with alginate[END_REF]Pillay & Fassihi, 1999a), improving mechanical and chemical stability of alginate beads, and consequently improving their encapsulation effectiveness (Pillay & Fassihi, 1999b). The aim of the present paper was to evaluate how HPMC and corn starch films were affected by the incorporation of L. lactis, free or encapsulated in alginate/pectin composite microbeads, through the analysis of different physical properties (water vapor barrier, oxygen permeability, mechanical and optical properties) as well as their antilisterial impact. Results and discussion Physico-chemical properties Properties of films equilibrated at 5°C and 75% RH and are reported in Table 1. Globally, properties of HPMC films were slightly affected by incorporation of capsules and/or bacteria while starch films were dramatically modified (huge increase of elongation properties, strong reduction of EM and TS). With entrapped bacteria, properties of both films became similar except for the oxygen barrier properties which remained highly lower in the case of starch film. The WVP values of corn starch films were in the range of those reported by Greener and Fennema (1989). The slight differences can be attributed to minor changes in the experimental absorption bands between 1100 and 1200 cm -1 were from ether (R-O-R) and cyclic C-C bonds in the ring structure of pectin molecules (Synytsya et al., 2003). The FTIR spectra of the preconditioned films in the presence and absence of A/P microbeads are shown in Fig. 2b. In general, films with and without A/P microbeads present similar features in the FTIR spectral regions. Starch film spectra showed characteristic bands at 931 and 1149 cm -1 , which are associated with the C-O bond stretching. The peaks at 1016 and 1077 cm -1 are characteristic of the C-O stretching of the anhydroglucose ring and the peak at 1645 cm -1 is related to O-H stretching of water molecules linked to starch structure. Finally, the band between 3100 and 3600 cm -1 due to (O-H) stretching is followed a band at 2929 cm -1 is associated with the C-H stretching in the glucose ring. The region below 800 cm -1 displayed complex vibrational modes due to the skeletal vibrations of the pyranose ring in the glucose unit [START_REF] Kizil | Characterization of irradiated starches by using FT-Raman and FTIR spectroscopy[END_REF]. The incorporation of A/P microbeads in the starch film results in very little modifications. In fact, the absorption band at 3300 cm -1 showed a decreased intensity (Fig. 2c), and the band around 1000 cm -1 showed a narrow width in the presence of A/P microbeads. These spectra modification could be related to potential interaction between alginate (COO -) and starch (-OH) groups, via hydrogen bonding (Swamy et al., 2008). 
Although the FTIR spectra of starch before and after functionalization with A/P microbeads show only slight differences, these differences were reproduced in all the tests made on different samples. The HPMC FTIR spectra with and without A/P microbead incorporation showed the same typical bands, with neither significantly different intensities nor evident shifts (Fig. 2b and d). The band situated at 3000-3750 cm-1, corresponding to the hydroxyl group stretching (O-H), is followed by a small band at 3000-2800 cm-1 due to C-H stretching.

Chapter VI: General Conclusion
201,594
[ "790824" ]
[ "201883" ]
01754791
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01754791/file/Sgueglia_19580.pdf
Alessandro Sgueglia Peter Schmollgruber Nathalie Bartoli Olivier Atinault Emmanuel Benard Joseph Morlier Exploration and Sizing of a Large Passenger Aircraft with Distributed Ducted Electric Fans In order to reduce the CO2 emissions, a disruptive concept in aircraft propulsion has to be considered. As studied in the past years, hybrid distributed electric propulsion is a promising option. In this work the feasibility of a new concept aircraft, using this technology, has been studied. Two different energy sources have been used: fuel-based engines and batteries. The latter have been chosen because of their flexibility during operations and their promising improvements over the next years. The technological horizon considered in this study is 2035: thus some critical hypotheses have been made for electrical components, airframe and propulsion. Due to the uncertainty associated with these data, sensitivity analyses have been performed in order to assess the impact of technology variations. To evaluate the advantages of the proposed concept, a comparison with a conventional aircraft (EIS 2035), based on evolutions of today's technology (airframe, propulsion, aerodynamics), has been made. In the next decades, due to the cost of fuel and the increasing number of aircraft flying every day, the world of aviation will cope with more stringent environmental constraints and a traffic density increase. Both ACARE (Advisory Council for Aviation Research and Innovation in Europe) [1] and NASA 2 published their targets in terms of environmental impact within the next years. In Table 1 the noise, emissions and energy consumption reductions according to ACARE for the next years are reported: the fuel and energy consumption have to be drastically reduced to meet the 2050 goals. To achieve these objectives, disruptive changes at the aircraft level have to be made. Fostered by the progress made in the automotive industry, aeronautics found an interest in hybrid propulsion. An idea is to merge this concept and distributed propulsion, where the engines are distributed along the wing. As shown by Kirner [3] and Ko et al., [4] distributed propulsion increases performance in fuel consumption, noise, emissions and handling qualities. The resulting Distributed Electric Propulsion (DEP) technology has been applied to different aircraft configurations (such as the N+3 aircraft generation from NASA [5] ): results show a drag reduction (which leads to a lower fuel consumption) and also a better efficiency due to the aero-propulsive effects. This work presents the exploration and the sizing of a large passenger aircraft with distributed electric ducted fans, EIS 2035. The objective is to carry out a Multidisciplinary Design Analysis (MDA) in order to consider all the couplings between the disciplines: airframe, hybrid electric propulsion and aerodynamics. The aero-propulsive effects are also considered, in order to converge towards a disruptive concept. In the first part of the paper the proposed concept is described, then the propulsive chain architecture is presented, including a review of the key components and their models.
The second part is dedicated to their integration in the in-house aircraft sizing code developed by ONERA and ISAE-Supaero, identified as FAST. [START_REF] Schmollgruber | Use of a Certification Constraints Module for Aircraft Design Activities[END_REF] Then the design mission is presented, and the hypotheses for the 2035 technology horizon are discussed. Finally, the performance of the integrated design is presented, and the conclusions regarding the feasibility of such a vehicle are reported. II. Electric hybrid aircraft concept The definitive aircraft concept is shown in Fig. 1: for the modeling, OpenVSP, 7 a free tool for visualization and VLM computation, has been used. New components are added to the aircraft architecture: • Turbogenerators, which are the ensemble of a fuel-burning engine and a converter device; • Batteries (not shown in the figure), which provide electric power and are located in the cargo bay; • Electric motors and ducted fans, in the nacelles on the wing upper surface; • DC/DC and DC/AC converters (called respectively converter and inverter [START_REF] Bradley | Subsonic Ultra Green Aircraft Research: Phase II-VolumeII-Hybrid Electric Design Exploration[END_REF] ) in order to provide the current in the right mode and at the same voltage; • Cables for the current transport, including a cooling system and protections. Detailed models of each component are described in the next section. The wing-body airframe is the usual "tube and wing" configuration, and no changes have been made to that part. For the engine positions, different choices are possible: 9 upper-wing trailing-edge engines, lower-wing leading-edge engines and embedded engines. In this work they are located on the upper part of the wing, at the trailing edge. This allows some advantages in terms of blowing: from an internal project in the frame of the EU program CleanSky 2, 10 it has been estimated that the 2D maximum lift coefficient in the zone affected by the engines varies from 4 to 5. For the results presented later, the mean value of 4.5 has been used. This effect has three main advantages: • If the approach speed constraint is used for the wing sizing, the wing surface is reduced. • High-lift devices are no longer needed for takeoff and landing, leading to a lower wing weight. • It is possible to have a shorter takeoff length. Also, in previous works 9 the engines are mounted near the tip, since it is in that zone that the stall begins and a higher C L is needed. In this concept they are located on the inner part in order not to increase the structure at the tip: a twist has to be added in order to make the stall begin in the center part. The motors also provide some moment which partially balances the bending at the wing root: from an internal work at ONERA, it has been estimated that the impact of the engine position on the wing weight is 5%. Another advantage of the DEP architecture is that the EM weight is reduced. In fact, the One Engine Inoperative (OEI) condition (which is assumed as the critical case for the design) is less stringent, as shown by Steiner et al.: 11 in case of an OEI condition, the supplementary power (and thus also thrust) required by the other engines is smaller; in particular, the total power of a single motor increases with the ratio N/(N-1), N being the number of engines. It is clear that, when the number of engines increases, the effect of the OEI condition becomes negligible, and the weight of each motor decreases.
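As a numerical illustration of this OEI argument, the per-motor installed power implied by the N/(N-1) factor can be tabulated with a minimal sketch (the function name and the 28 MW total-power figure are illustrative assumptions, not the FAST implementation):

```python
def oei_sized_motor_power(total_power_w: float, n_motors: int) -> float:
    """Installed power per motor when sizing for One Engine Inoperative.

    With one motor out, the remaining (n_motors - 1) motors must still deliver
    the total required power, hence the N/(N-1) oversizing factor on P_tot/N.
    """
    if n_motors < 2:
        raise ValueError("OEI sizing needs at least two motors")
    return (total_power_w / n_motors) * n_motors / (n_motors - 1)

if __name__ == "__main__":
    P_TOT = 28e6  # illustrative total power requirement at takeoff [W]
    for n in (2, 4, 10, 40):
        per_motor = oei_sized_motor_power(P_TOT, n)
        print(f"N={n:2d}: {per_motor / 1e6:5.2f} MW per motor "
              f"(oversizing factor {n / (n - 1):.2f})")
```

With 40 motors the oversizing factor drops to about 1.03, which is the quantitative reason why the OEI case stops driving the motor weight in a DEP layout.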
The aircraft is also sized in order to have all the EM working, even if one of the energy sources is inoperative. Regarding the energy sources, the generators are located at the rear, on the fuselage, in order to reduce the pylons' wetted area and the interferences with the wing. The batteries are instead located in the cargo zone, half of them ahead of the wing and the other half behind it. This choice has been made because the center of gravity is expected to lie in the proximity of the wing, and with this arrangement the batteries do not drastically affect its position. Also, due to the batteries' location (in the cargo bay), the maximum payload is reduced since only part of the freight volume is available for luggage. A T-tail configuration has been used for the empennage. In this work it has been decided to size the aircraft in order to fly fully electric at least up to 3000 ft. The reason for this choice is that the mean atmospheric boundary layer height is about 1 km (it changes according to the atmospheric conditions), and in that region the convective effects create turbulence which mixes the air [START_REF] Li | Vertical Distribution of CO 2 in the Atmospheric Boundary Layer: Characteristics and Impact of Meteorological Variables[END_REF] (Fig. 2): when the emissions occur in this region, the quality of the air is decreased, while at greater altitudes this effect is no longer as relevant as it is in the boundary layer region. III. Propulsive chain architecture The generic scheme of the propulsive chain is shown in Fig. 3: it refers to only one half-wing, and the numbers of batteries, generators, EM and fans are not specified since they are variables to be optimized. In this kind of architecture, batteries are coupled with a turbogenerator in order to supply power: they are connected through an electrical node (called bus in this work), thus they can be defined as a serial architecture. Converters are placed after these components in order to bring the current to the right transport voltage. The total power is then transferred to the inverters, which convert the DC current to the AC current required by the electric motors. The EM work in parallel; each of them is connected to a ducted fan, which generates thrust. Since all the cables from batteries and generators are connected to the bus, from which the current is then transported to the EM, the motors are always operative, even with one energy source inoperative. The power available at each step of the chain is also specified in Fig. 3: η is the efficiency of a generic component, P V the power density, M the Mach number, z the altitude, N the number of electric motors, T the thrust and V ∞ the velocity. In the following sections, each component is detailed. Power is controlled using two different power rates, one for the batteries and another one for the generators. It is then possible to write: P_tot = δ_batt P_batt,2 + δ_gen P_gen,2 (1) having defined the battery and generator power rates as below: δ_batt = P_batt / P_batt,max (2) δ_gen = P_gen / P_gen,max (3) where P_batt,2 and P_gen,2 are defined from Fig. 3. In previous works on hybrid architectures [START_REF] Pornet | Methodology for Sizing and Performance Assessment of Hybrid Energy Aircraft[END_REF][START_REF] Cinar | Sizing, Integration and Performance Evaluation of Hybrid Electric Propulsion Subsystem Architectures[END_REF] , a hybrid factor has been defined in order to control how much of the total power had to be supplied by each source.
In this work, there is no factor splitting the power required from the two sources: with the law presented in Eq. (1) it is possible to have batteries and generators supply the maximum of their available power at the same time. The advantage offered by this approach is that at takeoff or climb (critical conditions in terms of power required), a failure of one energy source can be easily sustained: in case one of them being inoperative, it is possible to ask more power from the second source. Finally, the power required by secondary systems (such as the environmental control system, the ice protection system, lighting and so on) have to be considered too: in this work it has been decided to use the estimation done by Seresinhe and Lawson 15 for a More Electric Aircraft concept, similar to A320. A. Gas turbine generator One of the two power sources is the gas turbine generator: it is composed of a turboshaft engine connected to a generator which converts shaft power to electrical power. The turboshaft has been modeled using GSP (Gasturbine Simulation Program), a software developed at NLR. [START_REF] Visser | GSP: A Generic Object-Oriented Gas Turbine Simulation Environment[END_REF] The scheme is shown in Fig. 4. A single compressor has been used, meanwhile there are two turbines after the combustion chamber: the first is the high speed turbine, directly linked to the compressor, while the second is the low speed turbine. Since it has to produce power, the main outputs are the power and the Power Specific Fuel Consumption (PSFC), which depend on the altitude and the Mach number. The design conditions are reported in Table 2. The turboshaft engine has also been sized in order to supply enough power in case of failure of one energy source in cruise. The gas turbine is not included into the sizing process: once having obtained the curves of power and PSFC from GSP, they are provided to the software FAST and interpolated to get the value of interest. An estimation of the weight is given in the work of Burguburu et al.: [START_REF] Burguburu | Turboshaft Engine Presedesign and performance Assessment[END_REF] it is based on empirical data from a large number of existing turboshaft engines. As previously said, the converter device is mounted on the low speed turbine shaft. The only parameter used for sizing is the power to mass ratio P m , defined as the power converted per unit mass. Starting from this parameter it is possible to estimate the weight: m gen = P gen P m (4) where P gen is the power delivered to the device at the design point. B. Battery In the proposed architecture, battery is a vital component as it is a main source of power as it introduces significant weight to the entire system. There are different types of battery available (such, for example, Li-Ion or Li-S 18 ): in this work it has been decided to use a Li-Ion battery type. 
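Before detailing the battery model, the power-management law of Eqs. (1)-(3) introduced above can be summarized in a short sketch (hypothetical names and assumed power figures; a simplified illustration rather than the FAST HybridEngine module):

```python
from dataclasses import dataclass


@dataclass
class PowerSource:
    p_max_w: float  # maximum power available at this point of the chain (P_batt,2 or P_gen,2 in Fig. 3) [W]


def total_power(delta_batt: float, delta_gen: float,
                battery: PowerSource, generator: PowerSource) -> float:
    """Bus power following Eq. (1): P_tot = delta_batt * P_batt,2 + delta_gen * P_gen,2.

    delta_batt and delta_gen are the power rates of Eqs. (2)-(3), both in [0, 1].
    Because the two rates are independent, both sources can deliver their maximum
    power simultaneously, which is what makes a single-source failure tolerable.
    """
    if not (0.0 <= delta_batt <= 1.0 and 0.0 <= delta_gen <= 1.0):
        raise ValueError("power rates must lie in [0, 1]")
    return delta_batt * battery.p_max_w + delta_gen * generator.p_max_w


# Illustrative takeoff case: nominal split, then batteries lost and the generator rate raised.
batt = PowerSource(p_max_w=12e6)  # assumed figure, not a design value
gen = PowerSource(p_max_w=16e6)   # assumed figure, not a design value
print(total_power(0.7, 0.6, batt, gen) / 1e6, "MW (nominal)")
print(total_power(0.0, 1.0, batt, gen) / 1e6, "MW (batteries inoperative)")
```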
A battery is fully defined by a set of five parameters: [START_REF] Pornet | Methodology for Sizing and Performance Assessment of Hybrid Energy Aircraft[END_REF][START_REF] Lowry | Electric Vehicle Technology Explained[END_REF][START_REF] Cinar | Development of a parametric Power Generatin and Distribution Subsystem Models at Conceptual Aircraft Design Stage[END_REF] • Specific energy density E m , which represents how much energy can be stored in a battery per unit mass (in W h kg -1 ); • Energy density E V , which represents how much energy can be stored in a battery per unit volume (in W h L -1 ); • Specific power density P m , which represents how much power can be delivered per unit mass (in kW kg -1 ); • Power density P V , which represents how much power can be delivered per unit volume (in kW L -1 ); • Density (ρ), which represents the mass per unit of volume (in kg m -3 ). These variables are not independent of each others, but only three of them are necessary to compute the others ones. In this work, a battery is defined by its specific energy density, specific power density and density. Missing values are calculated as follows: E V = ρ E m (5) P V = ρ P m (6) The energy stored and the maximum power which can be delivered by the battery are then computed: E batt = E m m batt = E V V batt (7) P max,batt = P m m batt = P V V batt (8) where m batt is the battery mass and V batt the battery volume. For monitoring the state of the battery, the state of charge (SoC) has to be defined: it is the ratio between the remaining energy E at a certain time (t) and the total stored energy E batt . The complement of SoC is defined as the Depth of Discharge (DoD). SoC = E (t) E batt = 1 - E cons (t) E batt (9) DoD = E cons (t) E batt = 1 -SoC (10) Due to safety reasons, the SoC can not be under a certain limit, which in general depends on the battery type. For a Li-Ion battery the minimum limit for the SoC is 20%; therefore the following constraint will be used in the sizing process: SoC f inal = 1 -DoD f inal ≥ 0.2 (11) C. Electric motor Electric motors are the other main components of the hybrid propulsion: they convert electrical power to mechanical power. The high reliability allows to work at very high efficiency; furthermore, as opposed to traditional combustion engine, their efficiency is independent from the altitude, which represents the main advantage. [START_REF] Lowry | Electric Vehicle Technology Explained[END_REF] Performance of these electric motors is determined by their torque and rotational speed charateristics. In this work it has been decided to use AC current based motors, since they are lighter than DC current based. Electric motors have also a very high efficiency (about 0.95); inefficiencies can be caused by various factors and are of different types [START_REF] Lowry | Electric Vehicle Technology Explained[END_REF] (but a complete analysis of them is beyond the scope of this work, in which only the total efficiency is defined). The major requirement for electric motors is the power to mass ratio, defined as the power that delivered per mass unit. Once the maximum power required by the electric motor is known, it is possible to estimate its weight: m EM = P max,EM P m (12) In subsequent steps the rotational speed and the torque are computed according to the fan requirement, and the motor is then fully defined. D. DC/DC and DC/AC transformers In order to convert current within the energy chain, converters and inverters are used. 
Performance of these devices depends on their efficiency, which is around 0.9. Since the architectures of inverter and converter are similar, it is possible to compute directly their total weight with the equation m IC = P inverter N EM + P converter N gen + P converter N batt P m (13) where P m is the power to mass ratio, N EM the number of electric motors, N gen the number of generators and N batt the number of batteries. E. Cables The cables have to transport current from one device to another within the hybrid architecture. They are sized in order to carry a certain current, which must be below the maximum allowed threshold. The current, and so the sizing, depends on the voltage used for the transport. First the current which flows through a cable is computed as i = P ∆V ( 14 ) Then a check has to be done in order to be sure that value is lower than the maximum current. If it is not, more cables have to be installed; the number is computed dividing the value of current with the maximum one: N cable = i i max ( 15 ) where the square brackets represents the integer part of i imax . Finally, according to EM, generators and batteries positions, it is possible to estimate the cable length and so the weight: m cable = N cable m L L cable (16) where m L is the cable linear density. Installation and Healt Monitoring System have to be included in the weight calculation: preliminary works at ONERA 10 show an increasing in weight of 30% for the installation and of 5% for the HMS. Typical values for the cables' parameters are reported in Table 3. 21 Table 3. Values used for the cables sizing i max 360 A ∆V 2160 V m/L 1.0 kg m -1 F. Cooling system All the components have their efficiency: this means that not all the power generated by batteries or generators is converted into electrical power, but part of them is converted into heat. It consists of two different devices: heat exchanger and air cooling systems. The first are devices which surrounds the cables and artificially dissipate the power. The amount of power to dissipate is: P diss = (1 -η batt ) P batt,max N batt + (1 -η gen ) P gen,max N gen + (1 -η EM ) P EM,max N EM (17) The heat exchangers introduce a penalty in mass: in the framework of an internal project at ONERA, the penalty has been estimated to be 1t. 10 This value is based also on the work of Anton. [START_REF] Anton | High-output Motor Technology for Hybrid-Electric Aircraft[END_REF] The air cooling system is used instead to have cold air that circulates into the system: it consists of some air inlet placed on the fuselage. It does not introduce weight but a penalty on the drag coefficient: [START_REF] Hoerner | Fluid-Dynamic Drag -Theoretical, Experimental and Statistical Information[END_REF] in the same internal project mentioned, the impact has been estimated to be of 5% on the C D . Penalties due to the cooling system are summed up in Table 4. Table 4. Penalties due to the cooling system considered in this work (estimation from an internal work at ONERA) Mass +1 t C D +5 % G. Fan The ducted fans are the last devices in the energy chain: they are directly connected to the electric motor. The design point for the preliminary sizing has been chosen as the beginning of the cruise. As the fans are directly connected to electric motors, the torque and rotational speeds have to be the same for both: if the motor torque is too small, a resize of the fan has to be carried out, or a gearbox must be added. In Fig. 5 the scheme of a ducted electric fan is shown, meanwhile in Fig. 
6 a 3D rendering is given, where the elements are drawn separately in order to understand the architecture. Due to technological limits, there is a minimum for the fan diameter: if it is too small, it is not possible to design the fan. In order to avoid this situation, the operating Mach number should not exceed the value of 0.7. In Appendix B the fan sizing, based on isoentropic equations, is fully described. Since the air passes through the fan, the wetted area of the duct is relevant for aerodynamic calculations. It is computed considering the total area of the external duct, the disk created by the actuator, the central duct and the total area of the electric motor. The FAST code has been modified in order to consider also the hybrid architecture sizing. New modules have been added, into the new category "HybridDEP"; here below there is a rapid description: • Battery.py: this module contains the battery model and the functions used for computing the actual SoC, the weight, the volume, the energy and the maximum power available. Battery sizing is also included. • Cable.py: this module contains the definition of the cable function used for computing the maximum current, the diameter and the weight. • DuctedFan.py: this module contains the functions used for sizing the ducted fan and computing the power required for the condition of interest. • ElectricMotor.py: this module contains the definition of the electric motor and the function used for its sizing. • HybridEngine.py: this module is the main module for the propulsion, since it calls the components' modules for computing both the power and the thrust (as in Fig. 3) according to the actual requirement during flight phase and the PSFC to estimate the fuel consumption. The standard mass breakdown module using the French norm AIR 2001/D 24 has also been modified. Since it considers only a classical "tube and wing" configuration, there are no references on the hybrid architecture, such as batteries or cables. Thus five new elements have been added in the propulsion category. The detailed structure is presented in appendix A (Table 16). Finally, two new sections in the .xml input file have been added: one contains all the parameters for the hybrid distributed electric propulsion, while the other contains an estimation of the secondary systems power. The FAST workflow is presented in Fig. 7: since from a method's point of view it can be considered as a MDA, an eXtended Design Strucutre Matrix (xDSM) scheme [START_REF] Lambe | Extensions to the Design Structure Matrix for the Description of Multidiplinary Design, Analysis, and Optimization Processes[END_REF] has been used to describe the main process. Under this format, each rectangular box represents an analysis (e.g. a function or computational code). Input variables related to the analysis are placed vertically while outputs are placed horizontally. Thick gray lines represent data dependencies whereas thin black lines represent process connections. The order of execution is established by the component number. Finally, the superscript notation defined by Lambe et al. [START_REF] Lambe | Extensions to the Design Structure Matrix for the Description of Multidiplinary Design, Analysis, and Optimization Processes[END_REF] has been used. Algorithm 1 details the different steps based on the input given by the work of Pornet et al. 
[START_REF] Pornet | Methodology for Sizing and Performance Assessment of Hybrid Energy Aircraft[END_REF] and Cinar et al., [START_REF] Cinar | Sizing, Integration and Performance Evaluation of Hybrid Electric Propulsion Subsystem Architectures[END_REF] which describe a sizing process for electric aircrafts. Respect to the original version, a new analysis is added at step 2; all the other blocks have been indirectly modified due to the presence of the new propulsive architecture. 2:Battery sizing. Batteries are sized respect to two different criteria: the power at the takeoff and the energy consumed; the latter is divided by 0.8 in order to consider the 20% safety margin of SoC. Using Eq. ( 7) and Eq. ( 8), battery volume is computed, then the maximum value is taken. Finally, using the same equations, power and energy available are defined. At the first iteration the initial value of volume from step 0 is used, since there is no information about the energy consumption. 3: Wing sizing. Wing area is sized with respect to fuel capacity and approach speed. As for the battery, at the first iteration no wing sizing is performed, as there is no information about the fuel consumption. 4: Compute initial geometry. 5: Resize the geometry in order to match the center of gravity and stability constraints. 6: Aerodynamic calculation. 7: Mass breakdown calculation. For the DEP components, the weight estimation is based according to values from previous loop; 8: Design mission simulation. The mission includes: take off, initial climb (up to 1500ft), climb to cruise altitude, cruise, descent, alternate flight of 200NM, 45 minutes of holding, landing and taxi in. For the cruise two approaches are possible (step and cruise climb); more details will be provided in section V. For the Hybrid-Electric concept, the balance equation is written in terms of power instead of thrust; at each step the code computes the fuel and energy consumption and updates the aircraft weight and battery SoC. 9: Update the MTOW. 10: Check if the convergence criteria is satisfied; if not proceed to next iteration. The convergence is reached when the relative difference between the Operating Weight Empty (OWE) computed at step 7 and step 8 is less than 0.05%. If this condition is satisfied, the code check that the mission fuel is lower than the maximum fuel that can be stored, as that the battery SoC is greater than 20%: if these conditions are fulfilled, the sizing loop is over, otherwise it proceeds to next iteration. until 10 → 1: MDA has converged V. Design mission and sizing parameters A. Design mission definition In the FAST code, the design mission is made of two blocks: the first one represents the mission, and the second one is used for computing the reserve, according to certification rules. [START_REF] Schmollgruber | Use of a Certification Constraints Module for Aircraft Design Activities[END_REF] In particular, the reserve fuel is computed considering an alternate flight of 200NM and 45 minutes of holding. For the key segment of the mission (cruise), two different approaches can be selected: the step climb mission and the cruise climb mission. [START_REF] Bradley | Subsonic Ultra Green Aircraft Research: Phase II-VolumeII-Hybrid Electric Design Exploration[END_REF] In the first case the cruise starts at the optimal altitude (computed by the code), which is kept constant until the code computes is more efficient to climb at a higher level with a step climb of 2000ft. 
In the second case the aircraft is always at the point of maximum efficiency and at the same Mach number: to keep these conditions the altitude is increased at each time step. In terms of computational costs, the cruise climb option is faster than the step climb, since the code does not check at each iteration if it is convenient to perform the step climb or not. L (0) , M T OW (0) , M LW (0) , M ZF W (0) , S (0) In order to assess the difference between the two approaches, the case of a cruise climb is performed, using the TLAR of the CeRAS aircraft 27 (2750NM of range for 150 passengers). The results have been compared with that reported by Schmollgruber et al.: 6 the differences shown in Table 5 are negligible. Thus in the following sizing loops presented in this paper the cruise climb option is always used. Since an hybrid propulsion system is used, the degree of hybridization over the entire mission has to be defined: recalling Eq. ( 1), the battery power rate defines the use of the battery for each segment. Two cases are possible: the power is not balanced (i.e. for takeoff and climb) and it is balanced (i.e. in cruise). In the first case the battery and generator power rates are given in input, meanwhile in the second case only the percentage of power required by the battery is given in input, then the two power rates are computed. Batteries are never used in cruise, since the energy consumption leads to an increasing in weight that is not affordable for the aircraft. In order to be sure to use the batteries in the most efficient way, the SoC at the end of the mission has to be 20%: if at the end of the sizing the SoC is greater than this value, the degree of hybridization is manually changed until it is 20%. B. Sizing parameters For the sizing a certain number of TLAR have to be defined into the .xml input file. In Table 6 the design parameters for the hybrid aircraft are reported: the number of passengers is the same of an aircraft A320-type (150); the range varies from 800 to 1600NM, meanwhile the Mach number is 0.7, lower than a traditional aircraft. As said, the value of 0.7 has been chosen in order to reach a fan diameter that would be too small, as there is a limit due to technology level. About the propulsive architecture, 2 generators, 4 batteries and 40 engines have been considered; finally the minimum power required at takeoff is fixed to 28MW. After having defined the TLAR, the parameters for the electrical components have to be chosen. As already mentioned, the focus is on the 2035 horizon: in bibliography there are different values for the chosen technological horizon (see for example the works of Lowry and Larminie, [START_REF] Lowry | Electric Vehicle Technology Explained[END_REF] Bradley and Droney, [START_REF] Bradley | Subsonic Ultra Green Aircraft Research: Phase II-VolumeII-Hybrid Electric Design Exploration[END_REF] Belleville, [START_REF] Belleville | Simple Hybrid Propulsion Model for Hybrid Aircraft Design Space Exploration[END_REF] Friedrich et al., [START_REF] Friedrich | Hybrid-Electric Propulsion for Aircraft[END_REF] Delahye, [START_REF] Delhaye | Electrical Technologies for Aviation of the Future, Airbus, 2015 32 HASTECS project: programme de recherche aéronautique européen mené par le Laplace[END_REF] as the HASTECS project 32 and the estimation of Fraunhofer Institute 18 ). 
All the data found in the literature are different, leading to an uncertainty: after an internal discussion at ONERA and ISAE-Supaero, the technology table reported in Table 7 has been defined. However, due to the aforementioned uncertainty in defining the technological horizon, sensitivity analyses will be shown later with variations of the parameters in order to assess their effect. Finally, due to the development of new materials, a reduction in weight has to be considered (Table 8). The wing weight reduction is valid only for a conventional aircraft: it is not possible to use composites because of the level of current that flows in the cables in the wing. Thus, for the hybrid concept, no wing weight reduction is considered. In the next section, where results are presented, the hybrid aircraft is compared with a conventional aircraft: the latter has the same TLAR reported in Table 6 and the weight reduction reported in Table 8; a maximum aerodynamic efficiency of 19 is assumed, and the engine model is based on the CeRAS engine, 27 with a SFC reduction of 20%. VI. Preliminary results for the Hybrid-Electric Aircraft concept As stated in the previous section (Table 6), the range is not fixed: in fact it is a variable to be explored in order to find the break-even point up to which the hybrid concept is advantageous with respect to a traditional aircraft. The first parametric study shows the fuel consumption with respect to the range: the result is presented in Fig. 8. For both configurations (conventional and hybrid) the fuel consumption increases with the range, but it is possible to note a value (about 1200 NM) for which the two configurations have the same fuel consumption. Below that value the hybrid configuration is advantageous with respect to the traditional one. This effect is due to the battery sizing: below the break-even value, the sizing criterion is the power requirement at takeoff (which is 28 MW), which means that the energy available is the same. When the range is decreased, the MTOW is decreased too, and this leads to a final SoC greater than 0.20: it is then possible to change the degree of hybridization and save more fuel. On the contrary, when the range is higher than 1200 NM, the energy requirement becomes the most important criterion: batteries are resized and the increase in weight makes the hybrid architecture worse than the traditional one. For the rest of the work presented here, the design range considered is 1200 NM, with the mission hybridization reported in Table 9: since the battery is sized according to the power requirements, there is more energy than needed for the fully electric mission up to 3000 ft; for this reason the entire climb segment is fully electric, with a battery power rate of 70%. Table 10 shows the comparison for the two different configurations. In order to have a unique parameter for comparison, the Payload Fuel Energy Efficiency (PFEE) has been used as figure of merit. [START_REF] Hileman | Payload Fuel Energy Efficiency as a Metric for Aviation Environmental Performance[END_REF] This parameter is defined as the payload times the range divided by the energy consumed: PFEE = (PL)(Range) / E_cons (18) The PFEE has been used since it combines the payload carried over a certain range and the energy consumed for the mission. The PFEE is similar for both configurations, which confirms that for the chosen range the hybrid and the traditional aircraft are comparable (about 98 kg km MJ-1), but for the first concept there are no emissions close to the ground.
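The figure of merit of Eq. (18) is straightforward to evaluate; a minimal sketch is given below (all numbers are illustrative assumptions chosen only to reproduce the order of magnitude quoted above, not the values of the sized aircraft):

```python
def pfee_kg_km_per_mj(payload_kg: float, range_km: float, energy_mj: float) -> float:
    """Payload Fuel Energy Efficiency, Eq. (18): PFEE = payload * range / energy consumed."""
    return payload_kg * range_km / energy_mj


# Roughly 150 passengers at an assumed 95 kg each over 1200 NM; the energy figure
# is an assumed value picked so that the result lands near ~98 kg km/MJ.
payload = 150 * 95.0           # kg, illustrative
mission_range = 1200 * 1.852   # NM converted to km
energy = 323_000.0             # MJ, illustrative assumption
print(f"PFEE = {pfee_kg_km_per_mj(payload, mission_range, energy):.0f} kg km/MJ")
```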
The OEI condition is already included in the FAST calculation, but is not critical in the design. The cases in which one generator or two batteries are inoperative have been then considered as additional failure cases: the hypothesis made is that the failure occurs during takeoff. As explained in section II and section III, the aircraft is designed in order to have all the EM operative, even if one energy source is inoperative. The fuel breakdown for these cases is reported in Table 11. In case one generator is inoperative, there are no differences in the takeoff and climb phase, since they are fully electric; then it is still possible to conclude the cruise, even if the fuel consumption is higher since more power is required to a single generator and the PSFC increases. For the reserve phase, the aircraft is not able to climb again for an alternate flight as the power requirement for this segment is higher than the maximum power of a generator, and only 45 minutes of holding have been considered. In case two batteries are out, instead, it is not possible to have a fully electric segment and the help of the generator is required at each phase. No great differences are shown for the reserve calculation, as for that phase only generators are used also in the baseline, but the fuel increases for the design flight. This study has been performed in order to understand the behavior in case of failure, but it does not consider yet the possible certification requirements (i.e. that if one source is inoperative the aircraft has to climb to 500ft and then lands again); a detailed work has to be done in the future. VII. Exploration of the design space The technology data for 2035 are affected by uncertainty. In order to assess the effects of a different technological level on the feasibility of the proposed concept, an exploration of the design space has been performed in this section. Battery, generator, electric motor and gasturbine technologies variation have been considered, as the effects of the engines number (to assess the DEP advantages) and the maximum 2D lift coefficient variation. Table 12 reports the minimum and maximum values for the parameters of the hybrid chain components. Some assumptions have been made: • The TLAR used are the same used in the previous section (Table 6). • The effect of each component has been studied separately, changing one component's technology and keeping all the others constant and equal to the baseline (Table 7, and reported also in Table 12 for sake of clarity). • The mission hybridization is the same used for the baseline (Table 9): in case at the end of a simulation the SoC is greater than 0.20 as required by equation (11), it has been changed in order to use all the available battery energy. Changes are always reported. For all the studies three key parameters have been considered as output: the MTOW, the wing area S w and the fuel consumption FC. They have been chosen since they are the most important parameters in a design process. The results are presented in Fig. 10: each column represents the impact of one component's technology variation for the key parameters. The same scale has been used, in order to better understand the effect of a variation on a single output (MTOW, S w and FC). In the next sections the impact of each parameter is described separately; for sake of clarity in Appendix C all the graphs are reported (from Fig. 14 to Fig. 19), both in the common and real scales. A. 
Impact of battery technology variation In this section the variation of battery technology in the range defined in Table 12 has been studied; results are shown in the first colume of Fig. 10. The battery technology affects all the parameters considered: between the minimum and the maximum value, the MTOW is reduced of about 20%, the wing area of about 15% and the fuel consumption of about 18%. It is possible to note that from the first and the second point (from an energy density of 350W h kg -1 to 500W h kg -1 ) the MTOW is reduced more than in the second segment: this happens because, for low values of E m batt batteries are resized accorgind to the energy requirement, leading to a divergence into the MTOW. In the last case ( E m batt =700W h kg -1 ), instead, the sizing criteria is the power at the takeoff, and since the MTOW is reduced of 8% with respect to the baseline, there is more energy to use in batteries and the degree of hybridization for the alternate climb is changed respect to what has been used in Table 9: -δ gen al,climb = 0.0 -δ batt al,climb = 0.65 Thus, for the last point also the alternate climb is fully electric, leading to a major gain in fuel consumption. B. Impact of generator technology variation The generator technology is varied into the range identified in Table 12. Minimum and maximum values correspond to a variation of ±50% respect to the design. The results correspond to the second column of Fig. 10. The effects on the MTOW, wing area and fuel consumption are smaller, compared to that of the battery technology: the MTOW varies of about 4%, meanwhile the wing area is almost constant. The major effect is on the FC (about 7%): when the weight of the generator is decreased, the nacelle is smaller, and thus there is a little gain on the efficiency, which affects the FC. C. Impact of electric motor technology variation The variation of electric motor technology has then been studied, within the range presented in Table 12: as for the generator, minimum and maximum value of power to mass ratio have been defined considering a variation of ±50% with respect to power to mass ratio base value. Results are shown in the third column of Fig. 10: both on the MTOW and the fuel consumption there is a gain of about 7%, meanwhile the effect on the wing area is not relevant. The effects are greater than that of the generator technology variation, but still smaller than that of the battery technology variation. D. Impact of engines number variation In this section the effect of engines number has been considered: it varies from 10 to 40 (as reported in Table 12). This parameter affects the maximum lift coefficient. In fact, as said earlier, the surface interested by the blowing has a maximum C l of about 4.5; the wing maximum C L is computed as: C L,wingmax = C l,max S blow S w (19) where S blow is the surface interested by blowing, shown also in Fig. 9 in red. From Eq. ( 19) it can be deduced that the maximum C L is reduced when S blow is reduced, that is when the engines number is smaller. Results of this study correspond to the fourth column of Fig. 10: there are no relevant effect on the MTOW and FC, meanwhile the wing area changes of about the 20%. This is explained because the wing area is sized according the approach speed constraint, and when there are less engines the maximum C L is smalled, and this leads to a greater wing area to sustain the flight. E. 
E. Impact of maximum 2D lift coefficient effects
The effect of the maximum 2D lift coefficient has been considered: as already said, it is estimated to vary between 4 and 5. Results are shown in the fifth column of Fig. 10. The only effect is on the wing area (which is reduced by 15%): this means that the main advantage of a higher C l is not in the FC, but in the possibility of a shorter takeoff field length and a smaller power requirement at takeoff (a parameter that affects the battery sizing).
F. Impact of PSFC variation
Finally, the effects of PSFC variation have been considered: the value is first decreased by 10% and then increased by the same amount with respect to the baseline (as in Table 12). Results are shown in the last column of Fig. 10: the main effect is on the FC, which varies by about 20% over the range considered. This result is expected, since the PSFC mainly depends on the combustion process efficiency. The effect on the MTOW and the wing area is instead negligible (less than 1%). These analyses show that, when the technologies improve, the performances generally improve (MTOW and fuel consumption are reduced), while the study on the number of engines clearly shows the advantage of using a DEP architecture. It can also be noted that a linear change in a technology does not imply a linear change in the results: this happens because a reduction in weight leads to a reduction in energy consumption, and thus to a reduction in battery weight (if the battery is sized according to energy) or in fuel consumption (if it is sized according to power at takeoff, since more energy is available). In order to better understand the effects of each parameter, a sensitivity analysis has been performed, based on the method of sparse polynomial chaos expansions, with a design of experiments made of 800 points. For its generation, a Latin Hypercube Sampling has been used. The idea is to identify the sensitivity index of the three key parameters (MTOW, S w and FC) with respect to the design variables, in order to understand the impact of each variable on a given output. It is also possible to identify the interactions between the variables, if any.
VIII. Sensitivity studies with respect to technological levels
The analysis performed is based on the sparse polynomial chaos expansions method for computing global sensitivity indices. [START_REF] Blatman | Efficient Computational of Global Sensitivity Indices Using Sparse Polynopmial Chaos Expansions[END_REF][START_REF] Dubreuil | Construction of Bootstrap Confidence Intervals on Sensitivity Indices Computed by Polynomial Chaos Expansion[END_REF] This method has been selected since it allows the computation of the sensitivity (Sobol) indices as the Monte Carlo method does, but requires fewer points for the estimation. In this section it is assumed that Y = M(X), where X = (X_i), i = 1, ..., n (n being the number of design variables), is a random vector modeling the input parameters (independent and uniformly distributed) and M is the numerical solver used to compute a scalar quantity of interest Y (the sizing tool FAST in this work). Assuming that Y is a second-order random variable, it can be shown 36 that

Y = Σ_{i=0}^{∞} C_i φ_i(X)    (20)

where {φ_i}_{i∈N} is a polynomial basis orthogonal with respect to the probability density function (pdf) of X and the C_i are unknown coefficients.
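Before detailing how the sparse basis is built, the 800-point design of experiments mentioned above can be generated with a few lines of code. The snippet below uses SciPy's quasi-Monte Carlo module; the variable names and bounds are illustrative placeholders standing in for the actual ranges of Table 13.

# Latin Hypercube design of experiments for the sensitivity study (800 points).
# The bounds below are placeholders; the real ones are those of Table 13.
import numpy as np
from scipy.stats import qmc

bounds = {                       # illustrative (lower, upper) bounds per variable
    "battery_Wh_per_kg": (475.0, 525.0),
    "battery_efficiency": (0.93, 0.97),
    "generator_kW_per_kg": (9.5, 10.5),
    "em_kW_per_kg": (14.0, 16.0),
    "em_efficiency": (0.94, 0.98),
    "cl_max_2d": (4.3, 4.7),
    "psfc_factor": (0.95, 1.05),
}
names = list(bounds)
lower = np.array([bounds[n][0] for n in names])
upper = np.array([bounds[n][1] for n in names])

sampler = qmc.LatinHypercube(d=len(names), seed=0)
doe_unit = sampler.random(n=800)          # 800 points in the unit hypercube
doe = qmc.scale(doe_unit, lower, upper)   # scaled to the physical ranges

# Each row of `doe` is one input vector X to feed to the solver M(X), i.e. FAST.
print(doe.shape)                          # (800, 7)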
Sparse polynomial chaos consists in the construction of a sparse polynomial basis {φ_α}_{α∈A}, where α = (α_1, ..., α_n) is a multi-index used to identify the polynomial acting with the power α_i on the variable X_i and A is a set of indices α. In practice A is a subset of the set B which contains all the indices α up to a total degree d, i.e. card(B) = (d+n)!/(d! n!). The objective of the sparse approach is to find an accurate polynomial basis {φ_α}_{α∈A} such that card(A) ≪ card(B). This is achieved by Least Angle Regression, i.e. the unknown coefficients C_α are computed by iteratively solving a mean-square problem and selecting, at each iteration, the polynomial most correlated with the residual. [START_REF] Blatman | Adaptive Sparse Polynomial Chaos Expansion Based on Least Angle Regression[END_REF] Finally, the following approximation is deduced:

Y ≈ Ŷ = Σ_{α∈A} C_α φ_α(X)    (21)

Due to the orthogonality of the polynomial basis {φ_α}_{α∈A} it is possible to write:

E[Ŷ] = C_0,   Var[Ŷ] = Σ_{α∈A\{0}} C_α² E[φ_α²(X)]    (22)

where E[Ŷ] is the mean value and Var[Ŷ] is the variance of the response variable Ŷ. Sudret [START_REF] Sudret | Global Sensitivity Analysis Using Polynomial Chaos Expansions[END_REF] identifies the polynomial chaos expansion with the ANOVA decomposition, from which it is possible to show that the first-order sensitivity index of the variable X_i is

Ŝ_i = Σ_{α∈L_i} C_α² E[φ_α²(X)] / Var[Ŷ]    (23)

where L_i = {α ∈ A : α_i ≠ 0 and α_j = 0 ∀ j ≠ i}; that is, only the polynomials acting exclusively on variable X_i are considered. The total sensitivity index can also be computed:

Ŝ_Ti = Σ_{α∈L_i^+} C_α² E[φ_α²(X)] / Var[Ŷ]    (24)

where L_i^+ = {α ∈ A : α_i ≠ 0}; that is, all the polynomials acting on the variable X_i are considered (which means that all the variance caused by its interactions, of any order, with the other input variables is included).

Figure 10. Parametric analysis results: each column represents the effect of the variation of a component's technology, keeping all the others constant. The first column represents the impact of the battery, the second column the impact of the generator, the third column the impact of the electric motors, the fourth the impact of the number of engines, the fifth the impact of the maximum 2D lift coefficient, and the sixth the effect of the PSFC reduction. The outputs considered are the MTOW, the wing area (Sw) and the fuel consumption (FC).

The idea is to determine the sensitivity indices of the MTOW, the S_w and the FC with respect to the component technologies used in FAST. The same parameters considered in Table 12 have been used, except for the battery specific power density (which from the table can be estimated to be 4 times the specific energy density) and the number of engines (since this method only considers continuous variables). For each variable it is possible to define the coefficient of variation CV:

CV = σ/µ    (25)

where σ = (x_max − x_min)/√12 is the standard deviation and µ = (x_max + x_min)/2 the mean value of a uniformly distributed variable. It has been decided to work keeping the CV constant for each variable: once the mean value is fixed, it is possible to deduce the minimum and maximum values of the variation range. In Table 13 the mean values and the range of variation for each parameter are reported. It has to be noted that, compared to Table 12, the range of variation is smaller.
Table 13. Mean values, minimum and maximum values for the parameters considered for the sensitivity analysis (with CV = 0.05 kept constant for all the variables).
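Once a sparse expansion of the form of Eq. (21) is available, Eqs. (22)-(24) reduce to simple sums over the retained multi-indices. The sketch below assumes the coefficients C_α and the norms E[φ_α²] have already been obtained (in the study, by Least Angle Regression on the 800-point DoE); the multi-indices and coefficient values shown are arbitrary placeholders used only to make the example run.

# First-order and total Sobol indices from a sparse PCE, following Eqs. (22)-(24).
# alphas, coeffs and norms are placeholders: in practice they come from the
# Least Angle Regression fit of the expansion of Eq. (21).
import numpy as np

n_vars = 3
alphas = np.array([            # multi-indices alpha = (alpha_1, ..., alpha_n)
    [0, 0, 0],                 # constant term
    [1, 0, 0], [2, 0, 0],      # polynomials acting only on X_1
    [0, 1, 0],                 # only on X_2
    [0, 0, 1],                 # only on X_3
    [1, 1, 0],                 # interaction X_1-X_2
])
coeffs = np.array([10.0, 2.0, 0.5, 1.0, 0.3, 0.2])   # C_alpha (placeholders)
norms = np.ones(len(alphas))                          # E[phi_alpha^2] (orthonormal basis)

contrib = coeffs**2 * norms
variance = contrib[1:].sum()                          # Var[Y^], Eq. (22), alpha != 0

S = np.zeros(n_vars)      # first-order indices, Eq. (23)
S_T = np.zeros(n_vars)    # total indices, Eq. (24)
for i in range(n_vars):
    only_i = (alphas[:, i] > 0) & (np.delete(alphas, i, axis=1) == 0).all(axis=1)
    any_i = alphas[:, i] > 0
    S[i] = contrib[only_i].sum() / variance
    S_T[i] = contrib[any_i].sum() / variance

print("first-order:", S.round(3), " total:", S_T.round(3))

The observation made below, that the first-order indices of Tables 14 and 15 sum to roughly one, corresponds in this notation to S.sum() being close to 1, i.e. negligible interaction terms.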
A database of 800 points has been generated for the experiment (via a Latin Hypercube Sampling). The coefficient of variation used is 0.05 (which means that each variable varies by 5% between the minimum and maximum value). Results are shown in Table 14 and Fig. 11. The following conclusions can be deduced:
• MTOW is mostly affected by the battery technology (which has a sensitivity index of 0.86).
• The driving parameter for the FC is the PSFC reduction (sensitivity index of 0.64), but there is also an effect of the battery, due to the fact that when it is resized the MTOW increases and so does the FC. The EM efficiency also has an effect: recalling the propulsive chain (Fig. 3), this parameter regulates the power required from the generators, which affects the FC (with the same PSFC, more power means more fuel burnt).
• The wing area is finally driven by the maximum 2D lift coefficient (sensitivity index of 0.84); for this parameter too there is a small effect of the battery technology, due to the fact that the sizing criterion for the wing is the approach speed, and when the MTOW increases a greater area is needed to sustain the flight.
From Table 14 it is also possible to note that the sum of all the indices is about one (from 0.99 to 1.07): this means that all the variance of the output variables is explained, and there are no high-order interactions between the input variables. Thus a study of the total sensitivity indices (see Eq. (24)) does not provide more information than the first-order indices. Having fixed the CV, the range in which the parameters vary is smaller than the technological range established in Table 12. For this reason a second analysis has been performed, using the assumptions made in the technological table (which means that the CV is no longer the same for each parameter). A new database of 800 points has been defined. Results are presented in Table 15 and Fig. 12. Compared to the previous case, the effects are mostly due to the battery variation, except for the wing area, for which there is still an effect of the maximum lift coefficient, even if it is reduced. This means that, as long as the uncertainty in battery technology remains as hypothesized, there is no gain in improving the technological level of the other components, since the sizing will be affected mostly by the battery parameters. In conclusion, with the current level of fidelity used in FAST, it is possible to consider the effects of the technology variation; results show that the main driver for the design process is the battery.
IX. Conclusion & future perspectives
In this work the feasibility of a large passenger hybrid aircraft has been studied, in which a set of batteries and generators work in synergy in order to supply power. The proposed concept is based on a distributed electric propulsion architecture, in which a certain number of ducted fans located along the wing provide the necessary thrust. A focus has been made on the advantages of such an architecture (engine weight reduction due to the less stringent OEI condition, blowing effects which increase the maximum lift coefficient). Some effort has also been put into the modeling of the electrical components and into the description of the propulsive energy chain. The technological hypotheses made refer to a 2035 horizon. All these aspects have been coded into the FAST sizing tool and the modifications made have been presented.
The sizing tool is based on empirical equations and low fidelity tool: its level of fidelity can be classified as low. Results show that the hybrid concept has a potential gain up to a certain range, after which the batteries weight become so large that the fuel consumption is increased compared with an aircraft with conventional engines, for the same technological horizon. Once that the baseline has been assessed, two failure cases (for batteries and generator) have been studied in order to understand if the aircraft can prevent the partial loss of one power source. A major investigation in the failure cases has to be considered in the future, according to new certification that could require to not be able to fullfill all the mission: in that case particular attention has to be put into the maximum power can be lost. Also, in the proposed concept the assumption that all electric motors work even if there is a energy source loss has been made: the case in which a loss of a energy source lead to a loss of a certain number of engines (which affects the maximum wing lift coefficient) has to be considered too. However, the scenarios considered in this work are conservative in that sense. Due to the uncertainity in the data for the 2035 horizon, an exploration of the design space, with the technology table available, and sensitivity analyses have been performed. The tradeoff shows that the main parameter for the design process is the battery technology, with the PSFC reduction and maximum 2D lift coefficient having minor effects on the FC and wing area. The conclusion of this analyses is that, until the uncertainty into the battery technology holds, an improving in others components' technologies does not affect the results in a relevant way. From an analysis point of view, FAST performs a MDA: this means that it reaches a viable aircraft, which is not necessary the optimum one (respect to fuel consumption). Next step is to bring the sizing loop here described into a MDO framework. This work can be divided into different phases: • First step is to choose a MDO framework and include the sizing loop into an optimization loop, in order to find the set of TLAR and hybridization degree which minimize the energy and fuel consumption. A suitable choice for the MDO framework could be OpenMDAO, an open software developed by NASA Glenn Research Centre, in collaboration with University of Michigan. 39 • FAST is a tool based on a low fidelity level. A second step is then to study different fidelity levels in FAST in order to assess the difference in results using multifidelity tools. These first two steps considering different scenarios only regarding the battery technology. • Finally, in order to better understand the effect of the technology level, a MDO formulation using uncertainty quantification could be derived. [START_REF] Brevault | Decoupled MDO formulation for interdisciplinary coupling satisfaction under uncertainty[END_REF] A. Mass breakdown standard As mentioned in section IV, the mass breakdown standard used in FAST is based on the French norm AIR 2001/D. The detailed mass breakdown is reported in Table 16: the aircraft has been divided into five categories: airframe, propulsion, systems and fixed installation, operational items and crew, plus fuel weight and payload. Each category has been divided into other subsections, one for each component, as clearly shown in the table. In category B, the sections B4, B5, B6, B7 and B8 have been added in order to consider the hybrid architecture too. 
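As a small illustration of how the breakdown of Table 16 can be exploited for bookkeeping in the sizing loop, the snippet below sums the categories into an operating empty weight and an MTOW. The category labels follow Table 16; every numerical value is an arbitrary placeholder, not a result of the study.

# Bookkeeping sketch for the AIR 2001/D mass breakdown of Table 16 (values are placeholders).
airframe = {"A1 wing": 8000.0, "A2 fuselage": 9000.0, "A5 landing gear": 2500.0}
propulsion = {"B1 engines": 4000.0, "B5 batteries": 9000.0, "B6 generators": 2000.0}
systems = {"C1 power systems": 1500.0, "C2 life support": 2500.0}
operational_items = 1200.0   # category D
crew = 500.0                 # category E

owe = (sum(airframe.values()) + sum(propulsion.values()) + sum(systems.values())
       + operational_items + crew)                    # categories A-E
payload = 150 * 95.0                                  # 150 pax, placeholder unit mass
mission_fuel = 7000.0                                 # placeholder (cf. Table 11)

mtow = owe + payload + mission_fuel                   # categories A-G
print(f"OWE = {owe:,.0f} kg, MTOW = {mtow:,.0f} kg")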
B. Methodology of preliminary design of a ducted fan In this section the methodology used for the sizing of a ducted fan is explained. The scheme of the fan is shown in Fig. 13. Knowing the operating condition and the compressor ratio, it is possible to deduce the power required and then the size of the inlet and outlet area. The input for the model are: -Gas constants for the air: γ=1.4 and R=287J kg -1 K -1 ; -Mach number in flight M 0 -The non-dimensional thrust coefficient of one fan, defined as C T = T γ 2 p s0 M 2 0 S ref It has to be noted that the thrust coefficient is referred to the thrust required by a single ducted fan, and not the total thrust. The process is described below. 1. The first step is to compute the total pressure and temperature at the inlet, using the de Saint-Venant relations: p t0 = p s0 1 + γ -1 γ M 2 0 γ γ-1 θ t0 = θ s0 1 + γ -1 γ M 2 0 2. Then it is possible to compute the Mach number at the exit of the nozzle: M 3f = 2 γ -1 1 + γ -1 γ M 2 0 FPR γ-1 γ -1 = f (M 0 , FPR) This relation is obtained considering the nozzle adapted, that is the pressure at the exit of the nozzle is equal to the ambient pressure (p 3f = p s0 ). It is also possible to compute the velocity ratio β as follows: β = V 3f V 0 = M 3f M 0 θ 3f θ 0 = M 3f M 0 FPR γ-1 γ 1 + γ-1 2 M 2 0 1 + γ-1 2 M 2 3f = f (M 0 , FPR) η f being the polytropic efficiency of the fan. This value is introduced considering the ratio between the total pressure and the total temperature through the fan: θ t3f θ t0 = p t3f p t0 γ-1 γη f If η f =1 the compression is isentropic. In practice it is possible to compute the polytropic efficiency with the semiempirical relation: η f = 0.98 -0.08 (FPR -1) which take into account the effect of the FPR: the higher is, the less the compression is efficient. 3. At this stage it is possible to compute the nozzle exit area: S 3f S ref = C T 2 FPR γ(1-η f )-1 γη f 1 + γ-1 2 M 2 0 1 + γ-1 2 M 2 3f 1 γ-1 1 β 2 -β = f (M 0 , FPR, C T ) Finally, supposing the section circular, it is possible to deduce the diameter: D 3f = 2 A 3f π 4. At this step it is possible to compute the mass flow and then the power required by the fan. The mass flow which exits from the nozzle is: Knowing then the ratio between the hub and tip radius σ, it is possible to compute the fan radius: ṁ = p 3f M 3f S 3f γ Rθ 3f = p 3f M 3f S 3f γ Rθ t3f 1 + γ -1 2 M 2 r f an = S f an π(1 -σ 2 ) 6. Once the fan is sized, it is possible to deduce the rotational velocity and the torque. In order to do that, a tip velocity has to be defined: this value is determined by aerodynamic criteria and it allows to obtain the polytropic fan efficiency desired. Some values are summed up in the table below: it is possible to interpolate the data for different values of FPR. The process just described has an error estimated of about 10%. It is still valid for off design conditions, the only difference is that a different value of FPR has to be found, in order to provide the same S 3f . In practice a lower FPR corresponds to a lower RPM on a real fixed-pitch fan. This is automatically done in the code. C. Parametric analysis results In this section the detailed results for the design space exploration (presented in Section VII) are reported. As explained, each study refers to a variation of only one technology, keeping all the others constant to the base values (Table 12). The results are shown in a real and a normalized (the same used in Fig. 10), in order to better understand the overall effect of each variation. 
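As a complement to the ducted-fan procedure of Appendix B, the step-by-step relations translate into a short script. The sketch below follows steps 1-2 and the power evaluation under the stated assumptions (adapted nozzle, polytropic efficiency η_f = 0.98 − 0.08(FPR − 1)); for brevity the mass flow is recovered from the thrust through the momentum balance ṁ = T/(V0(β − 1)) instead of the exit-area route of step 3, so it should be read as an illustrative variant rather than the exact FAST implementation, and the example numbers are merely plausible.

# Sketch of the ducted-fan preliminary sizing (Appendix B, steps 1-2 and power).
# Assumptions: adapted nozzle (p_3f = p_s0); mass flow from mdot = T / (V0 * (beta - 1)).
from math import sqrt

GAMMA, R_AIR = 1.4, 287.0          # gas constants for air

def fan_point(mach0, p_s0, theta_s0, fpr, thrust):
    # Step 1: total conditions at the inlet (de Saint-Venant relations)
    tau = 1.0 + 0.5 * (GAMMA - 1.0) * mach0**2
    p_t0 = p_s0 * tau ** (GAMMA / (GAMMA - 1.0))
    theta_t0 = theta_s0 * tau

    # Semi-empirical polytropic fan efficiency quoted in the text
    eta_f = 0.98 - 0.08 * (fpr - 1.0)

    # Step 2: exit Mach number for an adapted nozzle, and velocity ratio beta
    m3f = sqrt(2.0 / (GAMMA - 1.0) * (tau * fpr ** ((GAMMA - 1.0) / GAMMA) - 1.0))
    theta_t3f = theta_t0 * fpr ** ((GAMMA - 1.0) / (GAMMA * eta_f))
    theta_3f = theta_t3f / (1.0 + 0.5 * (GAMMA - 1.0) * m3f**2)
    beta = (m3f / mach0) * sqrt(theta_3f / theta_s0)

    # Mass flow from the momentum balance (illustrative shortcut), then fan power
    v0 = mach0 * sqrt(GAMMA * R_AIR * theta_s0)
    mdot = thrust / (v0 * (beta - 1.0))
    cp = GAMMA * R_AIR / (GAMMA - 1.0)
    p_fan = cp * (theta_t3f - theta_t0) * mdot        # P_fan = dH * mdot
    return {"eta_f": eta_f, "M_3f": m3f, "beta": beta,
            "mdot_kg_s": mdot, "P_fan_W": p_fan}

# Example call with cruise-like, purely illustrative numbers (one fan out of many)
print(fan_point(mach0=0.70, p_s0=22_700.0, theta_s0=217.0, fpr=1.3, thrust=1_500.0))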
Table 1 (data). Target parameter [%] for 2025 / 2035 / 2050 — Noise: -10.0 / -11.0 / -15.0; Emissions: -81.0 / -84.0 / -90.0; Fuel\Energy consumption: -49.0 / -60.0 / -75.0.
Figure 1. Hybrid aircraft concept proposed with distributed electric ducted fan - modelisation in OpenVSP 7
Figure 2. Typical evolution of atmospheric boundary layer during the day. The convective boundary layer extension is shown 12
Figure 3. Distributed electric propulsion architecture
Figure 4. Turboshaft scheme, as modeled in GSP (software developed at NLR)[START_REF] Visser | GSP: A Generic Object-Oriented Gas Turbine Simulation Environment[END_REF]
Figure 5. General scheme of a ducted electric fan with its different parts
Figure 7. FAST xDSM. Black lines represent the main workflow; thick grey lines represent the data flow, green blocks indicate an analysis, and grey and white blocks input/output data. Algorithm 1 details the MDA.
Figure 8. Fuel consumption with respect to the range variation, keeping all the other TLAR constant
Figure 9. Wing view. The red zone represents the surface interested by blowing, used for computing the maximum lift coefficient in Eq. (19)
Figure 11. Plot of sensitivity indices for MTOW, FC and Sw for the analysis considered.
Figure 12. Plot of sensitivity indices for MTOW, FC and Sw, considering the range of variation defined in Table 12
Figure 13. Scheme of a ducted fan for the model presented

With p_t3f = p_t0 FPR (from the fan pressure ratio definition) and θ_t3f = θ_t0 FPR^((γ−1)/(γη_f)), the total enthalpy variation is ΔH = c_p (θ_t3f − θ_t0) = [γR/(γ−1)] θ_t0 (FPR^((γ−1)/(γη_f)) − 1); finally the power required by the fan is P_fan = ΔH ṁ. 5. It is finally possible to compute the fan area. For aerodynamic reasons, the Mach number at the fan section is 0.65; with this assumption the area S_fan is computed from the mass flow ṁ and the total conditions p_t0, θ_t0.
Table 17 (data). FPR: 1.1 / 1.2 / 1.3 / 1.4 / 1.5 / 1.6 / 1.7 / 1.8 — V_tip [m s-1]: 200 / 230 / 290 / 330 / 370 / 390 / 400 / 400
The rotational speed, in rounds per minute, is then Ω_fan = (V_tip / r_fan)(60 / 2π) and finally the torque, in N m, is C_fan = P_fan / (Ω_fan 2π/60).

Figure 14. Parametric analyses with respect to battery technology. (E/m) is the battery specific energy density, MTOW the Maximum TakeOff Weight, Sw the wing area and FC the fuel consumption
Figure 17. Parametric analyses with respect to the number of engines. N_EM is the number of engines (10, 20 and 40), MTOW the Maximum TakeOff Weight, Sw the wing area and FC the fuel consumption
Table 1. ACARE targets for the next years[START_REF]ACARE project[END_REF]
Table 2. Turboshaft design parameters: Design Mach number 0.70; Design operating altitude 11000 m; Power at design point 13293 kW; PSFC at design point 0.22 kg kW-1 h-1
Algorithm 1 FAST algorithm. Require: Initial design parameters (TLAR). Ensure: Sized aircraft, drag polars, masses, design mission trajectory. 0: Initialize the values. Estimate weight and wing surface initial values, using methods from the ISAE & Airbus design manual, 26 as initialization of DEP components. repeat 1: Initialize the loop.
Table 5. Comparison between the step climb and the cruise climb approaches, using the CeRAS aircraft 27 (Npax=150, M=0.78, R=2750NM): Step climb / Cruise climb / Diff.
% MTOW OWE Wing area Fuel mission [kg] [kg] [kg] [m 2 ] 74 618.96 42 200.58 122.74 18 799.11 74 562.82 42 190.71 122.68 18 798.85 -0.075 -0.023 -0.481 -0.001 Table 8 8 the estimated impacts of a 2035 technology on the weight of different components are reported: values come from an internal project at ONERA, in the frame of the EU program Clean Sky 2. 10 Table 6 . 6 Design parameters for the hybrid aircraft with DEP considered Range 800-1600 NM Cruise Mach number 0.70 Number of passengers 150 Approach speed 132 kn Wing span Number of engines ≤36 m 40 Number of generators 2 Number of batteries 4 Minimum power at takeoff 28 MW Table 7 . 7 Design parameters for the electric components for 2035 horizon Battery Generator Electric motor IDC Table 8 . 8 Estimated impact of new materials on weight for 2035 horizon Wing Fuselage Landing gear Cabin seats -10 % -5 % -5 % -30 % Table 10 . 10 Comparison between the Hybrid-Electric concept and the conventional aircraft, EIS2035, for the desired range of 1200NM Hybrid Traditional Diff. % Table 11 . 11 Fuel breakdown comparison between the baseline and the scenarios of failure identified (one generator or two batteries inoperatives) No failure Generator out Batteries out Taxi out [kg] 0 0 0 Takeoff [kg] 0 0 36.67 Initial climb [kg] 0 0 118.52 Climb [kg] 0 0 362.41 Cruise [kg] 4543.81 4984.86 4857.90 Descent [kg] 206.46 227.79 228.58 Alternate climb [kg] 562.62 - 604.35 Alternate cruise [kg] 325.39 - 365.10 Alternate descent [kg] 156.98 - 172.68 Holding [kg] 994.74 1102.73 1078.89 Block fuel [kg] 4750.26 5212.65 5604.13 Reserve fuel [kg] 2182.22 1259.11 2389.14 Mission fuel [kg] 6932.49 6471.76 7993.27 Table 12 . 12 Technology table for evaluating the sensitivity to technology Minimum Maximum Baseline Table 14 . 14 Sensitivity indices for MTOW, FC and Sw for the analysis considered (in bold the relevant values) MTOW FC S w Battery specific energy density 0.8642 0.1799 0.1561 Battery efficiency 0.0002 0.0000 0.0000 Generator power density 0.0126 0.0561 0.0010 Generator efficiency 0.0004 0.0001 0.0000 EM power density 0.0001 0.0086 0.0000 EM efficiency 0.1070 0.1848 0.0003 C l,max 0.0119 0.0089 0.8421 PSFC reduction 0.0003 0.6389 0.0000 Index sum 0.9968 1.0773 0.9995 Table 15 . 15 Sensitivity indices for MTOW, FC and Sw, considering the range of variation defined in Table12(in bold the relevant values) MTOW FC S w Battery specific energy density 0.9358 0.7388 0.3792 Battery efficiency 0.0000 0.0001 0.0000 Generator power density 0.0569 0.0561 0.0223 Generator efficiency 0.0002 0.0001 0.0002 EM power density 0.0017 0.0086 0.0004 EM efficiency 0.0032 0.1848 0.0005 C l,max 0.0015 0.0089 0.5953 PSFC reduction 0.0000 0.0026 0.0000 Index sum 0.9992 0.9975 0.9974 Table 16 . 16 Standard for mass breakdown used in FAST A Airframe A1 Wing A2 Fuselage A3 Horizontal and Vertical tail A4 Flight controls A5 Landing gear A6 Pylons A7 Paint B Propulsion B1 Engines B2 Fuel and oil systems B3 Unusable oil and fuel B4 Cables and cooling system B5 Batteries B6 Generators B7 IDC B8 Bus protection C Systems and fixed installations C1 Power systems (APU, electrical and hydraulical system) C2 Life support systems (Pressurization, de-icing, seats, ...) C3 Instrument and navigation C4 Transmissions C5 Fixed operational systems (radar, cargo hold mechanization) C6 Flight kit D Operational items E Crew F Fuel G Payload Table 17 . 
Tip velocity for different values of FPR

Acknowledgments
The authors would like to thank:
• AIRBUS for the financial support in the frame of Chair CEDAR (Chair for Eco Design of AircRaft).
• The European Commission for the financial support within the frame of the Joint Technology Initiative JTI Clean Sky 2, Large Passenger Aircraft Innovative Aircraft Demonstration Platform "LPA IADP" (contract N CSJU-CS2-GAM-LPA-2014-2015-01).
• Michael Ridel and David Donjat for their contribution on cables and cooling system models as well as Sylvain Dubreuil for his work on the sensitivity analysis.
66,514
[ "19121" ]
[ "531214", "531214", "531214", "531194", "467492", "110103" ]
01754876
en
[ "phys" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01754876/file/mortagne_19813.pdf
Caroline Mortagne Kevin Lippera Philippe Tordjeman Michael Benzaquen Thierry Ondarçuhu Dynamics of anchored oscillating nanomenisci I. INTRODUCTION The study of liquid dynamics in the close vicinity of the contact line is fundamental to understanding the physics of wetting [START_REF] De Gennes | Wetting: Statics and dynamics[END_REF][START_REF] Bonn | Wetting and spreading[END_REF]. The strong confinement inherent to this region leads, in the case of a moving contact line, to a divergence of the energy dissipation. This singularity can be released by the introduction of microscopic models based on long-range interactions, wall slippage, or diffuse interface [START_REF] Snoeijer | Moving contact lines: Scales, regimes, and dynamical transitions[END_REF], which are still difficult to determine experimentally. In most cases, the spreading is also controlled by the pinning of the contact line on surface defects [START_REF] Joanny | A model for contact angle hysteresis[END_REF][START_REF] Perrin | Defects at the Nanoscale Impact Contact Line Motion at all Scales[END_REF]. For nanometric defects, the intensity and localization of the viscous energy dissipation is crucial to understanding the wetting dynamics. The aim of this paper is to study the hydrodynamics of a nanomeniscus anchored on nanometric topographic defects and subjected to an external periodic forcing. This configuration allows one to investigate the viscous dissipation in a meniscus down to the very close vicinity of the fixed contact line and to assess the dynamics of the pinning of nanometric defects. In addition to being an important step towards the elucidation of the wetting dynamics on rough surfaces, this issue is relevant for vibrated droplets or bubbles [START_REF] Noblin | Vibrated sessile drops: Transition between pinned and mobile contact line oscillations[END_REF] and for the reflection of capillary waves on a solid wall [START_REF] Michel | Acoustic Measurement of Surface Wave Damping by a Meniscus[END_REF]. Atomic force microscopy (AFM) has proven to be a unique tool to carry out measurements on liquids down to the nanometer scale: liquid structuration [START_REF] Fukuma | Water distribution at solid/liquid interfaces visualized by frequency modulation atomic force microscopy[END_REF] or slippage [START_REF] Maali | Measurement of the slip length of water flow on graphite surface[END_REF] at solid interfaces was evidenced, while the use of specific tips fitted with either micro-or nanocylinders allowed quantitative measurements in viscous boundary layers [START_REF] Dupré De Baubigny | AFM study of hydrodynamics in boundary layers around micro-and nanofibers[END_REF] and at the contact line [START_REF] Guo | Direct Measurement of Friction of a Fluctuating Contact Line[END_REF]. In this study, we have developed an AFM experiment based on the frequency modulation mode (FM-AFM) to monitor, simultaneously, the mean force and the energy dissipation experienced by an anchored nanomeniscus. Artificial defects with adjustable size are deposited on cylindrical fibers (radius below 100 nm) to control the pinning of the contact line and the meniscus stretching during the oscillation. The experiments are analyzed in the frame of a nanohydrodynamics model based on the lubrification approximation. Interestingly, the meniscus oscillation does not lead to any stress divergence at the contact line allowing a full resolution without the use of cutoff lengths in contrast with the case of a moving contact line. 
This study thus provides a comprehensive description of dissipation mechanisms in highly confined menisci and an estimate of the critical depinning contact angle for nanometric defects. II. EXPERIMENTAL METHODS The fibers used in the experimental study were carved with a dual beam FIB (1540 XB Cross Beam, Zeiss) from conventional silicon AFM probes (OLTESPA, Bruker). Using a beam of Ga ions, a 2 to 3 µm long cylinder of radius R ∼ 80 nm is milled at the end of a classical AFM tip. An ELPHY MultiBeam (Raith) device allows to manufacture nanometric spots of platinum by electron beam induced deposition (EBID) in order to create ring defect of controlled thickness around the cylinders (see Supplemental Material [12]). An example of a homemade cylinder with three annular rings is displayed in Fig. 1(d). The liquids used are ethylene glycol (1EG), diethylene glycol (2EG), triethylene glycol (3EG), and an ionic liquid, namely, 1-ethyl-3-methylimidazolium tetrafluoroborate. The liquids have a low volatility at room temperature. Their dynamic viscosities are η = 19.5, 34.5, 46.5, and 44 mPa • s, and their surface tensions are γ = 49.5, 49.5, 48, and 56 mN • m -1 at 20 • , respectively. As surface conditions play a crucial role in wetting, measurements are made before and after a 5 min UV/O 3 treatment aimed at removing contaminants and making the surface more hydrophilic [START_REF] Vig | UV/ozone cleaning of surfaces[END_REF]. Using a PicoForce AFM (Bruker), the tips are dipped in and withdrawn from a millimetric liquid drop deposited on a silicon substrate. Prior to any experiment series, the cantilever quality factor Q and deflection sensitivity are measured, and its spring constant k is determined using standard calibration technique [START_REF] Butt | Calculation of thermal noise in atomic force microscopy[END_REF]. The experiments are performed in frequency modulation (FM-AFM) mode using a phase-lock loop device (HF2LI, Zurich Instrument) which oscillates the cantilever at its resonance frequency f . A proportional-integral-derivative controller is used to adjust the excitation signal A ex in order to maintain the tip oscillation amplitude A constant. The excitation signal A ex is therefore a direct indication of the system dissipation. In particular, it is linearly related to the friction coefficient of the interaction through β = β 0 (A ex /A ex,0 -1), where A ex,0 and β 0 = k/(ω 0 Q) are, respectively, the excitation signal and the friction coefficient of the free system in air, measured far from the liquid interface [START_REF] Giessibl | Advances in atomic force microscopy[END_REF]. We used cantilevers with quality factor Q ∼ 200 high enough to ensure that the resonant frequency is related to the natural angular frequency through ω 0 = 2πf . We showed recently that this procedure, and the appropriate calibration used, gives quantitative measurements of dissipation in the viscous layer around the tip [START_REF] Dupré De Baubigny | AFM study of hydrodynamics in boundary layers around micro-and nanofibers[END_REF]. In the present case, it allows us to monitor, during the whole process, both the capillary force F and the friction coefficient β, which are related to the shape of the meniscus and to the viscous dissipation, respectively. Note that both values are obtained with a 20% accuracy mainly coming from the uncertainty in the determination of k. III. RESULTS Figure 1 shows the results of a typical experiment performed on a 3EG drop. The measured force F [Fig. 
1(a)] and friction coefficient β [Fig. 1(b)] are plotted as a function of the immersion depth d for a ramp of 2.5 µm. The cylinder is dipped in (light blue curves) and withdrawn (dark blue curves) from the liquid bath at 2.5 µm • s -1 . The tip oscillates at its resonance frequency (66 820 Hz in air) with an amplitude of 6 nm. The cantilever stiffness is k = 1.5 N • m -1 , soft enough to perform deflection measurements while being adapted for the dynamic mode. The force curve can be interpreted using the expression of the capillary force [START_REF] Delmas | Contact Angle Hysteresis at the Nanometer Scale[END_REF]: F = 2πRγ cos θ , where R is the fiber radius and θ is the mean contact angle during the oscillation. After the meniscus formation at d = 0, and until the contact line anchors on the first ring [at reference (i)] F and θ remain constant, consistent with Refs. [START_REF] Delmas | Contact Angle Hysteresis at the Nanometer Scale[END_REF][START_REF] Barber | Static and Dynamic Wetting Measurements of Single Carbon Nanotubes[END_REF][START_REF] Yazdanpanah | Micro-Wilhelmy and related liquid property measurements using constant-diameter nanoneedle-tipped atomic force microscope probes[END_REF]. A small jump of the force is observed when the contact line reaches a platinum ring on reference points (i), (ii), or (iii). Once the meniscus is pinned, the contact angle increases as the cylinder goes deeper into the liquid, leading to a decrease of the force F . Conversely, the withdrawal leads to a decrease of θ and an increase of the force F on the left of (i), (ii), and (iii). Hence, each ring induces two hysteresis cycles characteristic of strong topographic defects [START_REF] Joanny | A model for contact angle hysteresis[END_REF]. Different contributions to the probe-liquid system account for the friction coefficient behavior. The global increase of β with d observed on Fig. 1(b) results from the contribution of the viscous layer around the tip which is proportional to the immersion depth [START_REF] Dupré De Baubigny | AFM study of hydrodynamics in boundary layers around micro-and nanofibers[END_REF]. At withdrawal, β increases dramatically when the probe reaches the reference points (iv), (v), and (vi) of Fig. 1(b). In those regions, the force curve indicates that the meniscus is pinned on a defect. The dissipation growth is therefore attributed to the decrease of the contact angle before depinning as schematized on the zoom on the friction coefficient curve [Fig. 1(c)]. This large effect can be qualitatively understood considering that small contact angles-corresponding to reduced film thickness-generate strong velocity gradients in the meniscus and thus a large dissipation. Note that a similar behavior is observed on a moving contact line for which the friction coefficient also displays a strong dependance upon the contact angle β ∼ 1/ θ [START_REF] De Gennes | Wetting: Statics and dynamics[END_REF]. IV. THEORETICAL MODEL In order to account for the experimental results, we developed a theoretical model for the oscillation of a liquid meniscus in cylindrical geometry (see the Supplemental Material [12]). We consider the problem in the frame of reference attached to the cylinder (see Fig. 2). The flow induced by the interface motion leads to a friction coefficient β men . The latter is related to the mean energy loss P during an oscillation cycle, through P = β men (Aω) 2 /2 [START_REF] Pérez | Mécanique: fondements et applications: avec 300 exercices et problèmes résolus[END_REF]. 
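The calibration relations quoted so far connect the measured signals to a friction coefficient, a dissipated power and a mean contact angle; a short numerical illustration is sketched below. All numerical values are merely representative of the experiments described in the text, and the "acquired" arrays are hypothetical.

# Conversion of FM-AFM signals into friction coefficient, dissipated power and mean
# contact angle, using beta = beta0*(Aex/Aex0 - 1), beta0 = k/(omega0*Q),
# P = beta*(A*omega0)^2/2 and F = 2*pi*R*gamma*cos(theta). Values are representative only.
import numpy as np

k = 1.5                      # cantilever stiffness [N/m]
Q = 200.0                    # quality factor
f0 = 66_820.0                # resonance frequency in air [Hz]
omega0 = 2.0 * np.pi * f0
beta0 = k / (omega0 * Q)     # free friction coefficient [N s/m]

R = 85e-9                    # fiber radius [m]
gamma = 48e-3                # surface tension of 3EG [N/m]
A = 6e-9                     # oscillation amplitude [m]

# Hypothetical acquired signals along the ramp (excitation ratio and capillary force):
a_ex_ratio = np.array([1.0, 1.8, 2.5, 4.0])          # A_ex / A_ex,0
force = np.array([0.0, 12e-9, 18e-9, 22e-9])         # capillary force [N]

beta = beta0 * (a_ex_ratio - 1.0)                    # friction coefficient of the interaction
power = beta * (A * omega0) ** 2 / 2.0               # mean dissipated power [W]
cos_theta = np.clip(force / (2.0 * np.pi * R * gamma), -1.0, 1.0)
theta_deg = np.degrees(np.arccos(cos_theta))         # mean contact angle

for b, p, t in zip(beta, power, theta_deg):
    print(f"beta = {b:.2e} N s/m, P = {p:.2e} W, mean contact angle = {t:5.1f} deg")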
Since the capillary number is small (see Ref. [START_REF]The global capillary number can be evaluated to Ca = Aωη/γ ∼ 10 -3 . However[END_REF]) we may safely state that viscous effects do not affect the shape of the liquid interface. Therefore, the meniscus profile is solution of the Laplace equation resulting from the balance between capillary and hydrostatic pressures, which in turn yields the well-known catenary shape [START_REF] Derjaguin | Theory of the distortion of a plane surface of a liquid by small objects and its application to the measurement of the contact angle of the wetting of thin filaments and fibres[END_REF][START_REF] James | The meniscus on the outside of a small circular cylinder[END_REF][START_REF] Dupré De Baubigny | Shape and effective spring constant of liquid interfaces probed at the nanometer scale: finite size effects[END_REF]: h = (R + r 0 ) cos θ cosh z (R + r 0 ) cos θ -ln(ζ ) (1) were ζ = cos θ/(1 + sin θ ). The meniscus height Z 0 is given, in the limit of small contact angles, by with γ E ≃ 0.577 the Euler constant and l c the capillary length. Since Z 0 (t) oscillates around its mean position as Z 0 [θ (t)] = Z 0 ( θ ) + A cos(ωt), we can derive the temporal evolution of the contact angle: Z 0 = (R + r 0 ) cos θ ln 4 l c R + r 0 -γ E (2) cos θ (t) = cos θ + A cos(ωt) (R + r 0 ) ln 4l c R+r 0 -γ E . (3) Note that our model is meant to deal with positive contact angles only, even if the defect thickness could in principle allow slightly negative ones. This defines a critical contact angle θ crit related to the minimum value of θ allowed by the model. One has cos θ crit = 1 -A/((R + r 0 ){ln[4 l c /(R + r 0 )] -γ E }). This critical depinning angle on an ideally strong defect increases with respect to A and decreases with respect to R + r 0 . The interface motion being known, the velocity field is derived using the Stokes equation. Indeed, gravity and inertia can be safely neglected (Re ∼ 10 -8 and l c ≃ 2 mm). Moreover, the viscous diffusion time scale τ ν = R 2 /ν is much smaller than the oscillation period (τ ν f ∼ 10 -7 ), such that the Stokes equation reduces to the simplest steady Stokes equation. Using the lubrication approximation, we have finally ∂ z P = η r v where P is the hydrodynamic pressure and v is the velocity component in the z direction. Finally, combining the mass conservation equation, ∂ t (π h 2 ) + ∂ z q = 0, where q is the local flow rate through a liquid section of normal z, the no-slip (at r = R), and free interface (at r = h) boundary conditions yields the velocity profile: v(r,z,t) = 2[R 2 + 2 h 2 ln(r/R) -r 2 ] z 0 du ∂ t (h 2 ) R 4 + 3 h 4 -4 h 2 R 2 -4 h 4 ln(h/R) . (4) From Eq. ( 4) we derive the expression of β men : β men ( θ ) = 4πη A 2 ω 2 Z 0 0 h R (∂ r v) 2 r dr dz t , (5) where t , designates the temporal average over an oscillation cycle (see the Supplemental Material [12]). Figure 2 displays an example of viscous stress field (color gradient) and velocity profile (vertical dark arrows) inside a nanomeniscus. The latter are computed from Eqs. (1), (3), and (4) for a fiber of radius R = 85 nm and a defect with r 0 = 10 nm, for typical operating conditions (f = 65 kHz and A = 10 nm). We observe that the stress is essentially localized at the fiber wall and is at maximum at a distance of the order of R beneath the contact line. Interestingly, the degrees FIG. 3. Normalized friction coefficient β men /η plotted as a function of θ [see Eq. ( 5)]. 
The dashed line signifies the theoretical model, and the experimental dotted curves are performed over all the studied liquids, before and after UV/O 3 treatment, with R = 85 nm, A = 18 nm, and r 0 = 40 nm. The values of the free parameters used are θ break = 18.5 • , 12.6 • , 15.1 • , 9.5 • , 10.9 • and 14.9 • and β bottom /ηR = 8.7, 4.5, 10.2, 8.9, 13.2, 9.1 for 1EG, 2EG and 3EG, before and after UV/O 3 treatment, respectively. meniscus oscillation does not lead to any stress singularity at the contact line. It does not require the introduction of a slippage in the vicinity of the (moving) contact line as in the case of wetting dynamics [START_REF] Kirkinis | Hydrodynamic Theory of Liquid Slippage on a Solid Substrate Near a Moving Contact Line[END_REF][START_REF] Thompson | Simulations of Contact-Line Motion: Slip and the Dynamic Contact Angle[END_REF]. We therefore used the standard no-slip boundary condition, validated by the molecular scale values of slip lengths measured on hydrophilic surfaces [START_REF] Bocquet | Nanofluidics, from bulk to interfaces[END_REF]. The viscous stress maps also allow one to check a posteriori the interface profile hypothesis. The local capillary number is Ca local = η∂ z v/ P where P ≃ γ /Rγ /(R + A) ≃ Aγ /R 2 . Taking a maximum value of η∂ z v = 3000 Pa obtained for 10 nm defects [see Fig. 2(b)] we find that Ca 5 × 10 -2 , thus validating the hypothesis. The fact that the viscous stress strongly decays when z becomes of the order of a few probe radii also strengthens the lubrication approximation, only valid for small surface gradients (∂ z h ≪ 1). When the mean contact angle θ is decreased, a strong increase of the viscous stress is observed but its localization remains mostly unchanged (see the Supplemental Material [12]). Another striking result is the influence of the defect height r 0 : for contact angles close to the critical one, a reduction in size of the defect increases significantly the viscous stress but also affects its localization, which becomes concentrated closer from the contact line as r 0 is decreased (Fig. 2). This effect is not straightforward and may have important consequences on the wetting on surfaces with defects. Finally, the integration of the stress according to Eq. ( 5) leads to the normalized friction coefficient β men /η as a function of θ , an example of which is plotted in Fig. 3 (dashed line). A significant increase of β men is observed for decreasing contact angles in agreement with the experimental observations. V. DISCUSSION To quantitatively confront the FM-AFM experiments to the theoretical model, we use the force signal to determine the experimental contact angles θ. We assume that, due to the inhomogeneous thickness of the platinum rings, the meniscus depins from the defect for a contact angle θ break larger than θ crit value expected for an ideal defect. The maximum force before depinning then reads F max = 2πγ (R + r 0 ) cos θ break , which allows one to calculate the experimental contact angle for any d values using cos θ = (F /F max ) cos θ break . The latter equation enables us to determine the contact angle for each d position without using the cantilever stiffness k only known within 20% error. For each experiment, we make a linear fit of the friction coefficient curve only taking into account the regions which are not influenced by the defects such as, for example, the portion between points (iv) and (v) in Fig. 1(b). 
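The data-reduction chain just described, together with the fit detailed in the next paragraph, can be sketched as follows. The model curve β_men(θ̄) is left as a crude placeholder here: in the study it is the numerical evaluation of Eq. (5), whereas the surrogate below only reproduces the qualitative divergence at small angles so that the example runs.

# Sketch of the data reduction: contact angle from cos(theta) = (F/F_max) cos(theta_break),
# removal of the linear viscous-layer contribution, and least-squares fit of
# (theta_break, beta_bottom). beta_men_model is NOT the actual integral of Eq. (5).
import numpy as np
from scipy.optimize import curve_fit

ETA = 46.5e-3  # viscosity of 3EG [Pa s]

def beta_men_model(theta_rad):
    """Placeholder surrogate for Eq. (5): dissipation grows sharply at small angle."""
    return 2e-7 * ETA / np.maximum(np.sin(theta_rad), 1e-2)

def fit_pinned_region(d, force, beta_total, baseline_mask):
    """d, force, beta_total: signals over the pinned region; baseline_mask flags
    the portions of the curve not influenced by the defect."""
    # 1) remove the viscous-layer contribution (linear in immersion depth d)
    slope, intercept = np.polyfit(d[baseline_mask], beta_total[baseline_mask], 1)
    beta_men_exp = beta_total - (slope * d + intercept)

    # 2) fit theta_break and beta_bottom against the model curve
    force_norm = force / force.max()          # F / F_max before depinning
    def model(fn, theta_break_deg, beta_bottom):
        cos_t = np.clip(fn * np.cos(np.radians(theta_break_deg)), 0.0, 1.0)
        return beta_men_model(np.arccos(cos_t)) + beta_bottom

    popt, _ = curve_fit(model, force_norm, beta_men_exp, p0=(15.0, 1e-8))
    return {"theta_break_deg": popt[0], "beta_bottom_Nsm": popt[1]}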
The subtraction of this fit allows one to dispose of the viscous layer , and 12.5 • and β bottom /ηR = 8.6, 8.2, and 7 for r 0 = 30, 40, and 50 nm, respectively. (b) Influence of oscillation amplitude A for 1EG on a defect with r 0 = 40 nm: A = 6 nm, 17.7 nm, and 29.5 nm are plotted in blue, green, and yellow, respectively. The values of the free parameters used are θ break = 7.9 • , 9.5 • , and 11.5 • and β bottom /ηR = 0.13, 0.89, and 2.7 for A = 6 nm, 17.7 nm, and 29.5 nm, respectively. Inset: plot of θ break (symbols) and θ crit (solid line) as a function of the oscillation amplitude for a defect of thickness r 0 = 40 nm. Symbol size corresponds to the error bar in θ break measurements. contribution on the side of the fiber, leaving only β men and a constant term induced by the dissipation associated with the bottom of the tip, called β bottom . The data are then fitted by computing the two free parameters β bottom and θ break which minimize the standard deviation between the experimental data and the theoretical curve [Eq. [START_REF] Perrin | Defects at the Nanoscale Impact Contact Line Motion at all Scales[END_REF]]. The routine is performed with MATLAB, using the Curve Fitting toolbox. It (independently) determines the values of the adjusting parameters β bottom and θ break using the nonlinear least squares method. As for R and r 0 , we use effective values measured by SEM. FM experiments were then performed over all the studied liquids. More than 90 experiments were carried out with two different home-made probes (R = 80 nm and 85 nm), defect thicknesses r 0 between 10 and 50 nm and oscillation amplitudes A ranging from 5 to 35 nm. Additionally, experiments were performed before and after surface cleaning by UV/O 3 treatment to assess the influence of tip wettability. As an example, Fig. 3 displays six curves performed with three different liquids, before and after UV/O 3 treatment, on the same defect (R = 85 nm and r 0 = 40 nm) with an amplitude A = 18 nm. The agreement between the experimental data and the theoretical model is remarkable. A 10-fold enhancement of dissipation is observed when the contact angle is decreased from 50 • to 10 • . As expected, the 5 min surface cleaning does not affect the dissipation process since all curves superpose on a same master curve. Yet ozone cleaning has a strong impact on the θ break values. The hydrophilic surfaces obtained after UV/O 3 treatment lead to a strong pinning which allows to reach smaller contact angle values. For example, for 1EG θ break decreases from 18.5 • to 9.5 • , the latter value being very close to the value of θ crit = 9.4 • . Consequently, the dissipation can reach larger values after ozone treatment. This is a common observation on all the measurements. When the tip is more hydrophobic, the liquid may detach between the dots forming the defect before the θ crit value is reached. Note that, while the model is developed for small contact angles, confrontation with experiments demonstrates that it remains valid until θ break ∼ 50 • , values giving a weak dissipation. This is consistent with previous observations that the lubrication approximation yields good predictions for moderately large contact angles [START_REF] Bonn | Wetting and spreading[END_REF]. In order to discuss further the influence of the various parameters and the resulting values of the fitting variables θ break and β bottom , we report in Fi. 
4(a) a comparison between the theoretical model and FM experiments performed on 3EG for (a) different defect thicknesses and (b) various degrees FIG. 5. Superposition of 30 experimental curves. In order to visualize different curves, the color is related to the θ break value. The range of theoretical values is limited by two solid lines (r 0 = 5 nm, A = 33 nm for the higher one and r 0 = 50 nm, A = 6 nm for the lower one); Inset: Histogram of the β bottom values extracted from the experimental data. oscillation amplitudes. Figure 4(a) shows that the ring thickness r 0 has a low impact on the friction coefficient curve for 30 nm r 0 50 nm. Nevertheless, a systematic evolution of θ break is observed: larger defect thicknesses lead to a stronger pinning of the defect, which results in a smaller θ break value, as marked by the arrows on the curves. We also found that the oscillation amplitude plays a significant role only for contact angles close to θ crit . Therefore its influence can be noticed only after the UV/O 3 treatment. The theoretical model reproduces well the influence of amplitude observed for contact angles smaller than 15 • [see Fig. 4(b)]. A larger amplitude increases the value β men at low θ and also leads to an increase of the θ break value, a general trend observed on all experiments. These results show that the experimental conditions, namely, the defect size r 0 , the oscillation amplitude A, and the surface wettability, have a small influence on the shape of the friction coefficient as a function of the contact angle. We therefore report in Fig. 5 30 curves obtained using different tips, defects, liquids, and amplitudes. All curves superimpose in a rather thin zone which is nicely bounded by the theoretical curves giving the extreme cases within the range of experimental conditions (10 nm r 0 50 nm and 6 nm A 33 nm). The highest dissipation is obtained for small defect and high amplitude (r 0 = 5 nm and A = 33 nm). From all the measurements (more than 90), we extracted values of the two adjustable parameters, namely, θ break and β bottom . The value of θ break gives an indication of the pinning behavior. Strong pinning, which corresponds to low θ break values, is reached for large defects on hydrophilic tips under weak forcing. This trend, consistent with macroscopic expectation, therefore remains valid down to nanometer-scale defects. In the optimal case, the θ crit value expected for an ideal defect could be approached [see Fig. 4(c)]. Dynamic effects are also probably involved in the depinning transition since three liquids with similar surface tension and contact angle but varying viscosities show different pinning behaviors. This result, which has important consequences for the description of wetting dynamics on real surfaces, requires further investigations. Unlike θ break , β bottom does not show any systematic influence of amplitude, defect size, and wettability as expected from the model. Statistics over all experiments (see inset of Fig. 5) show that β bottom is proportional to the liquid viscosity and is centered around a mean value β bottom /(ηR) = 7. This is consistent with expected values for a either a flat end or an hemispherical end leading to β bottom = 8η R [START_REF] Zhang | Oscillatory motions of circular disks and nearly spherical particles in viscous flows[END_REF] or β bottom = 3πη R. 
This large dispersion comes from the fact that the tip end is ill-defined and moreover may evolve with time since measurements on hard surfaces are required, after each series of measurements, for calibration purposes. This hinders a more quantitative comparison with the theory. VI. CONCLUSION In conclusion, this work provides a comprehensive investigation of the viscous dissipation in anchored oscillating menisci. We find an excellent agreement between the experimental results and our lubrication-based theoretical model describing the flow pattern inside the oscillating meniscus. The confinement induced by the stretching of the meniscus leads to a strong increase of viscous stress which accounts for the surge of dissipated energy observed at a small angle. Note that this effect is amplified for small defect sizes, in which case the stress is strongly localized at the contact line with important consequences for the wetting dynamics on surfaces with defects. The fabrication of artificial nanometric defects also gives new insights on the depinning of the contact line which appears for a contact angle value θ break larger than the theoretical one θ crit obtained for a perfect pinning. The latter value could be approached using hydrophilic tips showing that the pinning is all the stronger that the oscillation amplitude A is small and the defect size r 0 is large. This study demonstrates that FM-AFM combined with the nanofabrication of dedicated probes with controlled defects is a unique tool for quantitative measurements of dissipation in confined liquids, down to the nanometer scale, and paves the way for a systematic study of open questions in wetting science regarding the extra dissipation which occurs when the contact line starts to move. In particular, our approach brings new insights for the role of surface defects, their pinning behavior, and the associated induced dissipation, down to the nanometer scale. FIG. 1 . 1 FIG. 1. FM-AFM spectroscopy curves performed on a 3EG liquid drop. (a) Force F and (b) friction coefficient β as a function of the immersion depth d. (c) SEM image of the 3.2 µm long and 170 nm diameter probe, covered by three platinum rings of thicknesses r 0 = 10, 15, and 40 nm, from bottom to top. (d) Zoom on the friction coefficient curve on the second defect with sketches of the meniscus. FIG. 2 . 2 FIG. 2. (a) Oscillating meniscus anchored on a defect, displayed in the frame of reference of the fiber. The velocity profile (black arrows) is calculated from Eq. (4). The stress field η∂ r v (color gradient) is computed for R = 100 nm, r 0 = 40 nm, l c = 2 mm, A = 10 nm, f = 65 kHz, θ = θ crit = 6.73 • , and η = 30 mPa • s. Color bar in Pa. (b) Same with r 0 = 10 nm and θ = θ crit = 7.5 • . FIG. 4 . 4 FIG. 4. Normalized friction coefficient β men /η vs mean contact angle θ for different operating conditions. The dashed lines are plots of the theoretical model [Eq. (5)]. (a) Influence of ring thickness r 0 on 2EG for A = 6 nm. The arrows indicate the value of θ break . The values of the free parameters used are θ break = 19.6 • , 15• , and 12.5 • and β bottom /ηR = 8.6, 8.2, and 7 for r 0 = 30, 40, and 50 nm, respectively. (b) Influence of oscillation amplitude A for 1EG on a defect with r 0 = 40 nm: A = 6 nm, 17.7 nm, and 29.5 nm are plotted in blue, green, and yellow, respectively. The values of the free parameters used are θ break = 7.9 • , 9.5 • , and 11.5 • and β bottom /ηR = 0.13, 0.89, and 2.7 for A = 6 nm, 17.7 nm, and 29.5 nm, respectively. 
Inset: plot of θ break (symbols) and θ crit (solid line) as a function of the oscillation amplitude for a defect of thickness r 0 = 40 nm. Symbol size corresponds to the error bar in θ break measurements.
ACKNOWLEDGMENTS
The authors thank P. Salles for his help in the development of tip fabrication procedures, Dominique Anne-Archard for viscosity measurements, and J.-P. Aimé, D. Legendre, and E. Raphaël for fruitful discussions. This study has been partially supported through the ANR by the NANOFLUIDYN project (Grant No. ANR-13-BS10-0009).
28,535
[ "184703", "1128860" ]
[ "460", "460", "690", "1164", "460" ]
01754878
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01754878/file/PapierNESactifR2.pdf
P Y Bryk S Bellizzi R Côte email: cote@lma.cnrs-mrs.fr Experimental study of a hybrid electro-acoustic nonlinear membrane absorber Keywords: Noise Reduction, Hybrid Absorber, Nonlinear Energy Sink, Electroacoustic Absorber, High Sound Level, Low Frequency A hybrid electro-acoustic nonlinear membrane absorber working as a nonlinear energy sink (here after named EA-NES) is described. The device is composed of a thin circular visco-elastic membrane working as an essentially cubic oscillator. One face of the membrane is coupled to the acoustic field to be reduced and the other face is enclosed. The enclosure includes a loudspeaker for the control of the acoustic pressure felt by the rear face of the membrane through proportional feedback control. An experimental set-up has been developed where the EA-NES is weakly coupled to a linear acoustic system. The linear acoustic system is an open-ended tube, coupled on one side to the EA-NES by a box, and on the other side to a source loudspeaker by another box. Only sinusoidal forcing is considered. It is shown that the EA-NES is able to perform resonance capture with the acoustic field, resulting in noise reduction by targeted energy transfer, and to operate in a large frequency band, tuning itself passively to any linear system. We demonstrate the ability of the feedback gain defining the active loop to modify the resonance frequency of the EA-NES, which is a key factor to tune the triggering threshold of energy pumping. The novelty of this work is to use active control combined to passive nonlinear transfer energy to improve it. In this paper, only experimental results are analyzed. Introduction The reduction of noise and vibration at low frequencies is still nowadays a main issue in many fields of engineering. In order to overpass this issue, a new concept of absorbers including nonlinear behavior has been developed in the past decade. This type of absorbers is based on the principle of the "Targeted Energy Transfer" (TET) also named "energy pumping" [START_REF] Vakakis | Nonlinear targeted energy transfer in mechanical and structural systems[END_REF]. TET is an irreversible transfer of the vibrational energy from an input subsystem to a nonlinear attachment (the absorber) called Nonlinear Energy Sink (NES). TET permits to reduce undesirable large vibration amplitudes of structures or acoustic modes. Nonlinear energy transfer results from nonlinear mode bifurcations, or through spatial energy localization by formation of nonlinear normal modes. The phenomena can be described as a 1:1 resonance capture [START_REF] Vakakis | Energy pumping in nonlinear mechanical oscillators: Part II -Resonance capture[END_REF] and, considering harmonic forcing, as response regimes characterized in terms of periodic and Strongly Modulated Responses (SMR) [START_REF] Starosvetsky | Dynamics of a strongly nonlinear vibration absorber coupled to a harmonically excited two-degree-of-freedom system[END_REF]. The basic NES generally consists of a lightmass, an essentially nonlinear spring and a viscous linear damper. In the field of structural vibration, a wide variety of NES designs with different types of stiffness (cubic , non-polynomial, non-smooth nonlinearities...) 
has been proposed [START_REF] Gourdon | Nonlinear energy pumping under transient forcing with strongly nonlinear coupling: Theoretical and experimental results[END_REF][START_REF] Sigalov | Resonance captures and targeted energy transfers in an inertially-coupled rotational nonlinear energy sink[END_REF][START_REF] Gourc | Quenching chatter instability in turning process with a vibro-impact nonlinear energy sink[END_REF][START_REF] Mattei | Nonlinear targeted energy transfer of two coupled cantilever beams coupled to a bistable light attachment[END_REF]. In the acoustic field, to the best of our knowledge only one type of vibro-acoustic NES design has been tested, see the series of papers [START_REF] Cochelin | Experimental evidence of energy pumping in acoustics[END_REF][START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF][START_REF] Mariani | Toward an adjustable nonlinear low frequency acoustic absorber[END_REF][START_REF] Cote | Experimental evidence of simultaneous multi-resonance noise reduction using an absorber with essential nonlinearity under two excitation frequencies[END_REF][START_REF] Shao | Theoretical and numerical study of targeted energy transfer inside an acoustic cavity by a non-linear membrane absorber[END_REF]. It was demonstrated that a passive control of sound at low frequency can be achieved using a vibroacoustic coupling between the acoustic field (the primary system) and a geometrically nonlinear thin clamped structure (the NES). In [START_REF] Cochelin | Experimental evidence of energy pumping in acoustics[END_REF][START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF][START_REF] Shao | Theoretical and numerical study of targeted energy transfer inside an acoustic cavity by a non-linear membrane absorber[END_REF], the thin baffled structure consists of a simple thin circular latex (visco-elastic) membrane whereas in [START_REF] Mariani | Toward an adjustable nonlinear low frequency acoustic absorber[END_REF] a loudspeaker used as a suspended piston is considered. In both cases, the thin baffled structure has to be part of the frontier of the closed acoustic domain (to be controlled). Hence only one face (named the front face) is exposed to the acoustic field whereas the other face (the rear face) radiates outside. Hence this type of devices has to be modified to be used in cavity noise reduction. A simple way to do this is to enclose the rear face of the thin clamped structure limiting the sound radiation. This principle has been used to design electroacoustic absorbers based on the use of an enclosed loudspeaker including an electric load that shunts the loudspeaker electrical terminals [START_REF] Boulandet | Optimization of electroacoustic absorbers by means of designed experiments[END_REF]. An electroacoustic absorber can either be passive or active in terms of their noise suppression characteristics including as in [START_REF] Lissek | Electroacoustic absorbers : Bridging the gap between shunt loudspeaker and active sound absorption[END_REF][START_REF] Boulandet | Toward broad band electroacoustic resonators through optimized feedback control strategies[END_REF] pressure and/or velocity feedback techniques. 
Loudspeakers have also been used to design devices to control normal surface impedance [START_REF] Lacour | Preliminary experiments on noise reduction in cavities using active impedance changes[END_REF]. Two approaches have been developed. The first is referred to as direct control: the acoustic pressure is measured close to the diaphragm of the loudspeaker and used to produce the desired impedance. In the second approach, passive and active means are combined: the rear face of a porous layer is actively controlled so as to make the front face normal impedance take a prescribed value. In this paper, a hybrid passive/active nonlinear absorber is developed. The absorber is composed of a clamped thin circular visco-elastic membrane with its rear face enclosed. The acoustic field inside the hood (i.e the acoustic load of the rear face) is controlled using a loudspeaker with proportional feedback control. Three objective are assigned. Firstly, the device has to be designed such that it can be used inside a cavity. Secondly, noise reduction must mainly result from TET due to the nonlinear behavior of the membrane, thus defining a new concept of NES. Thirdly, the control loudspeaker has to be used as a linear electroacoustic absorber inside the hood. The control loudspeaker does not act directly on the acoustic field to be reduced. It only modifies the relative acoustic load exciting the membrane. This absorber is here after named hybrid electroacoustic NES (EA-NES). The paper is organized as follows. In Section 2, the principle of functioning of the EA-NES under study is described considering each sub-structure separately. In Section 3, we first describe the experimental set-up. It is composed of an acoustic field (in a pipe, excited by a loudspeaker) coupled to the EA-NES. Then we check the stability analysis of the feedback loop and perform a frequency analysis under broadband excitation. In Section 4, we analyze in detail the responses under sinusoidal excitations and we bring some confirmations on the efficiency of the EA-NES. The Hybrid Electro-Acoustic NES General presentation The EA-NES is shown in Fig. 1 The EA-NES is based on the conjugate functioning of three elements : • The clamped membrane that interacts with the acoustic field in order to provide noise attenuation in its non-linear range; • The hood by which the EA-NES can work inside a surrounding acoustic field unlike previous developed NES (see for example [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF]); • The feedback loop that reduces the pressure in the hood allows to use a small hood volume and also to tune the stiffness and damping linear behaviour of the EA-NES. 2.2. About each subsystem of the EA-NES The clamped membrane The clamped membrane with its supporting device is shown in Fig. 1(right). It was already used in [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF]. The device allows to change the diameter of the membrane (from 40 to 80 mm). It includes a sliding system used to apply a constant and permanent in-plan pre-stress to the membrane. Once the prestress is set, the membrane is clamped to the supporting device. Applying an in-plane pre-stress modifies the modal component of the associated underlying linear system. 
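The essentially cubic (hardening) behaviour of the clamped membrane invoked throughout the paper can be illustrated with a minimal single-degree-of-freedom model of its first transversal mode. The sketch below is not the identification used by the authors: the low-amplitude resonance frequency, damping ratio and cubic coefficient are purely hypothetical values, chosen only so that the linearised resonance sits below a typical target mode, and the first-order Duffing backbone formula is used to show how the resonance frequency rises with vibration amplitude.

```python
import numpy as np

# Minimal hardening-oscillator sketch of the membrane's first transversal mode:
#   x'' + 2*zeta*w0*x' + w0^2*x + beta*x^3 = f(t)
# All coefficients below are hypothetical (not identified from the experiment).
f0 = 65.0                      # low-amplitude resonance [Hz], set below the tube mode
w0 = 2 * np.pi * f0
zeta = 0.01                    # weak damping, as required for a NES
beta = 1.0e11                  # cubic (hardening) coefficient, illustrative value

def backbone_freq(a):
    """First-order Duffing backbone: w(a) ~ w0 + 3*beta*a^2/(8*w0)."""
    return (w0 + 3 * beta * a**2 / (8 * w0)) / (2 * np.pi)

for a_mm in (0.2, 0.5, 1.0):
    a = a_mm * 1e-3
    print(f"amplitude {a_mm:3.1f} mm -> resonance ~ {backbone_freq(a):5.1f} Hz")
```

With these placeholder numbers the resonance climbs by roughly fifteen hertz at a one-millimetre amplitude, which is the kind of amplitude-dependent frequency shift that lets the membrane tune itself to the target mode of the primary system.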
Coupled to an undesirable acoustic field, the clamped membrane device can be used as an acoustic NES absorber [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF] to reduce the acoustic pressure. Direct [START_REF] Shao | Theoretical and numerical study of targeted energy transfer inside an acoustic cavity by a non-linear membrane absorber[END_REF] or indirect [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF] couplings are possible. If the clamped membrane device is properly designed, TET is obtained thanks to resonance capture phenomena. Resonance capture occurs between two nonlinear modes resulting of a weak coupling between the nonlinear subsystem (the NES) and the linear subsystem (the acoustic field). At low excitation level, the two nonlinear modes coincide respectively with the linear mode of the NES and the target mode of the acoustic field. Hence, when the in-plane pre-stress of the clamped membrane sets the resonance frequency of the NES lower than the resonance frequency of the target mode of the acoustic field, TET is possible above a threshold excitation level. Furthermore, Bellet [START_REF] Bellet | Vers une nouvelle technique de controle passif du bruit : Absorbeur dynamique non lineaire et pompage energetique[END_REF] has also shown that the gap between the two resonance frequencies has an impact on the threshold of the TET: the closer are the two resonance frequencies, the smaller is the threshold of the TET. As a subsystem of the EA-NES, the clamped membrane provides the coupling between the EA-NES and the acoustic field to reduce. It is also responsible for the resonance capture phenomena (as a nonlinear component). The other subsystems (the hood and the feedback loop) must preserve the essential prop-erties of a NES, namely a weakly damped system with a hardening nonlinear mode, and with a frequency at low vibratory amplitude smaller than the resonance frequency of the target mode of the acoustic field. The hood The hood is a wooden cubic box with the clamped membrane fixed on one face and an enclosed control loudspeaker on the opposite face (see Fig. 1(left)). The hood has been added to meet two key objectives. Firstly, with hood, the EA-NES can be used to reduce the noise in a cavity where only the front face of the clamped membrane is load by the undesirable acoustic field. Secondly, the hood creates a difference of pressure between the two faces of the membrane, which permits to place it anywhere inside the acoustic field. When the acoustic wavelength is large in comparison with the largest dimension of the hood, the acoustic field can be considered as homogeneous in the hood and the acoustic pressure p e (t) can be related to the relative variation of the volume as p e (t) = -ρ a c 2 0 ∆V e (t) V e (1) where V e denotes the volume of the box at rest, ∆V e (t) the variation of the volume, ρ a the volumetric mass of the air and c 0 the celerity of sound in the air. Assuming the motion of the membrane primarily defined on the first transversal mode, Eq. (1) reduces to p e (t) = ρ a c 2 0 V e (S e m x m (t) -S e ls x ls (t)) (2) where x m denotes the transversal membrane motion with S e m the effective area of the membrane and x ls denotes the transversal motion of the diaphragm of the loudspeaker with S e ls the effective area. Eq. 
( 2) means that the air inside the hood is equivalent to an acoustic compliance that resists to the motion of the membrane and hardens it. This compliance being in return inversely related to the volume V e , it results that the smaller is the volume of hood, the higher is the resonance frequency of the underlying linear EA-NES. Finally Eq. ( 2) also shows that the acoustic pressure p e (t) can be reduced to zero if x ls and x m are in-phase. The feedback loop is composed of an enclosed loudspeaker inside the hood, a microphone placed at the geometrical center of the hood and a unit control (see The reference of the loop (u t (t) = 0) means a pressure reduction p e (t) until zero, which is only possible with an infinite value of K as illustrated by the following total loop transfer in Laplace variables P e (s) = (1 -H p (s)KH F (s)H s (s)) -1 P p (s) (3) between the acoustic pressure p p (t) and the acoustic pressure p e (t). However the choice of the gain K is limited by instability phenomena which exist usually for positive gain K at high frequencies, resulting from the acoustic modes of the cavity and for negative gain K at low frequency (< 100 Hz), resulting from the control loudspeaker and membrane dynamics. The influence of the acoustic modes of the cavity is reduced by placing the microphone at the geometrical center of the hood. This location is near the node of pressure of some of the first modes. Stability margin analysis results from the properties of the Open Loop Transfer Function (OLTF) between (t) and u M (t) defined by H F (s)H S (s)H p (s)K (see Fig. ( 2 ) ). The variation range of the gain values can be increased by selecting adequately the filter H F (s). It is that we do in the next section. Choice and sizing of components of the EA-NES Our objective is to design a EA-NES able to interact with primary acoustic fields in the frequency range [40, 100] Hz. The clamped latex membrane was designed following the recommendations discussed in [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF]. The membrane has a radius of 0.05 m and a thickness of 0.0002 m. These dimensions guarantee a proper functioning of the clamped latex membrane as a NES when the membrane is coupled to a resonant tube [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF][START_REF] Cote | Experimental evidence of simultaneous multi-resonance noise reduction using an absorber with essential nonlinearity under two excitation frequencies[END_REF] without hood (alone). The sliding system is adjusted such that the pre-stress of the membrane gives the coupled setup (EA-NES and primary system) a resonance associated to the EA-NES at a frequency around 70 Hz (see Fig. 5). The Experimental set-up and stability analysis Experimental set-up The experimental set-up shown in Fig. 3 consists in a vibroacoustic system (also named primary system) coupled to the EA-NES. The same set-up was used in [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF]. The primary system is made of an open interchangeable U-shaped tube which length L can be adjusted, coupled at each end to coupling boxes. The diameter of the tube is 0.095 m. 
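A minimal numerical reading of Eq. (3) is sketched below. The plant, microphone and filter transfers used here are purely hypothetical stand-ins (in the experiment H_p, H_s and H_F are measured, not modelled); the point is only to show that the perturbation pressure p_p is divided by |1 - H_p K H_F H_s| before reaching the rear face of the membrane, so that increasing K softens the acoustic load on the membrane while the admissible K remains bounded by the stability margins discussed below.

```python
import numpy as np

# Sketch of Eq. (3): P_e = (1 - Hp*K*HF*Hs)^(-1) * Pp.
# Hp below is a hypothetical first-order model used only for illustration.
f = np.linspace(20, 100, 9)               # frequency band of interest [Hz]
s = 1j * 2 * np.pi * f

Hp = 50.0 / (1 + s / (2 * np.pi * 150))   # hypothetical plant [Pa/V]
Hs = 0.05                                 # microphone sensitivity [V/Pa] (assumed)
HF = 1.0                                  # band-pass filter ~ unity in band (assumed)

for K in (1, 50, 200):
    L = Hp * K * HF * Hs                  # open-loop transfer function (OLTF)
    S = 1.0 / (1.0 - L)                   # perturbation-to-pressure sensitivity
    print(f"K = {K:4d}: |p_e/p_p| in band between {np.abs(S).min():.4f} "
          f"and {np.abs(S).max():.4f}")
```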
One coupling box (box 1) contains an acoustic source, and the EA-NES (as described in the previous section) is mounted on one face of the other coupling box (box 2). The volume of the coupling box During the measurement, a target voltage signal e(t) from a generator (not shown in Fig. 3) and a power amplifier (TIRA, BAA120) provide an input current or input voltage signal to the source loudspeaker (depending the selected driving mode: current-or voltage-feedback control mode). The responses of the system are recorded simultaneously (using a multi-channel analyzer/recorder OROS, OR38): the acoustic pressures p tube (t) (at the mid-length of the tube), The lowest resonance around 20 Hz results from the whole coupled system (the tube acts as an acoustic mass coupled to the coupling boxes acting like springs). The next two resonances, around 72 Hz and 87 Hz, as seen in Fig. 5, are respectively assigned to the EA-NES and to the first mode of the tube alone. 1 is V 1 = 27 × 10 -3 m Beyond 100 Hz, the resonance frequencies coming from the highest modes of the tube and the coupling boxes appear. From now on we will focus on a frequency span below 100 Hz. Stability of the feedback loop As a preliminary step, the OLTF, H F (s)H S (s)H p (s)K at s =j2πf , is measured with the EA-NES coupled to the tube of length L 1 with a unity gain (K = 1) (see Fig. 3 Selecting the gain K in the range [0, 200] results in an OLTF with gain margin greater than or equal to 6 dB, the 0 dB gain margin corresponding 260 to K = 400. Gain margins and phase margins with the associated critical frequencies are reported Table 1 for several values of K. Similarly selecting the gain K in the range [-64, 0] results in an OLTF with gain margin from to 0 dB to 36 dB. . Note that, one can also modify the use of the feedback loop in order to amplify p e (t) instead of reducing it (hardening the clamped membrane behaviour). K f Gm (Hz) G m (dB) f Pm (Hz) P m ( • ) L 1 L 2 L 1 L 2 L 1 L 2 L 1 L 2 -1 75 To achieve this goal, the loudspeaker is fed with a negative value for the gain K. This case is equivalent to study the feedback loop with OLTF shifted of 180 degrees, which gives different phase and gain margins reported in Table 1 for K = -1. One can notice that the gain margin for K = 1 is higher than the gain margin with K = -1. It means the feedback loop can soften more the membrane than harden it. The gain K as a tuning parameter of the EA-NES The objective of this section is to verify that the gain K in the feedback loop can be used to tune the modal component associated with the EA-NES without disturbing the modal component associated with the primary system. The influence of the gain K on the behavior of the EA-NES is analyzed after measuring the FRF p 2 /e. Here also the source loudspeaker is driven in voltagefeedback control mode. It is excited with a low-level band-limited white-noise. In terms of modal parameters, these results confirm that the gain K affects simultaneously the resonance frequency and the associated damping ratio of the modal component assigned to the EA-NES. Increasing the gain K reduces the resonance frequency and simultaneously increases the damping ratio. Finally, as expected, negative values of the gain K increase the resonance frequency. NES is able to work as a linear electroacoustic absorber inside the hood with any linear primary system having its resonance frequency in a large frequency range. 
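Because the controller reduces to a pure proportional gain, the margins for any K can be deduced from the single OLTF measurement made with K = 1: scaling K leaves the critical frequencies unchanged and lowers the gain margin by 20 log10 K. The short check below reproduces the gain-margin column of Table 1 for the tube of L 1 length (the 52 dB reference is the value measured with K = 1) and recovers the K close to 400 instability limit quoted in the text.

```python
import numpy as np

# The OLTF is measured once with K = 1; for any other proportional gain the
# gain margin simply decreases by 20*log10(K), the critical frequency staying
# at ~441 Hz (tube L1 case of Table 1).
Gm_at_K1_dB = 52.0                       # measured gain margin for K = 1 [dB]

def gain_margin(K, Gm_ref_dB=Gm_at_K1_dB):
    """Gain margin of the feedback loop for an arbitrary proportional gain K."""
    return Gm_ref_dB - 20 * np.log10(abs(K))

for K in (1, 50, 100, 200, 400):
    print(f"K = {K:4d}: Gm ~ {gain_margin(K):5.1f} dB")
# K = 400 gives ~0 dB, consistent with the instability limit quoted in the text.
```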
Study at high excitation level: efficiency of the EA-NES from TET Now let us look at the behavior of the coupled system under a sinusoidal forcing when the frequency and the amplitude of the sinusoidal forcing vary. Here the source loudspeaker is driven using a current-feedback control mode reducing the dissipation introduced in the system by the source loudspeaker [START_REF] Bortoni | Effects of acoustic damping on current-driven loudspeakers[END_REF]. During the measurement, a target signal e(t) from a generator (TTI TGA1244) (not shown in Fig. 3) and a power amplifier (TIRA, BAA120) provide an input current signal to the source loudspeaker. The target signal e(t) is of the form e(t) = E cos(2πf e t + φ e ) ( 4 ) where f e denotes the excitation frequency and E is the associated excitation amplitude. The phase φ e is introduced arbitrarily by the signal generator. A measurement run consists in making a series of experiments, where the value of the scanning frequency f e is updated for each experiment, while the other parameter E remains unchanged. The duration of an experiment must be limited for practical reasons, but must be long enough to capture the physics of the response. We have chosen a duration of 13 s for each experiment. There In order to characterize the pumping phenomenon, we first describe in details the results obtained with the EA-NES with K = 200 coupled to the tube of L 1 length, followed by an analysis of the influence of K. Then, the experiment involving the tube of L 2 length is considered. EA-NES with the tube of L 1 length and K = 200 The behavior of the system with the tube of L 1 length and K = 200 under sinusoidal excitation, scanned around the first mode of the tube is presented. Concerning the command level of the source loudspeaker, the power amplifier driving the loudspeaker in current-feedback mode provides a nearly constant source loudspeaker current i s (t) as shown in Fig. 9 where the Root Mean Square (RMS) values of i s (t) are plotted versus frequency for some excitation amplitudes. Hence the source loudspeaker is not modified by the system (tube+EA-NES) and it plays its full role as a controlled source. Concerning the system response, the RMS value in the steady state regime is extracted from each measurement of the acoustic pressure p tube (t) and it is plotted in Fig. 10(a peak around f = 83.5 Hz, smaller than the resonance frequency observed at low excitation level. These behaviors were classically reported considering NES analysis [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF] and they can be attributed to the pumping phenomenon and TET from the primary system to the NES. TET signatures are of two types. First, TET occurs only when the primary linear system reaches a certain vibration energy threshold and secondly TET is associated to quasi-periodic response regime with a slow evolution of the amplitudes (also named Strongly Modulated Regimes (SMR) [START_REF] Starosvetsky | Dynamics of a strongly nonlinear vibration absorber coupled to a harmonically excited two-degree-of-freedom system[END_REF]). This point can be confirmed by an analysis of energy conversion. 
To analyze the energy conversion occurring from the fundamental frequency f e , to the harmonic frequencies (kf e for the integer k > 1) and to the non harmonic frequencies (f = kf e ), the Fundamental Conversion Ratio (FCR), the Harmonic Conversion Ratio (HCR) and the Non Harmonic Conversion Ratio (NHCR) are used. The definition are these indicators are recalled in [START_REF] Cote | Experimental evidence of simultaneous multi-resonance noise reduction using an absorber with essential nonlinearity under two excitation frequencies[END_REF]. For each signal, FCR, HCR and NHCR are obtained from Fourier analysis estimated from the steady state response. Basically, the FCR is the proportion of signal energy at the source frequency. For a linear system this indicator should be 100%. The HCR is the proportion of signal at the integer harmonics of the source frequency. It can give information about nonlinear effects like saturation. The NHCR is the proportion of signal frequencies out of the integer harmonics of the source frequency. It can give information about nonlinear effects like a loss of periodicity. 10(d)). This transfer of energy increases with the excitation amplitude but the associated RMS level of the acoustic pressure in the tube remains limited (see Fig. 10(a)). In this domain, the responses of the system are no more periodic and are replaced by so-called SMR. This domain is the domain where the EA-NES works well. Another amplitude-frequency domain which is detached from the previous domain is also visible in Fig. 10(c) at low frequencies and high amplitudes showing that, in this case, energy can be transferred from the fundamental frequency to the high harmonic frequencies. In this domain, the response of the system is periodic but unfortunately the associated RMS level increases, which leads to undesired periodic regimes. This domain is the domain where is EA-NES not efficient for noise reduction and it is characterized by the appearance of undesired periodic regimes. The threshold for the appearance of undesired periodic regimes, here denoted EA-NES leads to a two-to-three decrease (6 to 10 dB) in the acoustic pressure. The influence of the gain K is similar to the influence of the linear stiffness of a classical NES [START_REF] Bellet | Experimental study of targeted energy transfer from an acoustic system to a nonlinear membrane absorber[END_REF]. 4.3. EA-NES with the tube of L 2 length: influence of K The same analysis was also conducted with a tube of length L 2 . A very important property of this EA-NES can be first highlighted: its ability to adapt and tune itself to the resonance frequency of different linear systems. To demonstrate experimentally this property we report in Fig. 16 the ridge lines of the acoustic pressure p tube measured on the system with the tube of L 1 length and on the system with the tube of L 2 length using the the EA-NES with the same gain value K = 50. In both cases, we observe that EA-NES works well and the associated efficiency span ([E s , E S ])are [0.5, 1.1] and [1, 2.2] respectively. However it appears that the EA-NES does not perform noise reduction with the same efficiency for the two tubes. Indeed, with the tube of L 1 length the acoustic pressure in the efficiency span is near 750 Pa whereas is about 1500 Pa for the tube of length L 2 . This difference is due to the fact that the tube of L 2 length has a higher resonance frequency than the tube of L 1 length. 
In consequence, more energy has to be provided to the tube of length L 2 so that resonance capture occurs [START_REF] Bellet | Vers une nouvelle technique de controle passif du bruit : Absorbeur dynamique non lineaire et pompage energetique[END_REF]. A complete excursion as in case of tube of L 1 length was not be possible due to a limitation of the acoustic source performance used in this setup. Finally, the maximum power consumed by the control loudspeaker are displayed in Fig. 18 for several values of the gain K. For all cases, the power consumed by the loudspeaker increases with the excitation amplitude and slowly varies with the gain K for positive values. The maximum power consumed by the loudspeaker is inferior to 1.5 W rms . Thus the EA-NES requires a limited amount energy to work, which is a good asset for industrial applications. Conclusion A new acoustic NES with a controlled acoustic load has been presented. The control of the linear stiffness of the membrane by the help of a feedback loop has been validated experimentally. Furthermore this NES has been tested coupled to two different tubes and has performed acoustic pumping with various efficiencies depending on the gain K. It appears that there is no optimum value of the gain K because it depends on the excitation level. Indeed one can obtain either a low threshold but a short span when f N ES is close to f T , or a high threshold with a large span when f N ES is distant from f T . However, unlike the previous passive NES, the frequency of the EA-NES can be easily modified in real time. Future work is ongoing, focusing on using a gain K variable. It would allow to optimize K according to the excitation in order to get the strongest sound attenuation. We work also on the integration of the EA-NES in a cavity, with a narrow band noise excitation, in the framework of progress towards applications. . It is composed of a clamped circular latex membrane with one face (the front face) exposed to the acoustic field to be reduced and the other one (the rear face) enclosed. The hood includes a feedback loop composed of a microphone, a loudspeaker and a unit control. The feedback loop controls the acoustic pressure inside the hood and seen by the rear face of the membrane. Figure 1 : 1 Figure 1: EA-NES: (a) Schematic representation and (b) Front face. Fig. 1 ( 1 Fig.1(left)). The feedback loop is based on a proportional controller following the block-diagram displayed in Fig.2where H p (the plant transfer) denotes the transfer function between the tension u c (t) applied to the control loudspeaker (voltage-feedback control) and the acoustic pressure p e (t) and H s the transfer function characterizing the microphone. A part of the acoustic pressure p e (t) is due to p p (t) (perturbation term) resulting from the undesirable acoustic field acting on the front face of the clamped membrane. The unit control includes an analogue band-pass filter H F and a controller sets a scalar (real) gain K. dimensions of the enclosure are 0.38 m×0.22 m×0.22 m, which gives V e = 0.018 m 3 . The membrane and the control loudspeaker are fixed respectively on the opposite faces with size 0.22 m × 0.22 m. Note that the pressure p e (t) inside the volume V e is equivalent to a compliance as long as the acoustic wavelength is large in comparison with the largest dimension of the enclosure. With λ > 10 × 0.38 one obtains a maximum frequency of f max = 89.5 Hz. 
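Two quick checks of the lumped hood model can be scripted as follows: the frequency up to which the enclosed air may be treated as a single compliance (the lambda > 10 x 0.38 m criterion), and the equivalent spring that this compliance adds to the membrane through Eq. (2). The speed of sound, air density and effective membrane area are assumptions introduced for the example (the effective membrane area is not quoted in the paper); the effective diaphragm area is the value given for the control loudspeaker.

```python
import numpy as np

# Lumped-compliance checks for the hood (V_e = 0.018 m^3, largest dimension 0.38 m).
# c0, rho_a and the effective membrane area S_m are assumed values.
c0, rho_a = 340.0, 1.2        # speed of sound [m/s], air density [kg/m^3]
V_e, L_max = 0.018, 0.38      # hood volume [m^3] and largest dimension [m]
S_m = 2.6e-3                  # effective membrane area [m^2] (hypothetical)

# Validity limit of the homogeneous-pressure assumption: lambda > 10 * L_max.
f_max = c0 / (10 * L_max)
print(f"f_max = {f_max:.1f} Hz")              # ~89.5 Hz, as quoted in the text

# Air spring added to the membrane by the hood (Eq. (2) with x_ls = 0):
k_air = rho_a * c0**2 * S_m**2 / V_e
print(f"k_air = {k_air:.0f} N/m (stiffens the membrane, scales as 1/V_e)")

# Perfect feedback control (p_e = 0) requires the diaphragm to move in phase
# with the membrane, with x_ls = (S_m / S_ls) * x_m (Eq. (2) set to zero).
S_ls = 0.022                  # effective diaphragm area [m^2] (value given in the text)
print(f"x_ls / x_m for p_e = 0: {S_m / S_ls:.3f}")
```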
Efficient control loudspeakers need the following properties: high resonance frequency of the driver part (such that the resonance frequency of the enclosed loudspeaker is above the working frequency band of the EA-NES), large force factor, linear behavior for large excursion of the diaphragm and effective area of the diaphragm as large as possible compared to the effective area of the latex membrane. Note that the area of the diaphragm is also limited by the size of the boxes. The BEYMA 8P300Fe/N (8 Inch) loudspeaker family was selected corresponding to the following Thiele/Small parameter values: m LS = 19.4 10 -3 kg, c LS = 1.7 Nsm -1 and k LS = 2834.9 Nm -1 , S e LS = 0.022 m 2 , R e = 6.6 Ω and Bl = 9.21 NA -1 . The rear enclosure of the loudspeaker is chosen as V LS = 0.0248 m 3 . The effective diaphragm area is six time larger than the effective latex membrane, so for the same volume variation the displacement of the diaphragm is reduced to the same extent with respect to the displacement of the membrane (see Eq. (2)), which should happen in case of a perfect control. The resonance frequency of the driver part alone of the control loudspeaker is 60.8 Hz. It increases to 95 Hz when V e and V LS are taken into account. Finally the other elements of the control unit are a G.R.A.S 40BH microphone chosen for its high dynamics (until 181 dB SPL), an analogic band-pass filter KEMO Benchmaster VBF8, and a TIRA BAA 120 amplifier used to set the gain K. 3 Figure 3 : 33 Figure 3: Experimental set-up under study: (a) Picture and (b) Schema (the blue lines define the configuration used to measure the OLTF and the same notations as in Fig. 2 were used). p 2 2 (t) (inside the box 2) and p e (t) (inside the hood of the EA-NES) with three microphones (GRAS, 40BH), (see Fig.3, right). Also recorded (not shown on the schema) are the source loudspeaker current i s (t) and voltage e s (t) responses and the control loudspeaker current i c (t) and voltage responses e c (t). The sampling frequency is f s = 8192 Hz. Figure 4 4 Figure 4 shows the Frequency Response Function (FRF) denoted p 2 /e between the voltage applied to the source loudspeaker e(t) (in voltage-feedback Figure 4 : 4 Figure 4: FRF p 2 /e measured with the EA-NES with K = 0 coupled to the tube of L 1 length: (a) Modulus and (b) Phase. Figure 5 5 Figure 5 shows the FRF p tube /e and p 2 /e measured with the tube of L 1 length and with the tube of L 2 length and imposing in both cases the control gain K = 0. For the two tubes, two resonance peaks appear in the frequency band [40, 120] Hz. In both cases (L 1 and L 2 lengths), the first resonance peak is localized around the frequency 70 Hz and can be attributed to the EA-NES.The second resonance peak (around the frequency 87 Hz for the tube of L 1 length and around the frequency 99 Hz for the tube of L 2 length) is primarily attributed to the tube (in accordance with L 1 < L 2 ). In both case, they exhibit high response levels. Figure 5 : 5 Figure 5: FRFs (a,c) p tube /e and (b,e) p 2 /e measured with the EA-NES with K = 0 coupled to the tube of L 1 (blue curves) and L 2 (red dashed curves) length: (a,b) Modulus and (c,d) Phase. 2 -Figure 6 : 26 Figure 6: Open loop transfer function in the Nysquist domain measured with the EA-NES with unity gain (K = 0) coupled to the tube of L 1 (red dashed curves) and L 2 (blue curves) length. Figure 7 7 Figure7shows the modulus of the FRF p 2 /e with the tube of L 1 length for several values of the gain K. 
Also reported is the FRF obtained by replacing the EA-NES with the clamped membrane NES alone (no hood). First of all, we Figure 7 : 7 Figure 7: FRF p 2 /e measured with the EA-NES for several values of the gain K and with a clamped membrane NES (red continuous line) coupled to the tube of L 1 length. Figure 8 : 8 Figure 8: FRF p 2 /e measured with the EA-NES for several values of the gain K coupled to the tube of L 2 length. are two steps in an experiment. The first step lasts 3 s with no source signal. It permits us to get null initial conditions, whatever happened before. The second step lasts 10 s with the source on, but we record only the last 7 s, the first 3 s include generally the transient effects of the excitation. A measurement test consists in making a series of runs where the value of the amplitude E is updated for each run. Six tests were performed with the tube of L 1 length with the following six values of the gain K: -40, 0, 50, 100, 200 and 400. Five tests were performed with the tube of L 2 length with the following five values of the gain K: -40, 0, 50, 65 and 100. The frequency step δf = 0.25 Hz was used to define runs in the frequency band [80, 93] Hz for the tube of L 1 length and [90, 105] Hz for the tube of L 2 length. The step in the amplitude band [0.01414, E max ] is equal to 0.07 with E max = 2.9 for the tube of L 1 length and E max = 4 for the tube of L 2 length. Figure 9 : 9 Figure 9: RMS values of the source loudspeaker current is(t) measured with the EA-NES with K = 200 coupled to the tube of L 1 length for several values of the excitation amplitude E versus excitation frequency. Figure 10 : 10 Figure 10: System with the EA-NES with K = 200 coupled to the tube of L 1 length: (a) RMS values, (b) FCR, (c) HCR and (d) NHCR of the steady state regime of p tube as a surface level according to frequency and excitation amplitude. Fig. 10(b)). The corresponding response regimes are periodic resulting from the linear behavior of the EA-NES. By increasing the excitation amplitude, an amplitude-frequency domain appears, characterized by steady state responses, a fraction of energy of which is transferred from the fundamental frequency to the non harmonic frequency domain (see Fig. 10(d)). This transfer of energy Figure 11 : 11 Figure 11: System with the EA-NES with K = 200 coupled to the tube of L 1 length: Steady state responses of (a) p tube and (b) pe versus time and (c) parametric plot (p tube , pe) obtained with the excitation frequency fe = 86.75 and excitation amplitude E = 0.29. Figure 12 : 12 Figure 12: Idem as Fig. 11 with the excitation frequency fe = 86.75 and excitation amplitude E = 1.5. Figure 13 :Figure 14 : 1314 Figure 13: Idem as Fig. 11 with the excitation frequency fe = 83.5 and excitation amplitude E = 2.24. Figure 16 : 16 Figure 16: System with the EA-NES with the gain K = 50 coupled to the tube of (a) L 1 and (b) L 2 length: Ridge line (blue line) of the acoustic pressure p tube and corresponding resonance frequencies (red with bullets line, y-axis on right side) versus the excitation amplitude. Figure 17 : 17 Figure 17: System with the EA-NES with several values of gain K coupled to the tube of L 2 length: Ridge line of the RMS values of p tube according to the level of excitation. Figure 17 ) 17 Figure 17) shows the ridge lines of the acoustic pressure p tube obtained with five values of gain K (= -40, 0, 50, 65 and 100). 
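The post-processing chain described above (steady-state extraction, RMS level, and the FCR/HCR/NHCR indicators defined earlier) can be sketched in a few lines. The protocol parameters (frequency grid, amplitude grid, 7 s steady-state window, sampling at 8192 Hz) are those quoted for the L 1 tube tests; the pressure signal used here is a synthetic placeholder, and the 0.5 Hz tolerance placed around each harmonic line is an arbitrary implementation choice.

```python
import numpy as np

# Sketch of one measurement test and of its post-processing.  Protocol values
# follow the L1-tube tests; the response signal is a synthetic placeholder.
fs = 8192.0
f_grid = np.arange(80.0, 93.0 + 1e-9, 0.25)       # scanned excitation frequencies [Hz]
E_grid = np.arange(0.01414, 2.9, 0.07)            # excitation amplitudes
t = np.arange(0, 7.0, 1.0 / fs)                   # recorded 7 s steady-state window
print(f"{E_grid.size} amplitudes x {f_grid.size} frequencies per test")

def synthetic_p_tube(fe, E):
    """Placeholder response: fundamental + weak 3rd harmonic + non-harmonic part."""
    return E * (30 * np.cos(2 * np.pi * fe * t) + 5 * np.cos(2 * np.pi * 3 * fe * t)
                + 3 * np.cos(2 * np.pi * (fe - 15.3) * t))

def rms(x):
    return np.sqrt(np.mean(x**2))

def conversion_ratios(x, fe, df=0.5):
    """FCR / HCR / NHCR from the energy spectrum of the steady-state signal."""
    P = np.abs(np.fft.rfft(x))**2
    f = np.fft.rfftfreq(x.size, 1.0 / fs)
    band = lambda k: P[np.abs(f - k * fe) <= df].sum()
    fcr = band(1) / P.sum()
    hcr = sum(band(k) for k in range(2, int(f[-1] // fe) + 1)) / P.sum()
    return fcr, hcr, 1.0 - fcr - hcr

p = synthetic_p_tube(fe=86.75, E=1.5)
print(f"RMS = {rms(p):.1f} Pa,  FCR/HCR/NHCR = "
      + " / ".join(f"{100*r:.1f}%" for r in conversion_ratios(p, 86.75)))
```

Looping the two routines over the amplitude and frequency grids yields surfaces analogous to those of Fig. 10; a high NHCR band at moderate levels flags the Strongly Modulated Regimes, while a growing HCR at high levels flags the undesired periodic regimes.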
The results are similar to those observed with the tube of length L 1 for the configurations of the EA-NES with K = -40, 0, and 50, where both thresholds E s and E S are visible. For the configurations with K > 50, only the threshold E s is reached. Figure 18: System with the EA-NES with several values of gain K coupled to the tube of L 2 length: Maximum value of power consumed by the control loudspeaker depending on the level of excitation. Table 1: Gain G m and phase P m margins and associated frequencies f Gm and f Pm of the feedback loop measured with the EA-NES for several values of the gain K coupled to the tubes of L 1 and L 2 length.
K      f_Gm (Hz) L1 / L2    G_m (dB) L1 / L2    f_Pm (Hz) L1 / L2    P_m (deg) L1 / L2
-1     75    / 76.25        36.1 / 36.9         -     / -            360 / 360
1      441   / 436          52.0 / 52.7         -     / -            360 / 360
50     441   / 436          18.0 / 18.7         -     / -            360 / 360
100    441   / 436          12.0 / 12.7         131.9 / 133          93  / 102
200    441   / 436          6.0  / 6.7          207   / 203          45  / 40
The threshold E S is defined as the excitation amplitude at which the acoustic pressure starts to increase again, corresponding to an abrupt change of the resonance frequency towards a smaller value. As shown in Fig. 7, E Acknowledgment The first author acknowledges DGA-France for the financial support.
38,782
[ "8321", "1447" ]
[ "136844", "136844", "136844" ]
01753246
en
[ "chim", "spi" ]
2024/03/05 22:32:10
2014
https://hal.science/hal-01753246/file/LeMoigne-ECCM-Seville-2014.pdf
Martial Sauceau Nicolas Le Moigne email: nicolas.le-moigne@mines-ales.fr Mohamed Benyakhlef Rabeb Jemai Jean-Charles Benezet José-Marie Lopez-Cuesta Élisabeth Rodier Jacques Fages Jean-Charles Bénézet Elisabeth Rodier Processing and characterization of PHBV/clay nano-biocomposite foams by supercritical CO2 assisted extrusion Keywords: polyhydroxyalkanoates, nanocomposite, foam, supercritical fluid Introduction Bio-based polymers like polyhydroxyalkanoates (PHAs) are marketed as eco-friendly alternatives to the currently widespread non-degradable oil-based thermoplastics, due to their natural and renewable origin, their biodegradability and biocompatibility. Poly 3hydroxybutyrate (PHB) properties are similar to various synthetic thermoplastics like polypropylene and hence it can be used alternatively in several applications, especially for agriculture, packaging but also biomedicine where biodegradability and biocompatibility are of great interest. However, some drawbacks have prevented its introduction to the market as an effective alternative to the oil-based thermoplastics. PHB is indeed brittle and presents a slow crystallization rate and a poor thermal stability which makes it difficult to process [START_REF] Bordes | Nano-biocomposites: biodegradable polyester/nanoclay systems[END_REF][START_REF] Cabedo | Studying the degradation of polyhydroxybutyratecovalerate during processing with clay-based nanofillers[END_REF]. In order to improve the PHB properties, several kinds of PHAs copolymers have been described in the literature such as the Poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) with various hydroxyvalerate (HV) contents and molecular weights, which present better mechanical properties, lower melting temperature and an extended processing window [START_REF] Bordes | Effect of clay organomodifiers on degradation of polyhydroxyalkanoates[END_REF]. Improved properties can be also obtained by the addition of nanoparticles of layered silicates such as clays. Indeed, clay minerals present high aspect ratio and specific surface, and can be dispersed in small amounts in polymer matrices to prepare nanocomposites with improved thermal stability, mechanical properties or barrier properties [START_REF] Ray | Polymer/layered silicate nanocomposites: a review from preparation to processing[END_REF]. One of the key parameters is the clay dispersion that can be controlled by the elaboration route either the solvent intercalation, the in-situ intercalation or the melt intercalation; the latter being preferred for a sustainable development since it limits the use of organic solvents [START_REF] Bordes | Nano-biocomposites: biodegradable polyester/nanoclay systems[END_REF]. The organomodifiers inserted in clays interlayer spaces to improve polymer / clay affinity and the polymer chains intercalation have a strong influence on the dispersion but it has been also shown to catalyse the PHBV degradation during processing [START_REF] Bordes | Effect of clay organomodifiers on degradation of polyhydroxyalkanoates[END_REF][START_REF] Hablot | Thermal and thermomechanical degradation of PHB-based multiphase systems[END_REF]. The use of supercritical fluids have recently appeared as an innovative way to improve clay dispersion leading new "clean and environment friendly" processes. 
Supercritical carbon dioxyde (sc-CO 2 ) has favorable interaction with polymers and it has the ability to dissolve in large quantities and to act as a plasticizer, which modify drastically polymer properties (viscosity, interfacial tension, …). In addition, the dissolved sc-CO 2 can acts as a foaming agent during processing. It is therefore possible to control pore generation and growth by controlling the operating conditions [START_REF] Nikitine | Controlling the structure of a porous polymer by coupling supercritical CO 2 and single screw extrusion process[END_REF][START_REF] Sauceau | New challenges in polymer foaming: a review of extrusion processes assisted by supercritical carbon dioxide[END_REF], and to generate low density porous structures of interest for the lightening of packaging or the storage of active ingredients, e.g. for drug release applications. All these features make sc-CO 2 also able to modify the nanoparticles dispersion inside polymer matrices, which in turn has an effect on the foam structure. Improved dispersion of clays and modified porous structures of synthetic polymers have been obtained with sc-CO 2 , mainly in batch processes via the in-situ intercalation method. Only few studies have reported on the preparation of nanocomposites systems by sc-CO2 assisted continuous processes [START_REF] Sauceau | New challenges in polymer foaming: a review of extrusion processes assisted by supercritical carbon dioxide[END_REF], more easily adaptable for an industrial scale-up. Zhao et al. [START_REF] Zhao | Processing and characterization of solid and microcellular poly(lactic acid)/polyhydroxybutyrate-valerate (PLA/PHBV) blends and PLA/PHBV/ Clay nanocomposites[END_REF][START_REF] Zhao | Morphology and Properties of Injection Molded Solid and Microcellular Polylactic Acid/Polyhydroxybutyrate-Valerate (PLA/ PHBV) Blends[END_REF] have recently investigated the possibility to use a supercritical N 2 assisted injection molding process to develop microcellular PLA/PHBV clay nano-biocomposites. The results showed a decrease of the average cell size and an increased cell density with the addition of clays in PLA/PHBV blends. Rheological behaviour of the PLA/PHBV/clays nanocomposites suggested good dispersion of the clays within the matrix. In this study, we developed a continuous sc-CO 2 assisted extrusion process to prepare PHBV/clays nano-biocomposite foams by two methods: a one-step method based on the direct foaming of physical PHBV / clays mixtures, and a two-step method based on the foaming of PHBV / clays mixtures prepared beforehand by twin-screw extrusion. The structures obtained are characterized in terms of clay dispersion, matrix crystallization, porosity and pore size distribution and density, and discussed as regard to the processing conditions such as temperature, shearing/pressure, CO 2 mass fraction. Materials and methods Materials PHBV with a HV content of 13 wt%, nucleated with boron nitride and plasticized with 10 % of a copolyester was purchased from Biomer (Germany). The weight-average molecular weight is 600 kDa. The clay used is an organo-modified montmorillonite (MMT), Cloisite C30B (C30B), produced by Southern Clay Products, Inc. (USA). To limit hydrolysis of PHBV upon processing, C30B and PHBV were dried at 80°C before use. 
Preparation of PHBV / C30B extruded mixtures and physical mixtures PHBV based nanocomposites containing 2.5% w/w C30B and PHBV based masterbatches containing 10%, 20% w/w C30B were prepared by melt intercalation using a co-rotating twin-screw extruder BC21 (Clextral, France) having a L/D (length to diameter ratio) of 48. A parabolic temperature profile not exceeding 165°C was used to limit thermal degradation of PHBV. The mixing and the dispersion of the C30B clays within the PHBV matrix were ensured by two kneading sections. Extrudates were water-cooled at the exit of the die, and dried overnight at 50°C under vacuum. About 2.5 kg of granules were collected for each batch. All the batches were moulded by injection with a Krauss Maffei KM-50-180-CX into test specimens. The barrel to die temperature profile was 40 to 165°C. PHBV based masterbatches containing 10%, 20% w/w C30B were diluted to 2.5 % w/w C30B in the injection molding machine to analyze the effect of the dilution on the nanocomposite structures and properties. As will be shown in the following, this dilution procedure was also used to produce PHBV / 2.5% nanocomposite foams by sc-CO 2 assisted extrusion. In addition to extruded mixtures, physical mixtures of PHBV pellets coated with 2.5% w/w of C30B were prepared with a simple manual batch mixing by placing a mixture of both components in a stainless steel rotative drum for 10 minutes (Faraday cage linked to Keithley 6514 electrometer). These physical mixtures of PHBV pellets / 2.5% C30B were then also foamed by sc-CO 2 assisted extrusion. Foaming by sc-CO 2 assisted extrusion Figure 1 shows the experimental set up, which has previously been described elsewhere [START_REF] Nikitine | Controlling the structure of a porous polymer by coupling supercritical CO 2 and single screw extrusion process[END_REF][START_REF] Kamar | Biopolymer foam production using a (sc-CO 2 ) assisted extrusion process[END_REF]. The single-screw extruder (Rheoscam, SCAMEX) has a screw diameter of 30 mm and a length to diameter ratio (L/D) of 37. It is equipped with four static mixer elements (SMB-H 17/4, Sulzer, Switzerland). Sensors allow measuring the temperature and the pressure of the polymer during the extrusion process. CO 2 (N45, Air liquide) is pumped from a cylinder by a syringe pump (260D, ISCO, USA) and then introduced at constant volumetric flow rate. The pressure, the temperature and the volumetric sc-CO 2 flow rate are measured within the syringe pump. Sc-CO 2 density, obtained with the equation of state established by Span and Wagner [START_REF] Span | A new equation of state for carbon dioxide covering the fluid region from the triplepoint temperature to 1100 K at pressures up to 800 MPa[END_REF], is used to calculate mass flow rate and thus the sc-CO 2 mass fraction w CO2 . Once steady state conditions are reached with the chosen operating conditions, extrudates are collected and water-cooled at ambient temperature. Several samples were collected during each experiment in order to check the homogeneity of the extrudates. Experimental conditions chosen after preliminary trials (not described here) are summarized in Table 1. The lowest possible screw speed was selected to increase the residence time of the mixtures, and thus the mixing time. For all experiments, T a , T b and T c were fixed at 160°C to ensure the melting of the PHBV matrix while limiting thermal degradation. Moreover, the polymer temperature was reduced at the exit of the die in order to favour foaming. 
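The CO2 mass-fraction bookkeeping described above can be reproduced with a few lines, using the CoolProp implementation of the Span and Wagner equation of state for the pump-side density. The pump temperature and pressure, the volumetric CO2 flow rate and the polymer throughput below are illustrative values only, not the operating points of Table 1; w_CO2 then simply follows from the ratio of the CO2 mass flow to the total mass flow.

```python
from CoolProp.CoolProp import PropsSI   # CO2 density from the Span & Wagner EoS

# Illustrative sc-CO2 mass-fraction calculation (all operating values assumed).
T_pump = 308.15            # syringe-pump temperature [K]
P_pump = 15.0e6            # syringe-pump pressure [Pa]
Q_co2 = 0.4e-6 / 60.0      # constant volumetric flow rate, 0.4 mL/min -> [m^3/s]
m_dot_poly = 1.0 / 3600.0  # polymer throughput, 1 kg/h -> [kg/s]

rho_co2 = PropsSI("D", "T", T_pump, "P", P_pump, "CO2")   # [kg/m^3]
m_dot_co2 = rho_co2 * Q_co2                               # [kg/s]
w_co2 = m_dot_co2 / (m_dot_co2 + m_dot_poly)
print(f"rho_CO2 = {rho_co2:.0f} kg/m^3 -> w_CO2 = {100 * w_co2:.1f} %")
```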
T d and T e were fixed at 140°C and T f not higher than 140°C. Series Structural characterizations The structures of nanocomposites and masterbatches were observed with a Quanta 200 FEG (FEI Company) electron microscope in transmission mode (STEM). Ultra-thin specimens of 70 nm thicknesses were cut from the middle section of moulded specimens and deposited on Cu grids. The wide angle X-Ray diffraction (WAXD) were performed using an AXS D8 Advance diffractometer (Bruker, Germany) equipped with a Cu cathode (λ = 1.5405 Å). The interlayer distance d 001 of the C30B clays were determined from the (001) diffraction peak using Bragg's law. Porosity ε is defined as the ratio of void volume to the total volume of the sample and can be calculated by the Equation ( 1): ε = 1 -ρ app / ρ p (1) Where ρ app is the apparent density calculated from the weight of the samples and their volumes evaluated by measuring their diameter and length with a vernier (Facom, France). ρ p is the solid polymer density, determined by helium pycnometry (Micromeretics, AccuPYC 1330), which is about 1.216. The fracture surfaces of foamed extrudates were sputter coated with gold and observed using an Environmental Scanning Electronic Microscope XL30 ESEM FEG (Philips, Netherlands). On the basis of the images obtained, a mean diameter D cell is determined and a cell density N cell per volume unit of unfoamed sample were calculated according to Equation (2): N cell = (N 3/2 / A) × (ρ p / ρ app ) (2) N cell is the number of cells on an SEM image, A the area of the image, ρ app the apparent density of the foam and ρ p the solid polymer density. Results and discussion Structure and rheology of extruded PHBV / C30B nanocomposites and masterbatches The structures of the PHBV / C30B nanocomposites and masterbatches were investigated by two complementary methods, WAXD and STEM. The WAXD measurements allowed to determine the interlayer distance of the C30B clays within the PHBV matrix. The STEM gave a direct visualization of the clay dispersion within the matrix. As shown on Figure 2a, WAXD patterns showed a diffraction peak at 2θ = 4.6° for C30B clays, which corresponds to an interlayer distance of 18 Å. Two diffraction peaks were detected on the patterns of the PHBV / 2.5, 10 and 20 wt% C30B mixtures. For the PHBV / 10% and 20% C30B masterbatches, the peak at d 001 distance of 17 -18 Å corresponding to the initial interlayer distance of C30B, suggests that a part of the clays is still aggregated. The second peak observed at d 001 distance of 38 Å, 36.1 Å, 34 Å for PHBV / C30B containing 2.5, 10 and 20 wt% C30B, respectively, indicates an important intercalation of the polymer chains within the interlayer space of the clays. Similar results were found by Choi et al. [START_REF] Choi | Preparation and characterization of poly(hydroxybutyrate-co-hydroxyvalerate)-organoclay nanocomposites[END_REF] and Bordes et al. [START_REF] Bordes | Structure and properties of PHA/clay nano-biocomposites prepared by melt intercalation[END_REF] for PHBV / 2 -3% C30B prepared by melt intercalation. The intensity of the peak at 38 Å for the PHBV / 2.5% C30B is particularly weak suggesting a possible exfoliation of the clays. Moreover, masterbatches have been diluted in a single-screw injection press to analyse the effect on the clay dispersion. The WAXD patterns are very similar for the PHBV / 2.5% C30B nanocomposite and the PHBV / 2.5% C30B nanocomposites diluted from the masterbatches. 
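The three characterization quantities used above lend themselves to a short script: Bragg's law for the clay interlayer distance, Eq. (1) for the porosity and Eq. (2) for the cell density. The sample mass, dimensions and SEM cell count below are invented values used only to exercise the formulas; the Cu wavelength and the solid polymer density are the values quoted in the text, and the small difference between the computed d 001 and the quoted 18 Å simply reflects the rounding of the peak position.

```python
import numpy as np

# Characterization quantities of the foamed nanocomposites (illustrative inputs).
lam = 1.5405e-10                  # Cu K-alpha wavelength [m]
rho_p = 1.216e3                   # solid polymer density [kg/m^3] (He pycnometry)

# Bragg's law: interlayer distance from the (001) diffraction angle 2*theta.
def d001(two_theta_deg):
    return lam / (2 * np.sin(np.radians(two_theta_deg) / 2))

print(f"2theta = 4.6 deg -> d001 = {d001(4.6) * 1e10:.1f} A")

# Eq. (1): porosity from the apparent density (mass / cylinder volume).
m, D, L = 8.0e-5, 3.0e-3, 20.0e-3             # hypothetical extrudate sample [kg, m, m]
rho_app = m / (np.pi * (D / 2)**2 * L)
eps = 1.0 - rho_app / rho_p
print(f"rho_app = {rho_app:.0f} kg/m^3 -> porosity = {100 * eps:.0f} %")

# Eq. (2) as written in the text: cell density per unit volume of unfoamed sample.
N, A = 120, (500e-6)**2                       # cells counted on a 500 x 500 um image
N_cell = (N**1.5 / A) * (rho_p / rho_app)
print(f"N_cell = {N_cell:.2e} cells per unit volume")
```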
These results were confirmed by the STEM pictures, on which mostly intercalated and possibly exfoliated layered structures are observed for both PHBV / 2.5% C30B (Figure 2b) and PHBV / 2.5% C30B diluted from the 20% C30B masterbatch (Figure 2c). The rheological behaviour of extruded PHBV and PHBV / C30B nanocomposites was compared to unprocessed PHBV. The zero-shear rate viscosity of the extruded PHBV was strongly decreased after the extrusion process. This can be directly related to a decrease of the average molecular weight of the PHBV chains due to thermal degradation and shearing upon extrusion. When adding 2.5% C30B, a strong shear thinning behaviour was observed at high pulsations, highlighting the catalytic degradation effect of the clays on the PHBV matrix. The rheological behaviour at high pulsations is indeed known to be mainly determined by the macromolecular structure of the polymer matrix with little influence of the clays. When diluting the 20% C30B masterbatch to 2.5% C30B, it was interesting to observe that at high pulsations, the viscosity of the unprocessed PHBV was recovered due to the input of unprocessed and hence non-degraded PHBV in the mixture. This preliminary study on the nanostructure and the rheological behaviour of the PHBV / 2.5% C30B nanocomposites and those produced from the masterbatches allowed to demonstrate that the dilution of the masterbatches to lower clay contents in a single-screw apparatus is a good approach to prepare PHBV / C30B nanocomposites with good dispersion and limited degradation. The good dispersion capacity of the clays is attributed to the physico-chemical interactions between PHBV and C30B that originates from strong hydrogen bonding between the ester carbonyl groups of PHBV and the hydroxyl groups in the interlayer space of C30B [START_REF] Choi | Preparation and characterization of poly(hydroxybutyrate-co-hydroxyvalerate)-organoclay nanocomposites[END_REF]. In the following, this dilution procedure has thus been used to prepare PHBV / 2.5% C30B nanocomposites foams, i.e. PHBV / 20% C30B was diluted to 2.5% C30B with unprocessed PHBV during the sc-CO2 assisted single-screw extrusion. The obtained foams were compared to PHBV / 2.5% C30B foams based on physical mixtures. Processing and characterization of the PHBV / clays nano-biocomposite foams 3.2.1. Effect of the sc-CO 2 mass fraction on the PHBV and PHBV / 2.5% C30B foams The effect of the sc-CO 2 mass fraction on the porosity ε is represented on Figure 3a. In all cases, porosity decreases with increasing sc-CO 2 mass fraction, which is rather astonishing since more sc-CO 2 is theoretically available for nucleation. This could be explained by a faster cooling of the extrudates, as sc-CO 2 pressure drop is endothermic. The fast cooling thus increases the stiffness of the polymer which limits the growth of the cells and hence the expansion and the porosity. A part of the sc-CO 2 could also be in excess [START_REF] Han | Continuous microcellular polystyrene foam extrusion with supercritical CO 2[END_REF] due to the formation of PHBV crystals upon cooling that limits the diffusion. This excess of sc-CO 2 diffuses to the walls of the extruder and does not participate to cell nucleation. This also decreases the processing pressures and temperatures, which in turn limits the expansion. SEM pictures of the extrudates of neat PHBV and physical and extruded PHBV / 2.5% C30B mixtures are represented on Figure 4. 
When sc-CO 2 mass fraction increases, pores become smaller, more numerous and regular. These observations are illustrated by the evolution of the cell density and the mean cell diameter on Figure 3b andc. It confirms that the presence of more sc-CO 2 increases nucleation (higher cell density) but accelerates the cooling and thus limits the growth and the coalescence of the pores (lower cell diameter). The sc-CO 2 mass fraction must thus be optimized to promote nucleation while keeping enough growth and expansion. Generally, the PHBV and PHBV / 2.5% C30B foam structure dependency to sc-CO 2 mass fraction can be summarized as follows: (i) Low sc-CO 2 mass fractions induce higher processing pressures and temperatures that promote higher pressure drop, extensive growth and higher porosity, but also less nucleation and poor homogeneity due to the coalescence of the pores. (ii) High sc-CO 2 mass fractions promote nucleation and homogeneity, but also induce lower processing pressures and temperatures, that limit the growth and porosity due to the increased stiffness of the frozen polymer. Clays / Foaming interrelationships and effect on the nanocomposite foams structure As shown on Figure 4, the clay particles are inhomogeneously dispersed in the extrudates based on the physical PHBV / 2.5% C30B mixtures (series 3). Several aggregates of 10 µm to 350 µm are observed whatever the sc-CO 2 mass fraction. The effect of the sc-CO 2 on the properties of the polymer matrix and the presence of the static mixer were thus not sufficient to induce the intercalation of the polymer chains within the interlayer space of the C30B clays and their dispersion during single-screw extrusion. The static mixer indeed enhances the distributive mixing of the sc-CO 2 and the clays but only have little dispersive efficiency. Concerning the extruded PHBV / 2.5% C30B mixtures (series 4), a very good dispersion of the C30B is observed with no visible aggregates (Figure 4). This supports that the prior preparation of a PHBV / 20% C30B masterbatch and its further dilution to 2.5% C30B during the sc-CO 2 assisted extrusion process is a necessary step to obtain a good dispersion. WAXD patterns (not shown) revealed that the position of the intercalation peak at 36.2° for series 4 remains unchanged by the foaming process whatever the sc-CO 2 mass fraction, meaning that no significant enhanced intercalation occurs with the plasticization of the matrix induced by the sc-CO 2 . A possible improvement of the dispersion capacity of the clays could be obtained by improving their 'CO 2 -philicity' with a surfactant bearing together hydroxyl groups to conserve a good affinity with PHBV and a 'CO 2 -philic' carbonyl group. Good dispersion of the clays has been shown to favour homogeneous nucleation, limit the coalescence, and hence give porous structures with higher cell density [START_REF] Ngo | Processing of nanocomposite foams in supercritical carbon dioxide. Part I: Effect of surfactant[END_REF][START_REF] Zeng | Polymer-Clay Nanocomposite Foams Prepared Using Carbon Dioxide[END_REF]. As shown on Figures 3 and4 at low sc-CO 2 mass fraction (w CO2 < 1.5%), extruded PHBV / 2.5% C30B mixtures based foams indeed appear less heterogeneous with rather smaller cell diameter as compared to neat PHBV foams, while showing equivalent porosity. At high sc-CO 2 mass fraction (w CO2 .> 3.5%), the presence of the clays decreases significantly the porosity. 
The diffusion of the sc-CO 2 within the PHBV matrix may be slightly hampered by the C30B clays, which are known to be barrier to gas and fluid in polymeric materials [START_REF] Ray | Biodegradable polymers and their layered silicate nanocomposites: In greening the 21 st century materials world[END_REF]. Consequently, the excess of sc-CO 2 diffuses to the walls of the extruder and accelerates the cooling of the extrudate, limiting the growth of the pores and the expansion of the foams. Conclusions A continuous sc-CO 2 assisted extrusion process has been developed to prepare PHBV/clays nano-biocomposite foams. The prior preparation of a PHBV / 20% C30B masterbatch and its further dilution during the sc-CO 2 assisted single-screw extrusion process are necessary steps to obtain good clay dispersion and limited PHBV degradation. By controlling the sc-CO 2 mass fraction in a narrow window, good clay dispersion appears to favour homogeneous nucleation while limiting the coalescence, and hence allows to obtain PHBV/clays nanobiocomposite foams with better homogeneity and porosity higher than 50 %. Figure 1 . 1 Figure 1. Experimental device used for the foaming by sc-CO 2 assisted single-screw extrusion Figure 2 . 2 Figure 2. WAXD patterns (a) and STEM pictures of PHBV / 2.5% C30B (b), and PHBV / 2.5% C30B diluted from the 20% C30B masterbatch (c). Figure 3 . 3 Figure 3. Evolution of (a) the porosity ε, (b) the cell density N cell and (c) the mean diameter D cell as a function of the CO 2 mass fraction for foams series 2, 3 and 4. Figure 4 . 4 Figure 4. SEM pictures for foams series 2, 3 and 4 at different sc-CO 2 mass fraction. Table 1 . 1 Experimental conditions used for the foaming by sc-CO2 assisted single-screw extrusion Material Screw speed (rpm) T f (°C) w CO2 (mass %) Static mixer Die length (mm) Die diameter (mm) 2 Neat PHBV 30 140 0 to 4 no 20 1 3 PHBV / 2.5% C30B (physical mixture) 40 140 0 to 3 yes 5 0.5 4 PHBV / 2.5% C30B (extruded mixture) 55 140 0 to 4 yes 20 1
22,705
[ "21464", "19511", "746339", "789292", "3639" ]
[ "397287", "242220", "216607", "242220", "397287", "216607", "242220", "242220" ]
01755086
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://inria.hal.science/hal-01755086/file/ECC18_Emilia_A.pdf
On hyper-exponential output-feedback stabilization of a double integrator by using artificial delay The problem of output-feedback stabilization of a double integrator is revisited with the objective of achieving the rates of convergence faster than exponential. It is assumed that only position is available for measurements, and the designed feedback is based on the output and its delayed values without an estimation of velocity. It is shown that by selecting the closedloop system to be homogeneous with negative or positive degree it is possible to accelerate the rate of convergence in the system at the price of a small steady-state error. Efficiency of the proposed control is demonstrated in simulations. I. INTRODUCTION The design of regulators for dynamical systems is a fundamental and complex problem studied in the control theory. An important feature of different existing methods for control synthesis is the achievable quality of transients and robustness against exogenous perturbations and noises. Very frequently the design methods are oriented on various canonical models, and the linear ones are the most popular. Then the double integrator is a conventional benchmark system, since the tools designed for it can be easily extended to other more generic models. If non-asymptotic rates of convergence (i.e. finite-time or fixed time [START_REF] Bernuau | On homogeneity and its application in sliding mode[END_REF]) are needed in the closed-loop system, then usually homogeneous systems come to the attention as canonical dynamics, which include linear models as a subclass. The theory of homogeneous dynamical systems is welldeveloped for continuous time-invariant differential equations [START_REF] Bacciotti | Liapunov Functions and Stability in Control Theory[END_REF], [START_REF] Bhat | Geometric homogeneity with applications to finite-time stability[END_REF], [START_REF] Kawski | Progress in systems and control theory: New trends in systems theory[END_REF], [START_REF] Zubov | On systems of ordinary differential equations with generalized homogenous right-hand sides[END_REF] or time-delay systems [START_REF] Efimov | Development of homogeneity concept for time-delay systems[END_REF], [START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF] (applications of the conventional homogeneity theory to analysis of timedelay systems considering delay as a kind of perturbation have been considered in [START_REF] Aleksandrov | On the asymptotic stability of solutions of nonlinear systems with delay[END_REF], [START_REF] Asl | Analytical solution of a system of homogeneous delay differential equations via the Lambert function[END_REF], [START_REF] Bokharaie | D-stability and delayindependent stability of homogeneous cooperative systems[END_REF], [START_REF] Diblik | Asymptotic equilibrium for homogeneous delay linear differential equations with l-perturbation term[END_REF]). The main feature of a homogeneous system (described by ordinary differential equation) is that its local behavior of trajectories is the same as global (local attractiveness implies global asymptotic stability, for example [START_REF] Bernuau | On homogeneity and its application in sliding mode[END_REF]), while for time-delay homogeneous D. 
Efimov systems the independent on delay (IOD) stability follows [START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF], with certain robustness to exogenous inputs in both cases. The rate of convergence for homogeneous ordinary differential equations is related with degree of homogeneity [START_REF] Bernuau | On homogeneity and its application in sliding mode[END_REF], but for time-delay systems the links are not so straightforward [START_REF] Efimov | Comments on finite-time stability of time-delay systems[END_REF]. In addition, the homogeneous stable/unstable systems admit homogeneous Lyapunov functions [START_REF] Zubov | On systems of ordinary differential equations with generalized homogenous right-hand sides[END_REF], [START_REF] Rosier | Homogeneous Lyapunov function for homogeneous continuous vector field[END_REF], [START_REF] Efimov | Oscillations conditions in homogenous systems[END_REF]. Analysis of delay influence on the system stability is vital in many cases [START_REF] Gu | Stability of Time-Delay Systems[END_REF], [START_REF] Fridman | Introduction to Time-Delay Systems: Analysis and Control[END_REF]. Despite of variety of applications, most of them deal with the linear time-delay models, which is originated by complexity of stability analysis for time-delay systems [START_REF] Fridman | Introduction to Time-Delay Systems: Analysis and Control[END_REF]. However, in some cases introduction of a delay may lead to an improvement of the system performance [START_REF] Fridman | Delay-induced stability of vector secondorder systems via simple Lyapunov functionals[END_REF], [START_REF] Fridman | Stabilization by using artificial delays: An LMI approach[END_REF]. The goal of this work is to develop the results obtained in [START_REF] Fridman | Delay-induced stability of vector secondorder systems via simple Lyapunov functionals[END_REF], [START_REF] Fridman | Stabilization by using artificial delays: An LMI approach[END_REF] for linear systems to a nonlinear homogeneous case restricting for brevity the attention to the case of the double integrator model. A design method is proposed, which uses position and its delayed values for practical output stabilization with hyper-exponential convergence rates. The outline of this work is as follows. The preliminary definitions and homogeneity concept for time-delay systems are given in Section II. The problem statement and the control design and stability analysis are presented in sections III and IV, respectively. An example is considered in Section V. II. PRELIMINARIES Consider an autonomous functional differential equation of retarded type with inputs [START_REF] Kolmanovsky | Stability of functional differential equations[END_REF]: ẋ(t) = f (x t , d(t)), t ≥ 0 (1) where x(t) ∈ R n and x t ∈ C [-τ, ; f : C [-τ,0] × R m → R n is a continuous function ensuring forward uniqueness and existence of the system solutions, f (0, 0) = 0. We assume that for the initial functional condition x 0 ∈ C [-τ,0] and d ∈ L m ∞ the system (1) admits a unique solution x(t, x 0 , d), which is defined on some time interval [-τ, T ) for T > 0. The upper right-hand Dini derivative of a locally Lipschitz continuous functional V : C [-τ,0] → R + along the system (1) solutions is defined as follows for any φ ∈ C [-τ,0] and d ∈ R m : D + V (φ, d) = lim h→0 + sup 1 h [V (φ h ) -V (φ)], where φ h ∈ C [-τ,0] for 0 < h < τ is given by φ h = φ(θ + h), θ ∈ [-τ, -h) φ(0) + f (φ, d)(θ + h), θ ∈ [-h, 0]. 
A continuous function σ : R + → R + belongs to class K if it is strictly increasing and σ(0) = 0; it belongs to class K ∞ if it is also radially unbounded. A continuous function β : R + × R + → R + belongs to class KL if β(•, r) ∈ K and β(r, •) is a strictly decreasing to zero for any fixed r ∈ R + . The symbol 1, m is used to denote a sequence of integers 1, ..., m. For a symmetric matrix P ∈ R n×n , the minimum and maximum eigenvalues are denoted as λ min (P ) and λ max (P ), respectively. A. ISS of time delay systems The input-to-state stability (ISS) property is an extension of conventional stability paradigm to the systems with external inputs [START_REF] Pepe | A Lyapunov-Krasovskii methodology for ISS and iISS of time-delay systems[END_REF], [START_REF] Teel | Connections between Razumikhin-type theorems and the ISS nonlinear small gain theorem[END_REF]. Definition 1. [START_REF] Pepe | A Lyapunov-Krasovskii methodology for ISS and iISS of time-delay systems[END_REF] The system (1) is called ISS, if for all x 0 ∈ C [-τ,0] and d ∈ L m ∞ the solutions are defined for all t ≥ 0 and there exist β ∈ KL and γ ∈ K such that |x(t, x 0 , d)| ≤ β( x 0 , t) + γ(||d|| ∞ ) ∀t ≥ 0. Definition 2. [20] A locally Lipschitz continuous functional V : C [-τ,0] → R + is called ISS Lyapunov-Krasovskii functional for the system (1) if there exist α 1 , α 2 ∈ K ∞ and α, χ ∈ K such that for all φ ∈ C [-τ,0] and d ∈ R m : α 1 (|φ(0)|) ≤ V (φ) ≤ α 2 ( φ ), V (φ) ≥ χ(|d|) =⇒ D + V (φ, d) ≤ -α(V (φ)). Theorem 1. [START_REF] Pepe | A Lyapunov-Krasovskii methodology for ISS and iISS of time-delay systems[END_REF] If there exists an ISS Lyapunov-Krasovskii functional for the system (1), then it is ISS with γ = α -1 1 • χ. B. Homogeneity For any r i > 0, i = 1, n and λ > 0, define the dilation matrix Λ r (λ) = diag{λ ri } n i=1 and the vector of weights r = [r 1 , ..., r n ] T . Definition 3. [START_REF] Efimov | Homogeneity for time-delay systems[END_REF] The function g : C [-τ,0] → R is called r-homogeneous (r i > 0, i = 1, n), if for any φ ∈ C [-τ,0] the relation g(Λ r (λ)φ) = λ ν g(φ) holds for some ν ∈ R and all λ > 0. The vector field f : C [-τ,0] → R n is called r- homogeneous (r i > 0, i = 1, n), if for any φ ∈ C [-τ,0] the relation f (Λ r (λ)φ) = λ ν Λ r (λ)f (φ) holds for some ν ≥ -min 1≤i≤n r i and all λ > 0. In both cases, the constant ν is called the degree of homogeneity. The introduced notion of homogeneity in C [-τ,0] is reduced to the standard one in R n [START_REF] Zubov | On systems of ordinary differential equations with generalized homogenous right-hand sides[END_REF] under a vector argument substitution. For any x ∈ R n the homogeneous norm can be defined as follows |x| r = n i=1 |x i | /ri 1/ , ≥ max 1≤i≤n r i . For all x ∈ R n , its Euclidean norm |x| is related with the homogeneous one: σ r (|x| r ) ≤ |x| ≤ σ r (|x| r ) for some σ r , σ r ∈ K ∞ . The homogeneous norm in the Banach space has the same homogeneity property that is ||Λ r (λ)φ|| r = λ||φ|| r for all φ ∈ C [a,b] . In C [-τ,0] , for a radius ρ > 0, denote the corresponding sphere S τ ρ = {φ ∈ C [-τ,0] : ||φ|| r = ρ} and the closed ball B τ ρ = {φ ∈ C [-τ,0] : ||φ|| r ≤ ρ}. An advantage of homogeneous systems described by nonlinear ordinary differential equations is that any of its solution can be obtained from another solution under the dilation re-scaling and a suitable time parameterization. A similar property holds for functional homogeneous systems. Proposition 1. 
[START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF] Let x(t, x 0 ) be a solution of the rhomogeneous system dx(t)/dt = f (x t ), t ≥ 0, x t ∈ C [-τ,0] (3) with the degree ν for an initial condition x 0 ∈ C [-τ,0] , τ ∈ (0, +∞). For any λ > 0 the functional differential equation dy(t)/dt = f (y t ), t ≥ 0, y t ∈ C [-λ -ν τ,0] (4) has a solution y(t, y 0 ) = Λ r (λ)x(λ ν t, x 0 ) with the initial condition y 0 ∈ C [-λ -ν τ,0] , y 0 (s) = Λ r (λ)x 0 (λ ν s) for s ∈ [-λ -ν τ, 0]. In [START_REF] Efimov | Development of homogeneity concept for time-delay systems[END_REF], using that result it has been shown that for (3) with ν = 0 the local asymptotic stability implies global one (for the ordinary differential equations even more stronger conclusion can be obtained: local attractiveness implies global asymptotic stability [START_REF] Bernuau | On homogeneity and its application in sliding mode[END_REF]). For time-delay systems with ν = 0 that result has the following correspondences: Lemma 1. [START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF] Let the system (3) be r-homogeneous with degree ν = 0 and globally asymptotically stable for some delay 0 < τ 0 < +∞, then it is globally asymptotically stable for any delay 0 < τ < +∞ (i.e. IOD). Corollary 1. [7] Let the system (3) be r-homogeneous with degree ν and asymptotically stable with the region of attraction B τ ρ for some 0 < ρ < +∞ for any value of delay 0 ≤ τ < +∞, then it is globally asymptotically stable IOD. Corollary 2. [7] Let the system (3) be r-homogeneous with degree ν < 0 and asymptotically stable with the region of attraction B τ ρ for some 0 < ρ < +∞ for any value of delay 0 ≤ τ ≤ τ 0 with 0 < τ 0 < +∞, then it is globally asymptotically stable IOD. Corollary 3. [7] Let the system (3) be r-homogeneous with degree ν > 0 and the set B τ ρ for some 0 < ρ < +∞ be uniformly globally asymptotically stable for any value of delay 0 ≤ τ ≤ τ 0 , 0 < τ 0 < +∞ 1 , then (3) is globally asymptotically stable (at the origin) IOD. III. PROBLEM STATEMENT Consider the double integrator system: ẋ1 (t) = x 2 (t), ẋ2 (t) = u(t), (5) y(t) = x 1 (t), where x 1 (t) ∈ R and x 2 (t) ∈ R are the position and velocity, respectively, u(t) ∈ R is the control input and y(t) ∈ R is the output available for measurements. The goal is to design a static output-feedback control practically stabilizing the system with a hyper-exponential convergence rate, i.e. with a convergence faster than any exponential. IV. MAIN RESULTS The solution considered in this paper is the delayed nonlinear controller u(t) = -(k 1 + k 2 ) y(t) α + k 2 y(t -h) α , (6) where y α = |y| α sign(y), k 1 > 0 and k 2 > 0 are tuning gains, α > 0, α = 1 is a tuning power and h > 0 is the delay (if α = 1 then the control ( 6) is linear and it has been studied in [START_REF] Fridman | Delay-induced stability of vector secondorder systems via simple Lyapunov functionals[END_REF], [START_REF] Fridman | Stabilization by using artificial delays: An LMI approach[END_REF]). The restrictions on selection of 1 In this case for any 0 ≤ τ ≤ τ 0 , any ε > 0 and κ ≥ 0 there is 0 ≤ T ε κ,τ < +∞ such that |x(t, x 0 )|r ≤ ρ + ε for all t ≥ T ε κ,τ for any x 0 ∈ B τ κ , and |x(t, x 0 )|r ≤ στ (||x 0 ||r) for all t ≥ 0 for some function στ ∈ K∞ for all x 0 ∈ C [-τ,0] . these parameters and the conditions to check are given in the following theorem. Theorem 2. 
For any k 1 > 0, k 2 > 0, h 0 > 0, if the system of linear matrix inequalities Q ≤ 0, P > 0, q > 0, (7) Q =    Q 11 k 2 Zb Zb k 2 b Z k 2 2 h 2 -4 e -h h 2 q qh 2 k 2 b Z qh 2 k 2 qh 2 -γ    , Q 11 = A P + P A + qh 2 A bb A + P, Z = P + qh 2 A , A = 0 1 -k 1 -k 2 h , b = 0 1 is feasible for some > 0, γ > 0 and any 0 < h ≤ h 0 , then for any 0 < η < +∞ there exists ∈ (0, 1) sufficiently small such that the system (5), ( 6) is a) globally asymptotically stable with respect to the set B 2h η for any α ∈ (1 -, 1); b) locally asymptotically stable at the origin from B 2h η for any α ∈ (1, 1 + ). All proofs are omitted due to space limitations. Note that for any α ≥ 0 the closed-loop system (5), ( 6) is rhomogeneous for r 1 = 1 and r 2 = α+1 2 with the degree ν = α-1 2 , then the result of Proposition 1 can be used for substantiation. The requirement that the matrix inequalities [START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF] have to be verified for any 0 < h ≤ h 0 may be restrictive for given gains k 1 and k 2 , then another local result can be obtained by relaxing this constraint. Corollary 4. For any k 1 > 0, k 2 > 0 and 0 < h 1 < h 0 , let the system of linear matrix inequalities (7) be verified for some > 0 and all h 1 ≤ h ≤ h 0 . Then for any 0 < ρ 1 < +∞ there exist ∈ (0, 1) sufficiently small and ρ 2 > ρ 1 such that the system (5), ( 6) is asymptotically stable with respect to the set B 2h ρ1 with the region of attraction B 2h ρ2 for any α ∈ (1 -, 1 + ). Remark 1. The result of Theorem 2 complements corollaries 2 and 3. Note that in all cases, for ν = 0, the global stability at the origin cannot be obtained in ( 5), (6) (due to homogeneity of the system, following the result of Lemma 1 the globality implies IOD result), while in the linear case with ν = 0 such a result is possible to derive for any 0 < h ≤ h 0 . Then it is necessary to justify a need in the control with ν = 0 comparing to the linear feedback with the same gains. An answer to this question is presented in the following result, and to this end denote for the system (5), (6): T (α, ρ 1 , ρ 2 , h) = arg sup t≥T -h sup x0∈S 2h ρ 2 |x(t, x 0 )| r ≤ ρ 1 as the time of convergence of all trajectories initiated on the sphere S 2h ρ2 to the set B 2h ρ1 provided that the delay h and the power α applied in the feedback. Proposition 2. For given k 1 > 0, k 2 > 0, h 0 > 0, let the system of linear matrix inequalities [START_REF] Efimov | Weighted homogeneity for time-delay systems: Finite-time and independent of delay stability[END_REF] be verified for some > 0 and any 0 < h ≤ h 0 . Then there exist ∈ (0, 1) sufficiently small and 0 < ρ 1 < ρ 2 < +∞ such that in the system (5), ( 6) In other words, the result above claims that for any fixed feedback gains k 1 and k 2 , if the conditions of Theorem 2 are satisfied, then the nonlinear closed-loop system ( 5), ( 6) with ν = 0 (α = 1) is always converging faster than its linear analog with ν = 0 (α = 1) between properly selected levels ρ 1 and ρ 2 (which values depend on smaller or high than 1 is α) for a delay h . T (α, ρ 1 , ρ 2 , h ) < T (1, ρ 1 , ρ 2 , h ) (8 The result of Proposition 2 provides a motivation for using nonlinear control in this setting: playing with degree of homogeneity of the closed-loop system it is possible to accelerate the obtained linear feedback by fixing the gains and delay values, but introducing an additional power tuning parameter. 
Note that another, conventional solution, which consists in increasing the gains k 1 and k 2 for acceleration, may be infeasible for the given delay value h 0 . Let us consider some results of application of the proposed control and an illustration of the obtained acceleration. The matrix inequalities (7) are satisfied for h 1 < h ≤ h 0 with h 1 = 5 × 10 -4 , and the results of verification are presented in Fig. 1. Thus, all conditions of Corollary 4 are verified. The errors of regulation obtained in simulation of the system (5), (6) with delay h 0 for different initial conditions, with α = 0.8 and α = 1.2, in comparison with the linear controller with α = 1, are shown in figures 2 and 3, respectively (the solid lines represent the trajectories of the system with α ≠ 1 and the dashed ones correspond to α = 1; since the plots are given in a logarithmic scale, the latter trajectories are close to straight lines). As we can conclude, in the nonlinear case the convergence is much faster than in the linear one: close to the origin for α ∈ (0, 1) and far outside for α > 1, which confirms the statement of Proposition 2. Note that the value of η (the radius of the set to which the trajectories converge for α < 1 or from which they converge to the origin for α > 1) is not restrictive. VI. CONCLUSIONS The paper addresses the problem of output stabilization of the double integrator using a nonlinear delayed feedback, obtaining hyper-exponential (faster than any exponential) rates of convergence. The control does not need an estimation of velocity, and the applicability of the approach can be checked by resolving linear matrix inequalities. The efficiency of the proposed approach is demonstrated in simulations and a comparison with a linear controller is carried out.
The homogeneous norm is an r-homogeneous function of degree one: |Λ r (λ)x| r = λ|x| r for all x ∈ R n . Similarly, for any φ ∈ C [a,b] , -∞ ≤ a < b ≤ +∞, the homogeneous norm can be defined as ||φ|| r = (Σ_{i=1}^{n} ||φ i ||^{ϱ/r i })^{1/ϱ} with ϱ ≥ max 1≤i≤n r i , and there exist two functions ρ r , ρ̄ r ∈ K ∞ such that ρ r (||φ|| r ) ≤ ||φ|| ≤ ρ̄ r (||φ|| r ) for all φ ∈ C [a,b] .
(8) holds for some 0 < h ≤ h 0 and any α ∈ (1-ε, 1) or α ∈ (1, 1+ε), provided that sup 0<h≤h0 T (α, 0.5, 1, h) ≤ T α for some T α ∈ R + and all α ∈ (1-ε, 1+ε).
3.73 × 10 -2 , q = 1.5 × 10 -11
Figure 1. The results of verification of (7) for different h.
Figure 2. Trajectories of the stabilized double integrator with α = 0.8.
Figure 3. Trajectories of the stabilized double integrator with α = 1.2.
W. Perruquetti and J.-P. Richard are at Inria, Non-A team, Parc Scientifique de la Haute Borne, 40 av. Halley, 59650 Villeneuve d'Ascq, France and CRIStAL (UMR-CNRS 9189), Ecole Centrale de Lille, BP 48, Cité Scientifique, 59651 Villeneuve-d'Ascq, France. E. Fridman is with School of Electrical Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel. D. Efimov is with Department of Control Systems and Informatics, Saint Petersburg State University of Information Technologies Mechanics and Optics (ITMO), 49 Kronverkskiy av., 197101 Saint Petersburg, Russia. This work was partially supported by the Government of Russian Federation (Grant 074-U01), the Ministry of Education and Science of Russian Federation (Project 14.Z50.31.0031) and by Israel Science Foundation (grant No 1128/14).
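As an illustration of how the feedback (6) acts on the double integrator (5), the following Python sketch integrates the closed loop with an explicit Euler scheme and a circular buffer for the delayed output. The gains, delay, power α and initial conditions are placeholders chosen for illustration, not the values used in the example above (which are not fully recoverable from the extracted text), and the sketch assumes a constant initial function y(s) = x1(0) on [-h, 0].

```python
import math

def power_sign(y, alpha):
    """⌈y⌋^α = |y|^α sign(y), the odd power used in the controller (6)."""
    return math.copysign(abs(y) ** alpha, y)

def simulate(k1, k2, alpha, h, x1_0, x2_0, t_end=30.0, dt=1e-3):
    """Euler integration of x1' = x2, x2' = u with
    u(t) = -(k1 + k2)⌈y(t)⌋^α + k2⌈y(t - h)⌋^α and y = x1."""
    n_delay = max(1, int(round(h / dt)))
    history = [x1_0] * n_delay            # y over the last h seconds (constant initial function)
    x1, x2 = x1_0, x2_0
    for k in range(int(t_end / dt)):
        y_delayed = history[k % n_delay]                      # y(t - h)
        u = -(k1 + k2) * power_sign(x1, alpha) + k2 * power_sign(y_delayed, alpha)
        history[k % n_delay] = x1                             # store y(t) for reuse at t + h
        x1, x2 = x1 + dt * x2, x2 + dt * u                    # explicit Euler step
    return x1, x2

if __name__ == "__main__":
    # Placeholder tuning: compare two homogeneity degrees with the linear case alpha = 1.
    for alpha in (0.8, 1.0, 1.2):
        x1, x2 = simulate(k1=1.0, k2=5.0, alpha=alpha, h=0.2, x1_0=1.0, x2_0=0.0)
        print(f"alpha = {alpha}: |x1(T)| = {abs(x1):.2e}, |x2(T)| = {abs(x2):.2e}")
```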
20,870
[ "20438", "838164", "739707", "17232" ]
[ "525219", "130965", "120930", "374570", "525219", "120930" ]
00175517
en
[ "sdv" ]
2024/03/05 22:32:10
2005
https://ens-lyon.hal.science/ensl-00175517/file/brodie_etal_manuscript.pdf
Edward-Benedict B Brodie Of Brodie Samuel Nicolay Marie Touchon Benjamin Audit Yves D'aubenton-Carafa Claude Thermes A Arneodo From DNA sequence analysis Keywords: numbers: 87.15.Cc, 87.16.Sr, 87.15.Aa come DNA replication is an essential genomic function responsible for the accurate transmission of genetic information through successive cell generations. According to the "replicon" paradigm derived from prokaryotes [1], this process starts with the binding of some "initiator" protein to a specific "replicator" DNA sequence called origin of replication (ori). The recruitement of additional factors initiates the bidirectional progression of two divergent replication forks along the chromosome. One strand is replicated continuously from the origin (leading strand), while the other strand is replicated in discrete steps towards the origin (lagging strand). In eukaryotic cells, this event is initiated at a number of ori and propagates until two converging forks collide at a terminus of replication (ter ) [2]. The initiation of different ori is coupled to the cell cycle but there is a definite flexibility in the usage of the ori at different developmental stages [3,4]. Also, it can be strongly influenced by the distance and timing of activation of neighbouring ori, by the transcriptional activity and by the local chromatin structure [3]. Actually, sequence requirements for an ori vary significantly between different eukaryotic organisms. In the unicellular eukaryote Saccharomyces cerevisiae, the ori spread over 100-150 bp and present some highly conserved motifs [2]. In the fission yeast Schizosaccharomyces pombe, there is no clear consensus sequence and the ori spread over at least 800 to 1000 bp [2]. In multi-cellular organisms, the ori are rather poorly defined and initiation may occur at multiple sites distributed over thousands of base pairs [5]. Actually, cell diversification may have led higher eukaryotes to develop various epigenetic controls over the ori selection rather than to conserve specific replicator sequences [6]. This might explain that only very few ori have been identified so far in multi-cellular eukaryotes, namely around 20 in metazoa and only about 10 in human [7]. The aim of the present work is to show that with an appropriate coding and an adequate methodology, one can challenge the issue of detecting putative ori directly from the genomic sequences. According to the second parity rule [8], under no-strand bias conditions, each genomic DNA strand should present equimolarities of A and T and of G and C. Deviations from intrastrand equimolarities have been extensively studied in prokaryotic, organelle and viral genomes for which they have been used to detect the ori [START_REF] Mrazek | Proc. Natl. Acad. Sci[END_REF]. Indeed the GC and TA skews abruptly switch sign at the ori and ter displaying step like profiles, such that the leading strand is generally richer in G than in C, and to a lesser extent in T than in A. During replication, mutational events can affect the leading and lagging strands differently, and an asymmetry can result if one strand incorporates more mutations of a particular type or if one strand is more efficiently repaired [START_REF] Mrazek | Proc. Natl. Acad. Sci[END_REF]. In eukaryotes, the existence of compositional biases has been debated and most attempts to detect the ori from strand compositional asymmetry have been inconclusive. 
In primates, a comparative study of the β-globin ori has failed to reveal the existence of a replication-coupled mutational bias [10]. Other studies have led to rather opposite results. The analysis of the yeast genome presents clear replication-coupled strand asymmetries in subtelomeric chromosomal regions [11]. A recent space-scale analysis [12] of the GC and TA skews in M bp long human contigs has revealed the existence of compositional strand asymmetries in intergenic regions, suggesting the existence of a replication bias. Here, we show that the (TA + GC) skew profiles of the 22 human autosomal chromosomes display a remarkable serrated "factory roof" like behavior that differs from the crenelated "castle rampart" like profiles resulting from the prokaryotic replicon model [START_REF] Mrazek | Proc. Natl. Acad. Sci[END_REF]. This observation will lead us to propose an alternative model of replication in higher eukaryotes. Sequences and gene annotation data were downloaded from the UCSC Genome Bioinformatics site and correspond to the assembly of July 2003 of the human genome. To exclude repetitive elements that might have been inserted recently and would not reflect long-term evolutionary patterns, we used the repeat-masked version of the genome, leading to a homogeneous reduction of ∼ 40-50% of the sequence length. All analyses were carried out using "knowngene" gene annotations. The TA and GC skews were calculated as S T A = (T -A)/(T + A) and S GC = (G -C)/(G + C). Here, we will mainly consider S = S T A + S GC , since by adding the two skews, the sharp transitions of interest are significantly amplified. In Fig. 1 are shown the skew S profiles of 3 fragments of chromosomes 8 and 20 that contain 3 experimentally identified ori. As commonly observed for eubacterial genomes [START_REF] Mrazek | Proc. Natl. Acad. Sci[END_REF], these 3 ori correspond to rather sharp (over several kbp) transitions from negative to positive S values that clearly emerge from the noisy background. The leading strand is relatively enriched in T over A and in G over C. The investigation of 6 other known human ori [7] confirms the above observation for at least 4 of them (the 2 exceptions, namely the Lamin B2 and β-globin ori, might well be inactive in germline cells or less frequently used than the adjacent ori). According to the gene environment, the amplitude of the jump can be more or less important and its position more or less localized (from a few kbp to a few tens of kbp). Indeed, it is known that transcription generates positive TA and GC skews on the coding strand [13,14], which explains that larger jumps are observed when the sense genes are on the leading strand and/or the antisense genes on the lagging strand, so that replication and transcription biases add to each other. Contrary to the characteristic step-like replicon profile observed for eubacteria [9], S is definitely not constant on each side of the ori location, making the detection of the ter quite elusive since no corresponding downward jumps of similar amplitude can be found in Fig. 1. In Fig. 2 are shown the S profiles of long fragments of chromosomes 9, 14 and 21, that are typical of a fair proportion of the S profiles observed for each chromosome.
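The skew definitions just given translate directly into a few lines of code. The Python sketch below computes S_TA, S_GC and S = S_TA + S_GC in adjacent 1 kbp windows, counting only upper-case (non-repeat-masked) bases; it is a minimal illustration of the quantities plotted in Figs 1 and 2, run here on a made-up toy sequence rather than on the actual UCSC assembly.

```python
import random

def skew_profile(seq, window=1000):
    """S_TA = (T-A)/(T+A), S_GC = (G-C)/(G+C) and S = S_TA + S_GC computed in
    adjacent, non-overlapping windows; lower-case (repeat-masked) and 'N'
    bases are simply not counted."""
    profile = []
    for start in range(0, len(seq) - window + 1, window):
        w = seq[start:start + window]
        a, t, g, c = (w.count(b) for b in "ATGC")
        s_ta = (t - a) / (t + a) if (t + a) else 0.0
        s_gc = (g - c) / (g + c) if (g + c) else 0.0
        profile.append((start, s_ta, s_gc, s_ta + s_gc))
    return profile

if __name__ == "__main__":
    random.seed(0)
    # Toy 10 kbp sequence biased towards T and G, mimicking a leading strand.
    toy = "".join(random.choices("ATGC", weights=[20, 30, 30, 20], k=10000))
    for start, s_ta, s_gc, s in skew_profile(toy):
        print(f"window at {start//1000:>3} kbp: S_TA={s_ta:+.3f}  S_GC={s_gc:+.3f}  S={s:+.3f}")
```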
Sharp upward jumps of amplitude (∆S ∼ 0.2) similar to the ones observed for the known ori in Fig. 1, seem to exist also at many other locations along the human chro- mosomes. But the most striking feature is the fact that in between two neighboring major upward jumps, not only the noisy S profile does not present any comparable downward sharp transition, but it displays a remarkable decreasing linear behavior. At chromosome scale, one thus gets jagged S profiles that have the aspects of "factory roofs" rather than "castle rampart" step like profiles as expected for the prokaryotic replicon model [START_REF] Mrazek | Proc. Natl. Acad. Sci[END_REF]. The S profiles in Fig. 2 look somehow disordered because of the extreme variability in the distance between two successive upward jumps, from spacings ∼ 50-100 kbp (∼ 100-200 kbp for the native sequences) up to 2-3 M bp (∼ 4-5 M bp for the native sequences) in agreement with recent experimental studies that have shown that mammalian replicons are heterogeneous in size with an average size ∼ 500 kbp, the largest ones being as large as a few M bp [15]. We report in Fig. 3 the results of a systematic detection of upward and downward jumps using the wavelet-transform (WT) based methodology described in Ref. [12(b)]. The selection criterium was to retain only the jumps corresponding to discontinuities in the S profile that can still be detected with the WT microscope up to the scale 200 kbp which is smaller than the typical replicon size and larger than the typical gene size. In this way, we reduce the contribution of jumps associated with transcription only and maintain a good sensitivity to replication induced jumps. A set of 5100 jumps was detected (with as generally expected an almost equal proportion of upward and downward jumps). In Fig. 3(a) are reported the histograms of the amplitude |∆S| of the so-identified upward (∆S > 0) and downward (∆S < 0) jumps respectively, for the repeat-masked sequences. These histograms do not superimpose, the former being significantly shifted to larger |∆S| values. When plotting N (|∆S| > ∆S * ) vs ∆S * in Fig. 3 (b), one can see that the number of large amplitude upward jumps overexceeds the number of large amplitude downward jumps. These results confirm that most of the sharp upward transitions in the S profiles in Figs 1 and2, have no sharp downward transition counterpart. This demonstrates that these jagged S profiles are likely to be representative of a general asymmetry in the skew profile behavior along the human chromosomes. As reported in a previous work [14], the analysis of a complete set of human genes revealed that most of them present TA and GC skews and that these biases are correlated to each other and are specific to gene sequences. One can thus wonder to which extent the transcription machinery can account for the jagged S profiles shown in Figs 1 and2. According to the estimates obtained in Ref. [14], the mean jump amplitudes observed at the transition between transcribed and non-transcribed regions are |∆S T A | ∼ 0.05 and |∆S GC | ∼ 0.03 respectively. The characteristic amplitude of a transcription induced transition |∆S| ∼ 0.08 is thus significantly smaller than the amplitude ∆S ∼ 0.20 of the main upward jumps in Fig. 2. Hence, it is possible that, at the transition between an antisense gene and a sense gene, the overall jump from negative to positive S values may reach sizes ∆S ∼ 0.16 that can be comparable to the ones of the upward jumps in Fig. 2. 
However, if some coorientation of the transcription and replication processes may account for some of the sharp upward transitions in the skew profiles, the systematic observation of "factory roof" skew scenery in intergenic regions as well as in transcribed regions, strongly suggests that this peculiar strand bias is likely to originate from the replication machinery. To further examine if intergenic regions present typical "factory roof" skew profiles, we report in Fig. 4 the results of the statistical analysis of 287 pairs of putative adjacent ori that actually correspond to 486 putative ori almost equally distributed among the 22 autosomal chromosomes. These putative ori were identified by (i) selecting pairs of successive jumps of amplitude ∆S ≥ 0.12, and (ii) checking that none of these upward jumps could be explained by an antisense gene -sense gene transition. In Fig. 4(a) is shown the S profile obtained after rescaling the putative ori spacing l to 1 prior to computing the average S values in windows of width 1/10 that contain more than 90% of intergenic sequences. This average profile is linear and crosses zero at the median position n/l = 1/2, with an overall upward jump ∆S ≃ 0.17. The corresponding average S profile over windows that are now more than 90% genic is shown in Fig. 4(b). A similar linear profile is obtained but with a jump of larger mean amplitude ∆S ≃ 0.28. This is a direct consequence of the gene content of the selected regions. As shown in Fig. 4(b), sense (antisense) genes are preferentially on the left (right) side of the 287 selected sequences, which implies that the replication and -when present -transcription biases tend to add up. In Fig. 4(c) is shown the histogram of the linear slope values of the 287 selected skew profiles after rescaling their length to 1. The histogram of mean absolute deviation from a linear decreasing profile reported in Fig. 4 (d), confirms the linearity of each selected skew profiles. Following these observations, we propose in Fig. 5 a rather crude model for replication that relies on the hypothesis that the ori are quite well positioned while the ter are randomly distributed. In other words, replication would proceed in a bi-directional manner from well defined initiation positions, whereas the termination would occur at different positions from cell cycle to cell cycle [16]. Then if one assumes that (i) the ori are identi- cally active and (ii) any location in between two adjacent ori has an equal probability of being a ter, the continuous superposition of step-like profiles like those in Fig. 5(a) leads to the anti-symmetric skew pattern shown in Fig. 5 (b), i.e. a linear decreasing S profile that crosses zero at middle distance from the two ori. This model is in good agreement with the overall properties of the skew profiles observed in the human genome and sustains the hypothesis that each detected upward jump corresponds to an ori. To summarize, we have proposed a simple model for replication in the human genome whose key features are (i) well positioned ori and (ii) a stochastic positioning of the ter. This model predicts jagged skew profiles as observed around most of the experimentally identified ori as well as along the 22 human autosomal chromosomes. Using this model as a guide, we have selected 287 domains delimited by pairs of successive upward jumps in the S profile and covering 24% of the genome. The 486 corresponding jumps are likely to mark 486 ori active in the germ line cells. 
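The model just proposed (two fixed, equally active adjacent ori with a termination site drawn at random in between) is easy to check numerically: averaging many single-cycle, step-like skew profiles over uniformly distributed ter positions should reproduce the linear, antisymmetric mean profile of Fig. 5(b), crossing zero at mid-distance. The sketch below assumes a unit skew amplitude on the leading strand, which is an arbitrary normalization chosen only for illustration.

```python
import random

def single_cycle(n_bins, ter_bin, amplitude=1.0):
    """Step-like skew between adjacent ori at n/l = 0 and 1 for one cycle:
    +amplitude left of the ter (leading strand of the left ori),
    -amplitude right of it (lagging strand of the right ori)."""
    return [amplitude if i < ter_bin else -amplitude for i in range(n_bins)]

def mean_skew_profile(n_bins=100, n_cycles=20000, seed=1):
    """Average over replication cycles with a uniformly random ter position."""
    random.seed(seed)
    acc = [0.0] * n_bins
    for _ in range(n_cycles):
        ter = random.randint(0, n_bins)            # ter anywhere between the two ori
        for i, v in enumerate(single_cycle(n_bins, ter)):
            acc[i] += v
    return [a / n_cycles for a in acc]

if __name__ == "__main__":
    prof = mean_skew_profile()
    # Expected: <S> decreases roughly linearly from +1 to -1 and crosses
    # zero near n/l = 1/2, as in Fig. 5(b).
    for i in (0, 24, 49, 74, 99):
        print(f"n/l = {(i + 0.5) / 100:.3f}   <S> = {prof[i]:+.3f}")
```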
As regards the rather large size of the selected sequences (∼ 2 M bp on the native sequence), these putative ori are likely to correspond to the large replicons that require most of the S-phase to be replicated [15]. Another possibility is that these ori might correspond to the so-called replication foci observed in interphase nuclei [15]. These stable structures persist throughout the cell cycle and subsequent cell generations, and likely represent a fundamental unit of chromatin organization. Although the prediction of 486 ori seems a significant achievement as regards the very small number of experimentally identified ori, one can reasonably hope to do much better relative to the large number (probably several tens of thousands) of ori. Actually, what makes the analysis quite difficult is the extreme variability of the ori spacing, from 100 kbp to several M bp, together with the necessity of disentangling the part of the strand asymmetry coming from replication from that induced by transcription, a task which is rather delicate in regions with high gene density. To overcome these difficulties, we plan to use the WT with the theoretical skew profile in Fig. 5(b) as an adapted analyzing wavelet. The identification of a few thousand putative ori in the human genome would be a very promising methodological step towards the study of replication in mammalian genomes. This work was supported by the Action Concertée Incitative IMPbio 2004, the Centre National de la Recherche Scientifique, the French Ministère de la Recherche and the Région Rhône-Alpes.
FIG. 1: S = S TA + S GC vs the position n in the repeat-masked sequences, in regions surrounding 3 known human ori (vertical bars): (a) MCM4 (native position 48.9 M bp in chr. 8 [7(b)]); (b) c-myc (nat. pos. 128.7 M bp in chr. 8 [7(a)]); (c) TOP1 (nat. pos. 40.3 M bp in chr. 20 [7(c)]). The values of S TA and S GC were calculated in adjacent 1 kbp windows. The dark (light) grey dots refer to "sense" ("antisense") genes with coding strand identical (opposed) to the sequence; black dots correspond to intergenes.
FIG. 2: S = S TA + S GC skew profiles in 9 M bp repeat-masked fragments in the human chromosomes 9 (a), 14 (b) and 21 (c). Qualitatively similar but less spectacular serrated S profiles are obtained with the native human sequences.
FIG. 3: Statistical analysis of the sharp jumps detected in the S profiles of the 22 human autosomal chromosomes by the WT microscope at scale a = 200 kbp for repeat-masked sequences [12(b)]. |∆S| = |S(3′) - S(5′)|, where the averages were computed over the two adjacent 20 kbp windows respectively in the 3' and 5' direction from the detected jump location. (a) Histograms N(|∆S|) of |∆S| values. (b) N(|∆S| > ∆S*) vs ∆S*. In (a) and (b), the solid (resp. thin) line corresponds to downward ∆S < 0 (resp. upward ∆S > 0) jumps.
FIG. 4: Statistical analysis of the skew profiles of the 287 pairs of ori selected as explained in the text. The ori spacing l was rescaled to 1 prior to computing the mean S values in windows of width 1/10, excluding from the analysis the first and last half intervals. (a) Mean S profile (•) over windows that are more than 90% intergenic. (b) Mean S profile (•) over windows that are more than 90% genic; the symbols ( ) (resp. ( )) correspond to the percentage of sense (antisense) genes located at that position among the 287 putative ori pairs. (c) Histogram of the slope s of the skew profiles after rescaling l to 1. (d) Histogram of the mean absolute deviation of the S profiles from a linear profile.
FIG. 5: A model for replication in the human genome. (a) Theoretical skew profiles obtained when assuming that two equally active adjacent ori are located at n/l = 0 and 1, where l is the ori spacing; the 3 profiles in thin, thick and normal lines correspond to different ter positions. (b) Theoretical mean S profile obtained by summing step-like profiles as in (a), under the assumption of a uniform random positioning of the ter in between the two ori.
18,150
[ "843005", "178489", "843006", "843007", "13923" ]
[ "13", "19271", "13", "19271", "433", "433", "433", "13", "19271" ]
01755248
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://pastel.hal.science/tel-01755248/file/65155_LEE_2018_archivage.pdf
Heejae Lee email: heejae.lee@polytechnique.edu Jean-Charles Vanel Abderrahim Yassar Gael Zucchi Chloe Dindault Minjin Kim Dr Warda Hadouchi Dr Anna Shirinskaya Dr Mark Chaigneau Dr Jongwoo Jin Dr Taewoo Jeon Dr Chang-Seok Lee Dr Chang-Hyun Kim Dr Heechul Woo Dr Sungyup Jung Seonyong Park Sanghuk Yoo Junha Park Hyeonseok Sim Gookbin Cho Analysis of Current-Voltage Hysteresis and Ageing Characteristics for CH 3 NH 3 PbI 3-x Cl x Based Perovskite Thin Film Solar Cells The three years have already passed, and I want to prepare another new start. It was a doctoral course that started with curiosity and worry, The synthesis of the halide perovskite materials is optimized in a first step by controlling the deposition conditions such as annealing temperature (80°C) and spinning rate (6000 rpm) in the one step-spin-casted process. CH 3 NH 3 PbI 3-x Cl x based perovskite solar cells are then fabricated in the inverted planar structure and characterized optically and electrically in a second step. Direct experimental evidence of the motion of the halide ions under an applied voltage has been observed using glow discharge optical emission spectroscopy (GDOES). Ionic diffusion length of 140 nm and ratio of mobile iodide ions of 65 % have been deduced. It is shown that the current-voltage hysteresis in the dark is strongly affected by the halide migration which causes a substantial screening of the applied electric field. Thus we have found a shift of voltage at zero current (< 0.25 V) and a leakage current (< 0.1 mA/cm 2 ) in the dark versus measurement condition. Through the current-voltage curves as a function of temperature we have identified the freezing temperature of the mobile iodides at 260K. Using the Nernst-Einstein equation we have deduced a value of 0.253 eV for the activation energy of the mobile ions. Finally, the ageing process of the solar cell has been investigated with optical and electrical measurements. We deduced that the ageing process appear at first at the perovskite grain surface and boundaries. The electrical characteristics are degraded through a deterioration of the silver top-electrode due to the diffusion of iodides toward the silver as shown by GDOES analysis. Abstract (in French) Les perovskites organiques-inorganiques en halogénures de plomb sont des matériaux très prometteurs pour la prochaine génération de cellules solaires avec des avantages intrinsèques tels que leur faible coût de fabrication (grande disponibilité des matériaux de base et leur mise en oeuvre à basse température) et leur bon rendement de conversion photovoltaïque. Cependant, les cellules solaires pérovskites sont encore instables et montrent des effets d'hystérésis courant-tension délétères. Dans cette thèse, des résultats de l'analyse physique de couches minces de pérovskite à base de CH 3 NH 3 PbI 3-x Cl x et de cellules solaires ont été présentés. Les caractéristiques de transport électrique et les processus de vieillissement ont été étudiés avec différentes approches. Dans une première étape, la synthèse du matériau pérovskite a été optimisée en contrôlant les conditions de dépôt des films en une seule étape telles que la vitesse de rotation (6000 rpm) de la tournette et la température de recuit des films (80 °C). Dans un second temps, des cellules solaires perovskites à base de CH 3 NH 3 PbI 3-x Cl x ont été fabriquées en utilisant la structure planaire inversée et caractérisées optiquement et électriquement. 
Grace à l'utilisation de la spectroscopie optique à décharge luminescente (GDOES), un déplacement des ions halogénures a été observé expérimentalement et de façon directe sous l'application d'une tension électrique. Une longueur de diffusion ionique de 140 nm et un rapport de 65% d'ions mobiles ont été déduits. Il est montré que l'hystérésis courant-tension dans l'obscurité est fortement affectée par la migration des ions halogénures provoquant un écrantage substantiel du champ électrique appliqué. Nous avons donc trouvé sous obscurité un décalage de la tension à courant nul jusque 0,25 V et un courant de fuite jusque 0,1 mA / cm 2 en fonction des conditions de mesure. Grâce aux courbes courant-tension en fonction de la température, nous avons déterminé la température de transition de la conductivité ions/électrons à 260K et analysé les résultats expérimentaux en utilisant l'équation de Nernst-Einstein donnant une énergie d'activation de 0.253 eV pour les ions mobiles. Enfin, le processus de vieillissement de la cellule solaire a été étudié avec des mesures optiques et électriques. Nous avons déduit que le processus de vieillissement apparaît d'abord à la surface des cristaux de pérovskite ainsi qu'aux joints de grains. Les mesures GDOES nous indiquent que les caractéristiques électriques des cellules pérovskites sont perdues par une corrosion progressive de l'électrode supérieure en argent causée par la diffusion des ions iodures. Chapter 1. Introduction Solar cells operation Photovoltaics The worldwide consumption of energy has increased every year by several percentage over the last thirty years 1 . Due to an overall growth of world population and a further development in area like Asia, Africa and Latin-America, it is believed that the growth will persist or even accelerate over the coming decades 2 . Most of this energy is nowadays supplied by fossil on the one hand and by nuclear energy on the other hand. However, these resources are limited and their use has a serious environmental impact, which extends probably over several future generations 3 . This situation poses an enormous challenge already for the present generation to start up a transition in energy consumption and production. More efficient usage of produced energy could possibly lead to a decreased consumption whereas new technologies could steer this transition toward a more sustainable energy production. Sustainable development meets the needs of today without jeopardizing the future. In this respect, renewable energy sources fit in very well. Besides their environmental friendliness, they offer several advantages 4 . Diversification of energy supplies can lead to more economical and political stability. Moreover, countries and regions can become more independent by supplying their own energy from renewable instead of having to import fuels or electricity from large production plants. Finally, a transition from traditional fuel based energy production to renewable energy resources can even lead to substantial increase of employment. Several renewable energy resources are under development or even already introduced on the market. Still, they make up only a limited part of the total energy production 5 . Among these renewable energy resources, the direct and indirect use of solar energy is believed to have much larger possible application than used nowadays. The total amount of solar irradiation per year on the earth's surface equals 10000 times the world's yearly energy need. 
This solar energy can on the one hand be applied passively as lighting resource and space heating in buildings. Besides this, active applications concern the heating of water or heat fluids through concentrator systems for domestic use or even in industrial processes. The Where is the angle from the vertical (solar zenith angle). When the sun is directly overhead, the air mass is 1. By passing through the earth's atmosphere, light scattering and absorption occur such that the spectrum and the intensity at the earth's surface have been altered. Combining these effects with the complication that besides a component of radiation directly from the sun also a significant part is scattered as indirect or diffuse light, other AMstandards are developed. A commonly used reference radiation distribution is the global spectrum. This global spectrum combines a direct AM1.5 part with a (= 100 mW/cm 2 ). It accords rather well to circumstances for Western European countries on a clear sunny day in summer. common to list the short-circuit current density (mA/ ) rather than the short-circuit current. Second condition is the number of photons. J SC from a solar cell is directly dependent on the light intensity. So the spectrum of the incident light is also important. Third condition is the optical properties with absorption and reflection of the solar cell. It can be controlled directly by the thickness of active layer. And last condition is the collection probability of the solar cell. It depends chiefly on the surface passivation and the minority carrier lifetime in the base. When comparing solar cells of the same material type, the most critical material parameter is the diffusion length and surface passivation. In a cell with perfectly passivated surface and uniform generation, the equation for the short-circuit current can be approximated as: = ( + ) (1.5) When G is the generation rate, and L n and L p and the electron and hole diffusion lengths respectively. Although this equation makes several assumptions which are not true for the conditions encountered in most solar cells, the above equation nevertheless indicates that the short-circuit current depends strongly on the generation rate and the diffusion lengths. Open-circuit voltage (V OC ) The open-circuit voltage, V OC , is the maximum voltage available from a solar cell, and The above equation shows that V OC depends on the saturation current of the solar cell and the light-generated current. While J SC typically has a small variation, the key effect is the saturation current, since this may vary by orders of magnitude. The saturation current, I 0 depends on recombination in the solar cell. Open-circuit voltage is then a measure of the amount of recombination in the device. The V OC can also be determined from the carrier concentration 11 : this = ( ∆ )∆ (1.7) Where kT/q is the thermal voltage, N A is the doping concentration, n is the excess carrier concentration and n i is the intrinsic carrier concentration. The determination of V OC from the carrier concentration is also termed implied V OC . Fill factor & Power conversion efficiency (PCE) The J SC and the V OC are the maximum current and voltage respectively from a solar cell. However, at both of these operating points, the power from the solar cell is zero. The fill factor (FF) is a parameter which, when considering the J-V curve in conjunction with V OC and J SC , determines the maximum power from a solar cell. 
The FF is defined as the ratio of the maximum power from the solar cell to the product of V OC and J SC . Graphically, the FF is a measure of the squareness of the J-V curve of the solar cell and corresponds to the area of the largest rectangle that can be inscribed in the J-V curve (Figure 1.3b).
FF (%) = P max / (V OC × J SC ) × 100 = (V MP × J MP ) / (V OC × J SC ) × 100 (1.8)
where V MP and J MP are the voltage and current density at the maximum power point. The PCE is the most commonly used parameter to compare the performance of one solar cell to another. Efficiency is defined as the ratio of the electrical power output from the solar cell to the incoming optical power from the sun. In addition to reflecting the performance of the solar cell itself, the efficiency depends on the spectrum and intensity of the incident sunlight and on the temperature of the solar cell. Therefore, the conditions under which the efficiency is measured must be carefully controlled in order to compare the performance of one device to another. The efficiency of a solar cell is determined as the fraction of incident power which is converted to electricity and is defined by the following equation:
PCE = (V OC × J SC × FF) / P in (1.9)
Taking the series resistance R S and the shunt resistance R SH of the equivalent circuit into account, the J-V characteristic becomes
J = J L - J 0 [exp(q(V + J R S )/(nkT)) - 1] - (V + J R S )/R SH (1.10)
In the ideal case, R S becomes zero and R SH becomes infinite. To understand the effect of the resistances on the solar cell properties, Figure 1.4 shows the equivalent circuit and an example of J-V characteristics with poor R S and R SH .
1.2 Introduction of Perovskite Solar Cells
While the organic-inorganic halides have been of interest since the early twentieth century 13 , the first report of perovskite-structured hybrid halide compounds was published by Weber in 1978 14,15 . He reported both CH 3 NH 3 PbX 3 (X = Cl, Br, I) and the CH 3 NH 3 SnBr 1-x I x alloy. In the subsequent decades, these materials were studied in the context of their unusual chemistry and physics [16][17][18] with the first solar cell appearing in 2009 19 . The notable achievements in the photovoltaic applications of hybrid perovskites have been the subject of many reviews. The basics of the perovskite crystal structure are introduced and the unique dynamic behaviors of the hybrid organic-inorganic materials are presented in this chapter, which underpin their performance in the photovoltaic devices that will be discussed in the later chapters. Walsh and co-workers are the leading group that verified the molecular motion and dynamic crystal structure of hybrid halide perovskites [20][21][22][23][24][25][26][27][28][START_REF] Leguy | The dynamics of methylammonium ions in hybrid organic-inorganic perovskite solar cells[END_REF] .
Perovskite
Alkali-metal lead and tin halides had already been synthesized in 1893 [START_REF] Wells | Uber die Caesium-und Kalium-Bleihalogenide[END_REF] , yet the first crystallographic studies that determined that cesium lead halides had a perovskite structure with the chemical formula CsPbX 3 (X = Cl, Br or I) were only carried out in 1957 by the Danish scientist Christian Møller [START_REF] Møller | Crystal structure and photoconductivity of caesium plumbohalides[END_REF] . He also observed that these coloured materials were photoconductive, thus suggesting that they behave as semiconductors. In 1978, Dieter Weber replaced cesium with methylammonium (MA) cations (CH 3 NH 3 + ) to generate the first three-dimensional organic-inorganic hybrid perovskites [START_REF] Weber | CH 3 NH 3 PbI 3 , a Pb(ǁ)-system with Cubic Perovskite Structure[END_REF][START_REF] Weber | CH 3 NH 3 PbI 3 , a Pb(ǁ)-system with Cubic Perovskite Structure[END_REF] . The general crystal structure of these materials is shown in Fig. 1.5.
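The figures of merit introduced above (J SC, V OC, FF and PCE) can be extracted numerically from any sampled J-V curve. The Python sketch below does so for a synthetic curve generated with the ideal single-diode expression (R S = 0, R SH → ∞); the diode parameters and the AM1.5-like input power of 100 mW/cm² are illustrative assumptions, not measured values from this work.

```python
import math

def diode_current(v, j_l=22.0, j_0=1e-9, n=1.5, temp=300.0):
    """Ideal single-diode J(V) in mA/cm^2 (illumination current j_l,
    saturation current j_0, ideality factor n), with R_S = 0 and R_SH -> inf."""
    vt = 8.617e-5 * temp                      # thermal voltage kT/q in volts
    return j_l - j_0 * (math.exp(v / (n * vt)) - 1.0)

def extract_figures_of_merit(p_in=100.0, v_max=1.2, n_points=2000):
    """J_SC, V_OC, FF and PCE from a sampled J-V curve (P_in in mW/cm^2)."""
    vs = [v_max * i / (n_points - 1) for i in range(n_points)]
    js = [diode_current(v) for v in vs]
    j_sc = js[0]                                              # current density at V = 0
    v_oc = next(v for v, j in zip(vs, js) if j <= 0.0)        # first voltage where J crosses zero
    p_max = max(v * j for v, j in zip(vs, js))                # maximum power point (mW/cm^2)
    ff = p_max / (v_oc * j_sc)                                # fill factor, cf. eq. (1.8)
    pce = p_max / p_in                                        # power conversion efficiency
    return j_sc, v_oc, ff, pce

if __name__ == "__main__":
    j_sc, v_oc, ff, pce = extract_figures_of_merit()
    print(f"J_SC = {j_sc:.1f} mA/cm^2, V_OC = {v_oc:.3f} V, FF = {ff:.2f}, PCE = {100 * pce:.1f} %")
```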
Correspondingly, molecules belonging to different planes are anti-aligned with a head-tail motif. Such an anti-ferroelectric alignment is expected from consideration of the molecular dipole-dipole interaction 21 . In the lowtemperature orthorhombic phase, the CH 3 NH 3 + sub-lattice is fully ordered (a low entropy state). The ordering may be sensitive to the material preparation and/or cooling rate into this phase, i.e. the degree of quasi-thermal equilibrium. It is possible that different ordering might be frozen into the low-temperature phase by mechanical strain or electric fields. At 165 K, MAPI goes through a first-order phase transition from the orthorhombic to the tetragonal space group, which continuously undergoes a second-order phase transition to the cubic phase by ca. 327 K 16,[START_REF] Weller | Complete structure and cation orientation in the perovskite photovoltaic methylammonium lead iodide between 100 and 352[END_REF] . As with the orthorhombic phase, this can be considered a 2× 2×2 expansion of the cubic perovskite unit cell. The molecular cations are no longer in a fixed position as in the orthorhombic phase. CH 3 NH 3 + is disordered between two non-equivalent positions in each cage. [START_REF] Wasylishen | Cation rotation in methylammonium lead halides[END_REF] With increasing temperature the tetragonal lattice parameters become more isotropic. The molecular disorder also increases to the point where a transition to a cubic phase occurs around 327 K. The transition can be seen clearly from changes in the heat capacity, 17 as well as in temperature-dependent neutron diffraction [START_REF] Weller | Complete structure and cation orientation in the perovskite photovoltaic methylammonium lead iodide between 100 and 352[END_REF] . Indeed, for the bromide and chloride analogues of MAPI, pair-distribution function analysis of X-ray scattering data indicates a local structure with significant distortion of the lead halide framework at room temperature. Hysteresis characteristics and device stability 1.2.3.1. Hysteresis The rapid rise in performance of the cells has been accomplished essentially by minor modifications in the device structure, morphology of the films and fabrication methods, etc. However, there are several fundamental issues despite the excellent device efficiencies. Many observations indicate that the perovskite films suffer from un-stabilized optoelectronic performance. They are hysteresis in current and voltage curves, wide distribution in performance, performance durability, difficulties in reproducing the results, etc. These issues require deeper scientific understanding and demand serious attention. Among all the above problems, hysteresis has been apparently considered as a major. It has been widely observed that perovskite solar cells show substantial discrepancy between the current density-voltage (J-V) curves measured on forward scan (from short circuit to open circuit) and backward scan (from open circuit to short circuit). J-V hysteresis can be found in dye-sensitized solar cells (DSSCs), organic thin film solar cells (OSCs), and Si solar cells, when the voltage scan is too fast [START_REF] Naoki | Methods of measuring energy conversion efficiency in dye-sensitized solar cells[END_REF] . This hysteresis is explained by the effect of capacitive charge, including space charges and trapped charges. When the scanning speed is faster than the release rate of traps, or faster than the space charge relaxation time, hysteresis is seen. 
In organic-inorganic perovskite solar cells, the hysteresis behavior is much slower but more complex and anomalous. This anomalous property of the perovskite solar cells creates confusion about the actual cell performance [START_REF] Editorial | Solar cell woes[END_REF][START_REF] Editorial | Bringing solar cell efficiencies into the light[END_REF][START_REF] Editorial | Perovskite fever[END_REF] . Hysteretic J-V curves imply that there is a clear difference in transient carrier collection at a given voltage during the forward and backward scans. It is generally observed that the backward scan yields a higher current than the forward scan, independent of the scan sequence, which confirms that carrier collection is always more efficient during the backward scan. In general, carrier collection (or current) in the device depends on carrier generation, separation, and on transport and recombination in the bulk of the layers and across the different interfaces of the device. As carrier generation and separation are considered fast processes and depend only on illumination (not on the voltage scan), any difference in initial collection must be influenced by transport and/or transfer at the interfaces.

Parameters affecting hysteresis

As carrier collection depends on the conductivity of the perovskite and of the other layers and on the connectivity at their interfaces, hysteresis is observed to be affected by many factors that can slightly change the characteristics of the layers in a perovskite device. Therefore, the diversity in device structures and fabrication methods, and even changes in measurement conditions, result in a wide variation in the trends of hysteresis. As a result, the problem of hysteresis becomes too complex to be understood completely.

Device structure and process parameters

Perovskite devices of different architectures using the same perovskite but different electron and hole collecting layers show different magnitudes of hysteresis [START_REF] Kim | Control of I-V Hysteresis in CH3NH3PbI3 Perovskite Solar Cell[END_REF] . For instance, the standard planar heterojunction architectures FTO / TiO 2 compact layer / perovskite / spiro-OMeTAD / Ag, FTO / PCBM / perovskite / spiro-OMeTAD / Ag and FTO / TiO 2 -PCBM / perovskite / spiro-OMeTAD / Ag (PCBM is [6,6]-phenyl-C61-butyric acid methyl ester) exhibit different degrees of hysteresis. In addition, temperature and light intensity can also alter the J-V hysteresis significantly. For a planar perovskite solar cell (FTO/TiO2 compact layer/perovskite/spiro-OMeTAD/Au), as reported by Ono et al. [START_REF] Ono | Temperature-dependent hysteresis effects in perovskite-based solar cells[END_REF] , the J-V hysteresis is large at low (250 K) and room temperature (300 K), whereas the behavior becomes feeble at higher temperature (360 K). Grätzel et al. [START_REF] Meloni | Ionic polarization-induced current-voltage hysteresis in CH3NH3PbX3 perovskite solar cells[END_REF] also observed that the hysteresis of iodide-based perovskite cells increases as the temperature decreases. Another noticeable fact is that the reverse scan shows only a minor dependence on temperature, while the forward J-V curve is strongly affected by temperature changes. Even in the case of hysteresis-free perovskite cells of inverted architecture, remarkably large hysteresis appears when the device is measured at low temperature [START_REF] Bryant | Observable hysteresis at low temperature in "Hysteresis Free" organicinorganic lead halide perovskite solar cells[END_REF] .
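The discrepancy between forward and backward scans described above is often condensed into a single number when comparing devices. The short Python sketch below is a minimal illustration of one common convention, the relative difference of the power-quadrant areas of the two scans, together with the photocurrent normalization discussed just below for curves measured at different light intensities; the function names and the exact definition are illustrative assumptions rather than the routine used for the measurements reported in this thesis.

import numpy as np

def hysteresis_index(v, j_forward, j_reverse):
    # Relative difference between the power-quadrant areas of the reverse and
    # forward J-V scans. v is the common voltage grid (V, increasing); current
    # densities are in mA/cm^2 with the photocurrent counted as positive.
    mask = (v >= 0) & (j_reverse > 0)
    dv = np.diff(v[mask])
    area_fwd = np.sum(0.5 * (j_forward[mask][1:] + j_forward[mask][:-1]) * dv)
    area_rev = np.sum(0.5 * (j_reverse[mask][1:] + j_reverse[mask][:-1]) * dv)
    return (area_rev - area_fwd) / area_rev

def normalize_by_photocurrent(v, j_scan, j_reverse):
    # Divide a scan by the reverse-scan J_SC so that curves taken at different
    # light intensities can be overlaid.
    j_sc = np.interp(0.0, v, j_reverse)
    return j_scan / j_sc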
Like in other solar cells, the photocurrent of perovskite cells increases linearly with light intensity. As the photocurrent increases, the gap between the forward and backward J-V curves increases proportionately. For a planar perovskite cell (FTO/TiO2 compact layer/CH 3 NH 3 PbI 3-x Cl x /spiro-OMeTAD/Au), as observed in our lab, the difference between the forward and backward performance increases with light intensity. However, when normalized by the photocurrent, the J-V hysteresis remains almost unchanged. For a TiO2 mesoscopic device (FTO/TiO2 dense layer/TiO2 mesoporous layer/CH 3 NH 3 PbI 3-x Cl x /spiro-OMeTAD/Au), as reported by Grätzel et al., the shape and magnitude of the J-V hysteresis remain independent of light intensity. Such independence of the hysteresis from light intensity rules out a direct involvement of the photo-generated carriers in hysteresis (Fig. 1.14).

Piezoresponse force microscopy (PFM) of CH 3 NH 3 PbI 3 at different poling (pre-biasing in the dark) conditions shows ferroelectric domains in CH 3 NH 3 PbI 3 [START_REF] Yasemin Kutes | Direct observation of ferroelectric domains in solution-processed CH3NH3PbI3 perovskite thin films[END_REF][START_REF] Kim | Ferroelectric polarization in CH3NH3PbI3 perovskite[END_REF][START_REF] Chen | Interface band structure engineering by ferroelectric polarization in perovskite solar cells[END_REF] . Such poling of the perovskite films alters the cell performance and the J-V hysteresis dramatically, which is attributed to polarization of the ferroelectric domains leading to a modification of the band structure at the interfaces [START_REF] Chen | Interface band structure engineering by ferroelectric polarization in perovskite solar cells[END_REF] . While some results evidence the ferroelectric property of perovskite and support its strong effect on hysteresis, other reports contradict the presumption that hysteresis is caused by ferroelectric polarization [START_REF] Snaith | Anomalous hysteresis in perovskite solar cells[END_REF] . Fan et al. [START_REF] Fan | Ferroelectricity of CH3NH3PbI3 perovskite[END_REF] reported that perovskites are not ferroelectric at room temperature, although their theoretical calculations predicted the material to be mildly ferroelectric. Therefore, whether the perovskite compound exhibits ferroelectric behavior under the operating conditions of the device remains a matter of debate. Furthermore, the effect in the actual device consisting of thin layers can differ from the ferroelectric property of the bulk perovskite. An explanation based on a transient effect caused by ferroelectric polarization must remain consistent when all the different interfaces are included. It is therefore reasonable that the interface properties play an equally important role in causing hysteresis.

Unbalanced carrier extraction

Although polarization by pre-biasing the perovskite was found to enhance hysteresis, Jena et al. [START_REF] Jena | The interface between FTO and the TiO2 compact layer can be one of the origins to hysteresis in planar heterojunction perovskite solar cells[END_REF] discovered that hysteresis is also exhibited by a cell made of non-ferroelectric PbI2 (Fig. 1.16). This result indicated that the ferroelectric property cannot be the only reason for hysteresis. Examination of the J-V characteristics of different interfaces in simplified structures such as FTO/TiO2 compact layer (CL)/Spiro-OMeTAD/Au and FTO/TiO2 CL disclosed that the interface between FTO and TiO2 can be one of the contributors to hysteresis.
Besides, it strongly suggested that the interface between perovskite and TiO2 could be a major player for hysteretic J-V curves. In devices made of perovskite, modification of the interface between perovskite and electron collecting layer apparently affects hysteresis. For instance, modification of TiO2 compact layer (CL) with C60 83 reduces hysteresis. Incorporation of Zr [START_REF] Nagaoka | Zr incorporation into TiO2 electrodes reduces hysteresis and improves performance in hybrid perovskite solar cells while increasing carrier lifetimes[END_REF] and Au nanoparticles [START_REF] Yuan | Hot-electron injection in a sandwiched TiOx-Au-TiOx structure for highperformance planar perovskite solar cells[END_REF] in the TiO2 compact layer also reduces hysteresis. In addition, no/negligible hysteresis observed in the perovskite-based cells using organic electron collecting layers [START_REF] Im | 18.1 % hysteresis-less inverted CH3NH3PbI3 planar perovskite hybrid solar cells[END_REF][START_REF] Tao | 17.6 % steady state efficiency in low temperature processed planar perovskite solar cells[END_REF] instead of TiO2 compact layer sandwiched between FTO and perovskite also supports the fact that interface properties can be crucial for hysteresis Carrier extraction depends strongly on physical and electrical contact at the interfaces. Therefore, conductivity of the layers and their morphology and the interface connectivity play a role in causing hysteresis. Gaps at the interfaces can act as capacitors due to carrier accumulation and thereby alter carrier extraction significantly [START_REF] Cojocaru | Origin of the hysteresis in I-V curves for planar structure perovskite solar cells rationalized with a surface boundary-induced capacitance model[END_REF] . Although ferroelectrics can be involved in the above-mentioned interfacial phenomena, it is not solely responsible for hysteresis. The carrier dynamics at the interface, which might be influenced either by ferroelectric polarization or ion migration, holds responsible for causing hysteresis. Ion migration In halide perovskites, smaller organic cations can diffuse faster than larger organic cations and the halide anions are considered to have significantly high mobility compared to the heavy metal cation (Pb2+). The calculated activation energy for migration of halide vacancies (iodide vacancy) is significantly lower than the organic cation vacancies [START_REF] Haruyama | First-principles study of ion diffusion in perovskite solar cell sensitizers[END_REF][START_REF] Eames | Ionic transport in hybrid lead iodide perovskite solar cells[END_REF] . Methyammmonium cation (MA+) and iodide anion (I-) are freely diffusible in MAPbI 3 at room temperature. Even, MA+I-is thermally unstable and it can be evaporated from the crystal structure at elevated temperature [START_REF] Alberti | Similar structural dynamics for the degradation of CH3NH3PbI3 in air and in vacuum[END_REF] . These ions in halide perovskite never undergo reaction with photo-generated carriers and contribute to self-organization electrostatically and geometrically in crystal structure formation. However, migration of the ions in perovskite is considered to cause carrier localization that gives rise to J-V hysteresis. More importantly, the process of ion motion can affect stability of perovskite devices through continuous change in chemical and physical properties of the perovskite under photovoltaic operation. 
Such ionic migration under a voltage applied to the device has been discussed in relation to the generation of hysteresis. Snaith et al., based on the photocurrent behavior of perovskite cells influenced by pre-biasing, proposed that under forward bias MAPbI 3 becomes polarized owing to the accumulation of positive and negative space charges near the electron and hole collector interfaces [START_REF] Zhang | Charge selective contacts, mobile ions and anomalous hysteresis in organic-inorganic perovskite solar cells[END_REF] . This charge accumulation is assumed to cause n-type and p-type doping at the interfaces (formation of a p-i-n structure), which temporarily enhances photocurrent generation. Accumulation of migrated ions at the interfaces can even switch the polarity of the device [START_REF] Zhao | Anomalously large interface charge in polarity-switchable photovoltaic devices: an indication of mobile ions in organic-inorganic halide perovskites[END_REF] (Fig. 1.17), and the dynamic change in the accumulation of these charges with the scanning voltage (the changing internal field) is assumed to generate hysteresis in the photocurrent of perovskite devices [START_REF] Zhang | Charge selective contacts, mobile ions and anomalous hysteresis in organic-inorganic perovskite solar cells[END_REF][START_REF] Zhao | Anomalously large interface charge in polarity-switchable photovoltaic devices: an indication of mobile ions in organic-inorganic halide perovskites[END_REF] . Figure 1.17. Device and mechanism of switchable polarity in perovskite photovoltaic devices [START_REF] Zhao | Anomalously large interface charge in polarity-switchable photovoltaic devices: an indication of mobile ions in organic-inorganic halide perovskites[END_REF] . Li et al. [START_REF] Li | Iodine migration and its effect on hysteresis in perovskite solar cells[END_REF] likewise reported iodine migration and its effect on hysteresis in perovskite solar cells.

Trap states

Among all possible causes of hysteresis, trap states are also considered to be major contributors. Even though recent reports strongly support the hypothesis that ion migration is responsible for hysteresis, most of the proposed models cannot rule out the involvement of trap states. While some results partially agree with the assumption that trap states cause hysteresis, other results certainly indicate their participation in the process. Trap states on the surface and at the grain boundaries of the perovskite have been proposed as an origin of hysteresis [START_REF] Shao | Origin and elimination of photocurrent hysteresis by fullerene passivation in CH3NH3PbI3 planar heterojunction solar cells[END_REF] . Fullerene deposition on the perovskite is believed to passivate these trap states and thereby eliminate the notorious hysteresis in the photocurrent (Fig. 1.18). While ion migration is currently much discussed in relation to J-V hysteresis, the lack of more direct evidence and of comprehensive models makes it hard to conclude that ion migration is the sole cause of hysteresis. In fact, according to a model proposed by Snaith et al., ion migration alone cannot explain hysteresis, but ion migration and interfacial trap states together can [START_REF] Van Reenen | Modeling anomalous hysteresis in perovskite solar cells[END_REF] . Reducing the density of either the mobile ions or the trap states can reduce hysteresis. However, a few facts conflict with the theory that trap states are responsible for hysteresis.
One of them is the observed low trap density in perovskite [START_REF] Shi | Low trap-state density and long carrier diffusion in organolead trihalide perovskite single crystals[END_REF][START_REF] Oga | Improved understanding of the electronic and energetic landscapes of perovskite solar cells: high local charge carrier mobility, reduced recombination, and extremely shallow traps[END_REF] . Relatively small trap density in perovskite is expected to result in slight/no hysteresis if trap states are the only players in hysteresis. In addition, the effective change in trap density with illumination intensity must be reflected as altered hysteresis in the perovskite cells. As trap states are better filled under light of higher intensity, hysteresis is expected to be less in the cases of higher intense illumination. On the contrary, it has been observed that hysteresis increases with light intensity, in proportion to the photocurrent. Therefore, the presumption that trapping-detrapping of existing trap sates in perovskite causes hysteresis is not completely convincing. Further investigation is needed to find more direct and distinct evidences for active participation of trap states in creating hysteresis. Also, mechanism of generation of the trap states and their density in perovskite need to be examined further to support the models based on trap states. Stability In order to check the performance stability, Jena et al. measured photocurrent density of the cells (not encapsulated) operated across the load (600 Ω) with cyclic on and off of light (1 sun). [START_REF] Jena | Steady state performance, photoinduced performance degradation and their relation to transient hysteresis in perovskite solar cells[END_REF] The time for each on and off cycle was fixed to 3 min and all the devices were measured for 10 cycles, enduring almost 1 h for the complete measurement. As can be seen in Fig. 1.20, hysteresis increased and performance stability decreased with thickness of the perovskite film in the planar perovskite solar cells, indicating that hysteresis may be directly related to such performance instability. *PEDOT = PEDOT :PSS Motivation First motivation of the thesis is the lack of theoretical understanding on J-V hysteresis for organic-inorganic hybrid perovskite films and devices. In spite of impressive progress in terms of device performance, current knowledge on many fundamental phenomena is still incomplete. In a historical point of view, this problem might be due to the urgent need to prove that perovskite device can compete with existing technologies in the photovoltaic field. Tremendous research efforts have thus mainly been put into the experimental works, which could finally meet such requirements. However, many theoretical questions remain unsolved. To answer these questions, various analysis technologies are used in the thesis. In conclusion, direct experimental evidences of special characteristics for organic-inorganic hybrid perovskite films are verified. These scientific evidences can provide the ability to interpret the experimental results and even predict them based on proper physical insights. Such process can form a positive feedback for experimental design and fabrication. Second motivation is the understanding the ageing mechanisms for overcoming the poor stability issue in the organic-inorganic hybrid perovskite solar cells. The organic devices, such as display (OLED; organic light emitting diode) and photovoltaics (OPV; organic photovoltaics) have been studied since 1990s. 
The ageing characteristic is the incorrigible issue for the organic materials and devices. For solving the stability issue, most academic groups still focus on the encapsulation technologies or verifying the ageing mechanisms. I already studied the innovative encapsulation technology for OPV device. [START_REF] Lee | Solution processed encapsulation for organic photovoltaic[END_REF] However, it is not the fundamental solution for solving stability issue. We need more radical understanding in this issue. In the thesis, diverse electrical and optical measurements are used for understanding the ageing mechanisms. This study makes a step forward in the quest of elucidating ageing phenomena usually observed in perovskite-based solar cells. Thesis overview This Thesis is titled 'Electrical transport characteristics and ageing characteristics of Perovskite thin film'. As the title implies, our foremost aims have been on development of fabrication techniques for perovskite solar cells and investigation of perovskite film characteristics for photovoltaic devices using various measurement systems. Here, a brief description of each chapter is given for overview. Chapter 1 Introduction provides short summaries on the most basic physical backgrounds for understanding perovskite thin film and perovskite solar cells devices. The origin of the semiconducting properties in organic-inorganic hybrid materials is briefly described. The parameters affecting hysteresis, and mechanism of origin to hysteresis are described in this chapter. We explain in Chapter 2 (Experimental Methods) the main techniques or methods that have been applied to verify electrical and optical characteristics of perovskite thin film and device. Seven main methods used for analyzing thin film are UV-Vis spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), ellipsometry, transmission line measurement (TLM), steady state photo-carrier grating (SSPG) and atomic force microscopy (AFM). The two main methods used for analyzing solar cells device are current densityvoltage (J-V) and glow discharge optical emission spectroscopy (GD-OES). As will be seen in the result chapters, all these methods have been complementarily used. An introduction and practical knowledge for each method is given here to understand the results obtained by using each method. In Chapter 3 (Development of Cells Performance by Studying Film Characteristics), we will discuss the development of cells performance with various thin film analyzing techniques. The analysis techniques have been studied for analyzing two different categories of characteristics of the perovskite thin film; electrical and optical characteristics. For verifying electrical characteristics, contact resistance and conductivity are measured by TLM system and diffusion length is estimated by SSPG measurement. With the optical characteristics, we can check the quality of crystallinity (by using XRD and SEM) and energy bandgap (by UV-Vis Spectroscopy). Furthermore, all thin film measurement systems are used for understanding the ageing characteristics of the perovskite thin films. After one and half year efforts, the PCE of perovskite solar cells has risen from about 6 % to 12.7 % with great reproducibility. In Chapter 4 (Ionic migration in CH 3 NH 3 PbI 3-x Cl x based perovskite solar cells), direct experimental evidences of ionic migrations are verified by using GD-OES systems. 
The applied voltage (from -2.5V to +2.5V) induced the perovskite thin film to generate the ionic migrations. Lead (Pb), Nitrogen (N), Iodine (I), and Chlorine (Cl) are the main atoms for studying ionic transport. We confirm that I ions in the perovskite thin film consist of mobile and fixed ions. In addition, the average length of ionic migration is estimated by the GD-OES system. In Chapter 5 (Hysteresis characteristics in CH 3 NH 3 PbI 3-x Cl x based perovskite solar cells), the J-V hysteresis is studied not only in light but also in dark conditions for understanding the original material characteristics without considering photo-generated (excess) carriers. The initial voltage, the scan rate and the temperature are the parameters to control the hysteresis. In conclusion, we confirm the J-V hysteresis even with inverted planar structure in dark condition. In addition, we found the ionic freezing temperature by studying J-V analysis at low temperature. Chapter 6 (Ageing Characteristics in CH 3 NH 3 PbI 3-x Cl x based Perovskite solar cells) shows that the results of device performance are presented with the film characteristics for understanding their operation mechanism. The J-V curve and GD-OES system are used for studying device performance. TLM, XRD, and UV-Vis spectroscopy are used for studying the film characteristics. Finally, not the variation at the bulk but the significant alternations at the interface between perovskite and PCBM is proposed. In Chapter 7 (Conclusion and Outlook), major results that are found and analyzed in the thesis are summarized with concluding remarks. Limitation encountered during the research and related suggestions for the future work will be also specified. Chapter 2. Experimental methods Introduction In this chapter, descriptions of fabrication details and analysis methods will be presented. The chapter will be separated in three parts; device fabrication, film characterization, and device characterization. In the first part, device fabrication details will be explained, from substrate/solution preparations to all layer deposition processes using different techniques. This part will also include a description of the different materials used in this study. In the second part, the characterization methods of perovskite film will be explained in two types, optical and electrical characteristics. The working principles of used equipment for characterizations are described in this part. Finally, the electrical characteristic measurements of perovskite solar cells and device analysis method will be discussed in the last part. Organic-inorganic hybrid perovskite solar cells device fabrication Substrate and solution preparation Substrate preparation is important to maintain reproducibility of the device performance. According to following preparation process, indium tin oxide (ITO) substrate is prepared. (Figure 2.1.(a)) The ITO coated glass substrates were purchased from Xinyan Technologies (inf. 20ohm/sq). The wet etching process is used to pattern the purchased ITO glass. The substrate was masked with 3M scotch tape and the ITO, uncovered with mask area was etched with acid solution of hydrochloric acid (HCl) and zinc powder (Zn). 1 The tape mask was removed after etching, and substrates were cleaned in sequential ultrasonic baths of detergent (Liqui-Nox Phosphate-Free Liquid Detergent, Alconox, Inc.) 1% diluted in deionized water, pure deionized water and 2-propanol (IPA) (30 min each). Nitrogen gas (N 2 ) was used to dry the substrates after each bath. 
majority of the ink is flung off the side. Airflow then dries the majority of the solvent leaving a plasticized film before the film fully dries to leave the useful molecules on the substrate. The rotation of the substrate at high speed means that the centripetal force combined with the surface tension of the solution pulls the liquid coating into an even covering. 5 During this time the solvent then evaporates to leave the desired material on the substrate in an even covering. In this study, the spin coating technique was used to deposit PEDOT:PSS (HTL) and PCBM (ETL) in air or N 2 condition. Dynamic dispense (Spin casting) In a dynamic dispense, the substrate is first started spinning and allowed to reach the desired spin speed before the solution is dispensed into the center of the substrate. The centripetal force then rapidly pulls the solution from the middle of the substrate across the entire area before it dries. In general, a dynamic dispense is preferred as it is a more controlled process that gives better substrate to substrate variation. 6 This is because the solvent has less time to evaporate before the start of spinning and the ramp speed and dispense time is less critical (so long as the substrate has been allowed time to reach the desired rpm). A dynamic dispense also used less ink in general although this does depend upon the wetting properties of the surface. The disadvantage of a dynamic dispense is that it becomes increasingly difficult to get complete substrate coverage when using either low spin speeds below 1000 rpm or very viscous solutions. 7 This is because there is insufficient centrifugal force to pull the liquid across the surface, and the lower rotation speed also means that there is increased chance that the ink will be dispensed before the substrate has completed a full rotation. As such, it is generally recommended using a static dispense at 500 rpm or below with either technique a possibility in a region between 500-1000 rpm. For the majority of spin coating above 1000 rpm, it is normally used a dynamic dispense as standard unless there are any special circumstances or difficulties. For more controlled process, the perovskite film was deposited by dynamic dispense process instead of spin coating process in this study. intensity beams by a half-mirrored device. One beam passes through a sample containing a target film onto the transparent substrate. The other beam, the reference, passes through an identical transparent substrate that used at sample. The intensities of these light beams are then measured by electronic detectors and compared. The intensity of the reference beam, which should have suffered little or no light absorption, is defined as I 0 . The UV region scanned is normally from 200 nm to 400 nm, and the visible portion is from 400 to 800 nm. UV-Vis spectroscopy is based on the principle of electronic transition in atoms or molecules upon absorbing suitable energy from an incident light that allows electron to excite from a lower energy state to higher excited energy state. While interaction with infrared light causes molecules to undergo vibrational transitions, the shorter wavelength with higher energy radiations in the UV and visible range of the electromagnetic spectrum causes many atoms/molecules to undergo electronic transitions. If the sample compound does not absorb light of a given wavelength, I=I 0 . 
However, if the sample compound absorbs light, then I is less than I 0 , and this difference may be plotted as the absorbance spectrum of the film. The photon energy associated with a given wavelength λ is E = hν = hc/λ, where h is the Planck constant, ν is the wave frequency and c the light speed in vacuum. Experimentally, the optical band gap E_opt of the thin film is estimated by linear extrapolation from the absorption feature edge to A = 0 and subsequent conversion of the intercept wavelength (nm) into an energy value (eV). In conclusion, E_opt can be determined from the absorbance spectra. 11 In this study, UV-Vis spectroscopy was used for calculating the E_opt value and for studying the ageing characteristics of the perovskite thin film.

Scanning electron microscopy (SEM)

Scanning electron microscopes (SEM) use a beam of highly energetic electrons to examine objects on a very fine scale. In a SEM, when an electron beam strikes a sample, a large number of signals are generated. This examination can yield information such as topography (the surface features of an object), composition (the elements and compounds that the object is composed of and their relative amounts) and crystallographic information (how the atoms are arranged in the object). 13 The combination of high magnification, large depth of focus, good resolution, and ease of observation makes the SEM one of the most widely used instruments. Figure 2.7 shows the schematic illustration of the SEM measurement system. Secondary electrons (SE), corresponding to the most intense emission due to electronic impact, are produced when an incident electron excites an electron in the sample and loses some of its energy in the process. 14 The excited electron moves towards the surface of the sample and, if it still has sufficient energy, it escapes from the surface and is called a secondary electron; secondary electrons are emitted with energies of less than 50 eV (non-conductive materials can be coated with a conductive material to increase the number of emitted secondary electrons). Alternatively, when the electron beam strikes the sample, some of the electrons are scattered (deflected from their original path) by atoms in the specimen in an elastic fashion (no loss of energy). These essentially elastically scattered primary electrons (high-energy electrons) that rebound from the sample surface are called backscattered electrons (BSE). The mean free path length of secondary electrons in many materials is around 10 Å. Thus, although electrons are generated throughout the region excited by the incident beam, only those electrons that originate less than about 10 Å deep in the sample escape to be detected as secondary electrons. This volume of production is very small compared with those associated with BSE and X-rays. Therefore, the resolution using SE is better than either of these and is effectively the same as the electron beam size. The shallow production depth of the detected secondary electrons makes them ideal for examining topography. The secondary electron yield depends on many factors and is generally higher for high atomic number targets and at higher angles of incidence. 15 BSE can be used to generate an image in the microscope that shows the different elements present in a sample. 16

Ellipsometry

In our study, we used ellipsometry analysis for investigating the crystallinity and ageing characteristics of the perovskite thin film.
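As a small illustration of the band-gap estimation just described (a linear fit of the absorption edge extrapolated to A = 0, followed by conversion of the intercept wavelength into eV), the following Python sketch shows one possible implementation; the function name and the choice of fitting window are assumptions made for illustration and do not correspond to the exact routine used in this work.

import numpy as np

def optical_band_gap_ev(wavelength_nm, absorbance, edge_window_nm):
    # Fit a straight line to the absorption edge inside edge_window_nm = (min, max),
    # extrapolate it to A = 0 and convert the intercept wavelength to energy in eV.
    lo, hi = edge_window_nm
    sel = (wavelength_nm >= lo) & (wavelength_nm <= hi)
    slope, intercept = np.polyfit(wavelength_nm[sel], absorbance[sel], 1)
    lambda_edge_nm = -intercept / slope          # wavelength where the fitted line reaches A = 0
    return 1239.84 / lambda_edge_nm              # E (eV) = h*c / lambda, with h*c = 1239.84 eV*nm

# Illustrative use: for a CH3NH3PbI3-like absorber whose edge lies near 780 nm,
# optical_band_gap_ev(wl, A, (740, 780)) should return a value close to 1.6 eV.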
Electrical characterization

Transmission line measurement (TLM)

Transmission line measurement (TLM) is a technique used in semiconductor physics and engineering to determine the contact resistance between a metal and a semiconductor. 20 The technique involves making a series of metal-semiconductor contacts separated by various distances (Figure 2. ). The total resistance measured between two contacts increases linearly with their spacing, so that the contact resistance and the sheet resistance of the semiconductor can be extracted from the intercept and the slope of the resistance-versus-spacing plot, respectively.

Atomic force microscopy (AFM)

AFM operation is usually described in terms of three modes, according to the nature of the tip motion: contact mode, also called static mode (as opposed to the other two modes, which are called dynamic modes); tapping mode, also called intermittent contact, AC mode, or vibrating mode or, after the detection mechanism, amplitude modulation AFM; and non-contact mode or, again after the detection mechanism, frequency modulation AFM. In this study, AFM measurement was used for identifying the surface roughness and the ageing mechanisms of the perovskite thin film.

Perovskite solar cells device characterization

Current density-voltage (J-V) characterization

The J-V characteristics of perovskite solar cells were measured in a N 2 glove box using a source meter (Keithley 2635) in the dark and under illumination. An AM 1.5 solar simulator SC575PV with 100 mW/cm 2 was used as the light source. The key parameters of the perovskite solar cells could be extracted using the methods mentioned in chapter 1.1.2. V_OC and J_SC were directly obtained from the X and Y intercepts of the J-V characteristics, respectively. The FF was calculated as follows:

FF = (V_max × J_max) / (V_OC × J_SC) (2.3)

where V_max and J_max refer to the values at which the photovoltaic device generates its maximum power. Finally, the PCE could be written as:

PCE = P_max / P_in = (V_OC × J_SC × FF) / P_in (2.4)

Furthermore, based on the equivalent circuit diagram model, the series resistance and shunt resistance could be calculated from the J-V characteristic: in the high forward voltage region (around V_OC) the inverse slope of the J-V curve gives an estimate of the series resistance, while the inverse slope around J_SC (low-voltage region) gives an estimate of the shunt resistance.

Glow discharge optical emission spectroscopy (GD-OES)

In 1968, Werner Grimm introduced a glow discharge tube as a light source for spectroscopic analyses investigating the chemical composition of metallic materials. 24,25 The so-called Grimm discharge tube is characterized by a special arrangement of the electrodes: the two electrodes of the DC current source consist of a cylindrical hollow anode and of the sample acting as the cathode, which seals the anode tightly (Figure 2.12). Since then, this technique and its applications have been continuously refined; an important development relevant for our work has been the introduction of pulsed RF sources. With pulsed RF, not only non-conductive specimens and layers can be measured, but also fragile and heat-sensitive materials. [26] Today, GD-OES is one of the most precise methods of elemental analysis and layer thickness measurement. The glow discharge source is generally filled with argon gas under low pressure (0.5 - 10 hPa). As shown in Figure 2.12 (which describes the simple historical configuration with a DC source), a high direct voltage (DC) is applied between the anode and the sample (cathode). Due to the DC voltage, electrons are released from the sample surface and accelerated towards the anode, gaining kinetic energy. Through inelastic collisions the electrons transfer their kinetic energy to argon atoms, which causes them to ionize into argon cations and further electrons. This avalanche effect triggers an increase in the charge carrier density, making the insulating argon gas conductive. The resulting mixture of neutral argon atoms and free charge carriers (argon cations and electrons) is called plasma.
The argon cations are accelerated towards the sample surface because there is a high negative potential. Striking the sample surface the argon cations knock out some sample atoms. This process is referred to as sputtering. The sample surface is ablated in a plane-parallel manner. The knocked out sample atoms diffuse into the plasma where they collide with highenergy electrons. During these collisions, energy is transferred to the sample atoms promoting them to excited energy states. Returning to the ground state, the atoms emit light with a characteristic wavelength spectrum. Passing through the entrance slit, the emitted light reaches a concave grating where it is dispersed into its spectral components. These components are registered by the detection system. The intensity of the lines is proportional to the concentration of the corresponding element in the plasma. In this study, GD-OES analysis was used for investigating the direct experimental evidences of ionic migration in perovskite thin film, which is one of the key factors of current voltage hysteresis and poor stability. evaporated PSCs have been studied since March 2016. The PCE has increased from 0% to 11.1% in 6 months with bad reproducibility. The devices show different performances even though the perovskite thin films are deposited at the same time. Finally, the 1-step spincoating process was chosen in this thesis for the investigation of the perovskite solar cells. Introduction Inorganic-organic hybrid perovskite solar cells have attracted great attention due to its solution-processing and high performance. The hybrid perovskite solar cell was initially discovered in a liquid dye sensitized solar cells (DSSCs). 1 Miyasaka and coworkers were the first to utilize the perovskite (CH 3 NH 3 PbI 3 and CH 3 NH 3 PbBr 3 ) nanocrystal as absorbers in DSSC structure, achieving an efficiency of 3.8 % in 2009. 1 Later, in 2011, Park et al. got 6.5 % by optimizing the processing. 6 However, these devices showed fast degradation due to the decomposition of perovskite by liquid electrolyte. In 2012, Park and Gratzel et al. reported a solid state perovskite solar cell using the solid hole transport layer (Spiro-OMeTAD) to improve stability. 7 After that, several milestones in device performance have achieved using DSSCs structure. 6-16 However, these mesoporous devices need a high temperature sintering that could increase the processing time and cost of cell production. It was found that Methylammonium based perovskites are characterized by large charge carrier diffusion lengths (around 100 nm for CH 3 NH 3 PbI 3 and around 1000 nm for CH 3 NH 3 PbI 3-x Cl x ). 17, 18 Further studies demonstrated that perovskites exhibit ambipolar behavior, indicating that the perovskite materials themselves can transport both electrons and holes between the cell terminals. 17 All of these results indicated that a simple planar structure was feasible. The first successful demonstration of the planar structure can be traced back to the perovskite/fullerene structure reported by Guo, 19 showing a 3.9 % efficiency. The breakthrough of the planar perovskite structure was obtained using a dual-source vapor deposition, providing dense and high-quality perovskite films that achieved 15.4 % efficiency. 20 Recently, the efficiency of the planar structure was pushed over 19 % through perovskite film morphology control and interface engineering. 12 These results showed that the planar structure could achieve similar device performance as the mesoporous structure. 
The planar structure can be divided into regular (n-i-p) and inverted (p-i-n) structures depending on which selective contact is used at the bottom. The regular n-i-p structure has been extensively studied and is based on dye-sensitized solar cells with the mesoporous layer removed, while the p-i-n structure is derived from the organic solar cell; usually, several charge transport layers used in organic solar cells were successfully transferred into perovskite solar cells. 11 The p-i-n inverted planar perovskite solar cells show the advantages of high efficiencies, lower-temperature processing, flexibility and, furthermore, negligible J-V hysteresis effects. Due to these various advantages, only the inverted planar structure of PSCs was used in our study. In this chapter, we will focus on the performance development of two kinds of PSCs (2-steps dipping and 1-step spin coating processed PSCs), including the optical and electrical characteristics of their perovskite thin films.

Solution processed perovskite solar cells (PSCs)

Dipping processed (2-steps) PSCs

Solution preparation for PSCs

In dipping processed (2-steps) PSCs, the perovskite solution and the [6,6]-phenyl-C61-butyric acid methyl ester (PC 60 BM) solution were prepared before the deposition processes.

Device fabrication

As explained in chapter 2, the substrates used in this study were prepared in two steps: a wet-etching patterning process and a gold deposition process (figure 3.2). We performed a UV ozone treatment on the substrates just before the deposition processes. The layers, excluding the electrodes, were deposited layer by layer in N 2 condition by solution processes. The PEDOT:PSS used as hole transport layer (HTL) was deposited by a spin-coating process. Firstly, we filtered AI 4083 PEDOT:PSS using a 0.45 µm PVDF filter. Then we dispensed 35 µl of the filtered PEDOT:PSS solution using a pipette onto the ITO substrate spinning at 6000 rpm (so-called dynamic dispense), with the total spin time set to 40 s. Methanol was used for patterning the water-based PEDOT:PSS. The perovskite deposited on top of the gold contact was wiped off for measuring the current-voltage (J-V) characteristics of the device. The substrate is then placed on a hotplate at 120 °C for 20 minutes. This process creates a PEDOT:PSS film with a thickness of 50 nm. The complete drying of the PEDOT:PSS layer is important for the perovskite layer because of its poor water stability. The perovskite layer used as the active layer was deposited by a 2-steps dipping process. First, we used the spin-coating process for depositing the PbI2 layer on top of the PEDOT:PSS layer. After dropping 60 µl using a pipette, the thickness of the PbI 2 layer was controlled by the revolutions per minute (rpm) condition. As for the PEDOT:PSS deposition, the perovskite layer was patterned with DMF solvent before the annealing process. Lastly, we annealed the PbI 2 layer at 70 ℃ for 30 minutes. For transforming the PbI 2 into the perovskite (CH 3 NH 3 PbI 3 ), the PbI 2 film was then dipped into a methylammonium iodide (MAI) solution, the second step of the 2-steps dipping process. The spin-coating process was used for depositing the PC 60 BM layer (ETL) on top of the perovskite thin film. We used a 0.45 µm PTFE filter for filtering the PCBM solution just before depositing. We controlled the rpm conditions to determine the ideal thickness of the PC60BM layer for performing the roles of hole blocking and electron transporting. This deposition technique is completed by an annealing process at 70 ℃ for 5 minutes. The aluminum (Al) was deposited as the electrode on top of the PC 60 BM layer by the thermal vacuum evaporation technique.
The target thickness was 100nm, and the electrode was patterned by special mask for keeping the active area size in 0.28cm 2 . There was no encapsulation process in this study due to that most of the device characteristics can be measured in the glove box (N 2 condition). Figure 3.2c is a photo image of the full PSCs device just before measuring the device performances. Characteristics of perovskite thin film As we talked in the previous chapter, the dipping technique used for depositing the perovskite layer is the sensitive process. Therefore, the analysis study of perovskite thin film was essential for forwarding the cells performances and the reproducibility. In this study, we used optical (such as UV-Vis spectroscopy, SEM, XRD), and electrical (such as TLM, SSPG, J-V characteristic) techniques for analyzing the perovskite thin film. In general, the thickness of the active layer is one of the critical factors in the photovoltaic device performance. The amount of light absorption and the electron-hole pair (EHP) are increased with thick active layer. However, we have to consider the carrier diffusion length for preventing the recombination effect. In this study, the thickness of the thin films was measured by the depth profiler system. The film has to be scratched before measuring this system. A sensitive tip scans the scratched area in contact with the surface during analysis process. The height difference between the scratched film and non-scratched film is defined as the thickness of the thin film. Figure 3.3 shows the result of the depth profiler system measurement, which shows the perovskite thickness difference depending on the IPA dip-cleaning process. This process reduces both the thickness of perovskite thin film from 300 nm to 270 nm and the surface roughness value. Considering the thin thickness of PCBM (~50 nm), which will be deposited on top of perovskite film, the roughness control technique is essential in fabricating the state of the art device. Considering the error range (~10 nm) of this measurement system, the variations by IPA dip-cleaning effect are significant and meaningful. The light absorbance and transmittance of the perovskite thin film is studied by using the UV-vis spectroscopy system. Considering the figure 3.21 and figure 3.22, we investigated that the fast rpm and high annealing temperature induced the large grain size of perovskite thin film. However, XRD result was only changed when the grain size was controlled by annealing temperature. The link between this XRD results and J-V hysteresis will be discussed in chapter 5. Solution deposition engineering of perovskite and electron transport layer (ETL) From now on, the J-V performances of 1-step spin-coated (spin-casted) devices will be discussed. After checking the reproducibility limit of the 2-steps dipping processed PSCs device, the 1-step spin-coating (spin-casting) technique was studied for depositing the perovskite thin film. Figure 3.23 and table 3.4 show the J-V performances of the PSCs device, that we first fabricated by using 1-step spin-coating technique for depositing the perovskite layer. (8nm). The extra ETL layer, such as C60 and BCP layers were deposited by using the thermal evaporation process, onto the PCBM layer to avoid the short circuit. The thermal evaporation technique is less sensitive and better depositing process than the spin-coating technique onto the poor roughness film. 
With only adding 10nm of C60 and 8nm of BCP layers, the whole J-V performances were increased significantly. Among them, there was remarkable increment in V OC with great increment in R SH . The short circuit generated by the pinholes caused the R SH drop due to their leakage current. Therefore, preventing the short circuit by depositing C60 and BCP layer induced the R SH increment. The R SH is strongly linked with the V OC as explained in chapter 2. In conclusion, the PCE was significantly jumped from 0.3% to 10%. There were no significant differences of J-V performance depending on the C60 thickness between 10nm to 60nm as below. Conclusion The optimized processes of the solution processed PSCs in two different types (2-steps dipping, and 1-step spin-casting techniques) were discussed by using various thin film analysis techniques, in this chapter. The electrical transport characteristics of the perovskite thin films were analyzed by using TLM and SSPG measurement systems. The optical characteristics of the perovskite thin film were studied by using SEM, UV-Vis spectroscopy, and XRD measurement techniques for understanding the J-V performance or optimizing the fabrication conditions. With the 2-steps dipping process it was difficult to achieve great reproducibility by hand due to their various critical experimental conditions such as the dipping speed, the dipping angle, and the pulling speed. Achieving the great device performance or the great reliability was unavailable with poor reproducibility. We switched the perovskite depositing technique from the 2-steps dipping process to the 1-step spin-coating (spin-casting) process for achieving the great reproducibility. The solution dropping moments (spin coating à spin casting) and the perovskite thin film patterning position were controlled for outstanding reproducibility of the device J-V performance. During the optimizing studying, we identified the seriousness of the DMF gas damage in the glove box. Consequently, the PCE of PSCs device reached around 10 % with settled reproducibility. The advanced researches were studied with this 1-step spin-casting processed PSCs for investigating the ionic migration, the J-V hysteresis, and the ageing characteristics. The detail results will be discussed in following chapters (chapter 4, 5, and 6) Chapter 4. Ionic migration in CH 3 NH 3 PbI 3-x Cl x based perovskite solar cells using GD-OES analysis Summary The ionic migration of perovskite thin film is reported as a key factor for explaining the current-voltage (J-V) hysteresis and ageing characteristics. This chapter shows directly the ionic migration of halogen components (I -and Cl -) of CH 3 NH 3 PbI 3-x Cl x perovskite film under an applied bias using glow discharge optical emission spectrometry (GD-OES). Furthermore, no migration of lead and nitrogen ions has been observed. The ratio of fixed to mobile iodide ions is deduced from the evolution of the GD-OES profile lines as a function of the applied bias. The average length of iodide and chloride ion migration has been deduced from the experimental results. Introduction Charged ions, as well as charge carriers, are mobile under the applied electrical field in hybrid perovskite solar cells (PSCs). Although the phenomenon of the ion migration in halide-based perovskite materials has been reported over the last 30 years, 1 the corresponding ion migration in perovskite did not draw considerable attention until the broad observation of current-voltage (J-V) hysteresis problem in PSCs. 
The J-V hysteresis behavior of PSCs was first reported by Snaith et al. 2 and Hoke et al. 3 with the mesoporous structure in 2013, and by Xiao et al. 4 with the planar heterojunction structure in 2014. Various mechanisms have been proposed to explain the origins of the J-V hysteresis, such as filaments, giant dielectric constant, imbalance between hole and electron mobility, trapping effect, ferroelectricity effect, and the ionic migration effect. [2][3][4][5][6] Among them, we intensively studied the ionic migration to explain the J-V hysteresis in PSCs devices. The possible mobile ions in the MAPbI 3 crystal include MA + ions, Pb 2+ ions, I - ions [7][8][9] , and other impurities such as hydrogen-related impurities (H + and H -) 10 . Considering the activation energy of ion migration and the distance to the nearest neighbors, it is entirely reasonable to expect that both the MA + ions and the I - ions are mobile in the MAPbI 3 films, while the Pb 2+ ions are difficult to move. 7,11-13 Furthermore, the I - ions are the most likely (majority) mobile ions in MAPbI 3 . However, whereas the migration of MA + ions has been firmly proved, 14 more direct experimental evidence is needed to find out whether I - ions are mobile; the I - ion migration under the operation or measurement conditions of perovskite devices at room temperature had not yet been revealed experimentally. Here, we elucidate the mobile ion migrations in CH 3 NH 3 PbI 3-X Cl X based cells by direct measurement using glow discharge optical emission spectrometry (GD-OES). In this study, we show experimentally the migration of ions in hybrid perovskite CH 3 NH 3 PbI 3-x Cl x based solar cells as a function of an applied bias using glow discharge optical emission spectrometry (GD-OES) (Figure 4.1a). Pulsed RF GD-OES is a destructive technique, so only one measurement per sample is possible. This is why it was crucially important to first set up a stable process giving the possibility to generate a large number of samples in order to statistically validate the obtained results. The size of the Ag top contact in our cells was slightly larger than the GD anode, therefore a direct analysis was possible. Figure 4.1a (right) shows the sample after GD measurement, with the crater visible in the Ag contact. GD-OES allows the direct determination of major and trace elements. [17][18][19] The ratio of fixed ions versus mobile ions is deduced by applying an electrical bias to the device. These results show directly that halogen ions (I - and Cl -) move through the device while lead and nitrogen ions are immobile. This migration of halogen ions influences the electrical characteristics of PSCs devices and may be responsible for the J-V hysteresis.

PSCs device characteristics

Table 4.1 shows a good reproducibility of the devices, and we can thus consider that all the samples used for the GD-OES experiments have the same characteristics. As shown in table 4.1, the power conversion efficiency (PCE) under 1 sun equivalent illumination is 12.6 % for the best cell (11.6 % on average) with an active area of 0.28 cm2. Figure 4.1b and table 4.1 represent the J-V characteristics of the best cell scanned in the forward and in the reverse directions. The hysteresis effect is small (less than 2.5 %) in our case. This is consistent with the results in the literature [20][21][22] reporting that the p-i-n architecture does not show hysteresis while the n-i-p architecture shows significant hysteresis. The GD-OES profile line of iodine shows symmetry when the perovskite film was annealed at 100 ℃.
There is a connection between the profile lines of I and Cl, which are the anions in the perovskite thin film. As reported in the literature, Cl is totally removed from the perovskite thin film at the end of the process. In our study, Cl is totally removed when the perovskite thin film is annealed at 100 ℃; however, Cl remains when the annealing temperature is 80 ℃. Due to the Cl gas evaporation process, Cl atoms start to disappear from the top surface (the interface between PCBM and perovskite). On this account, the profile line of Cl is not symmetric, and it has a decisive effect on the I profile line because both carry negative charges.

GD-OES result under applied bias

Iodide and chloride ion migration

The GD-OES profile lines versus sputtering time under different applied biases (from -2.5 V to +2.5 V) were compared with the profile recorded without applied voltage. A single iodide peak is observed without applied voltage; however, we observe the appearance of a second peak when a bias is applied to the device. The second peak is at 47 s of sputtering time under positive bias (+1.5 V), and at 37 s of sputtering time under negative bias (-1.5 V). We attribute these second peaks to iodide ionic migration due to the applied bias. These second peaks begin to shrink after removing the applied voltage, and disappear within 2 minutes when the device was under positive bias and within 3 minutes when the device was under negative bias. This signifies the reversibility (slow reaction) of the iodide ionic migration. In addition, we observed the same phenomena in the GD-OES profile lines of chloride ions in the perovskite film (blue solid lines in Fig. 4.15a and 4.15b, respectively). The initial peak is at 50 s of sputtering time before the voltage is applied. However, we observe a movement of the peak when a bias is applied to the device. The shifted peaks are at 54 s and 42 s of sputtering time under positive bias (+1.5 V) and negative bias (-1.5 V), respectively. We attribute this peak movement to chloride ionic migration due to the applied bias. These shifted peaks return to their initial position (sputtering time of 50 s) within 1 minute, which is shorter than for iodide. This also indicates the reversibility of the chloride ionic migration. These observations are consistent with the results in the literature.

Conclusion

In conclusion, this GD-OES study has provided direct experimental evidence of the ionic (I and Cl) migration in the CH 3 NH 3 PbI 3-x Cl x based perovskite films under applied bias. We show that lead and MA ions are not migrating under the applied bias on the 2 minutes time-scale (Figure 4.16). Considering the short voltage scanning time (a few tens of seconds) in a J-V measurement, the initial applied voltage is one of the critical conditions for halide ionic migration. The detailed discussion will be given in chapter 5. Based on GD-OES, this study gives a way of observing directly the ionic movements in hybrid perovskite films. It makes a step forward in the quest of elucidating electrical phenomena usually observed in perovskite based solar cells like J-V hysteresis, external electric field screening or interfacial effects with electrodes.

Chapter 5. Hysteresis characteristics in CH 3 NH 3 PbI 3-x Cl x based perovskite solar cells

Introduction

Hysteretic J-V curves imply a clear difference in transient carrier collection at a given voltage between the forward and backward scans. As carrier generation and separation are fast processes depending only on illumination (not on the voltage scan), any difference in initial collection must be influenced by the time of transport and/or the time of transfer at the interfaces. Carrier collection depends on the type of conductivity existing in the perovskite and in the other layers and on the connectivity at their interfaces. Therefore, the diversity in device structures and fabrication methods, together with changes in measurement conditions, results in a wide variation of the hysteretic behavior 4 .
As a result, the issue of hysteresis becomes too complex to be understood completely. [5][6][7] The reported parameters possibly affecting hysteresis are the device structure [8][9][10][11][12][13][14] , the process parameters [15][16][17] , and the measurement and prior-measurement conditions [18][19][20][21][22][23][24][25] . In recent years, a lot of effort has been made to understand the cause of J-V hysteresis in PSCs and different mechanisms have been proposed, yet only a few approaches have been successful in reducing or eliminating hysteresis in the devices. The anomalous hysteresis in the J-V characteristics of PSCs could be due to ferroelectric polarization 14,19,[25][26][27] , ion migration 13,14 , carrier dynamics at the different interfaces, or deep trap states in the perovskite layer 14 . Although, at present, there is no single universally accepted mechanism that can explain the phenomenon coherently, the studies done so far have certainly provided deeper insights into the topic. What makes the problem complex is that several factors, such as the device structure (planar or mesoporous), the perovskite film characteristics, the electron collecting layer properties, etc., can influence the J-V curves at the same time. The lack of complete understanding and the inadequacy of direct evidence demand further investigation. Under illumination, not only the original film characteristics but also the effect of photo-generated carriers have to be considered when analyzing the J-V curve. In this work, we therefore focused on the J-V hysteresis under dark conditions in order to take into account the influence of the original film characteristics only. We explain here how the halide ionic migration we reported in chapter 4 influences the electrical characteristics of PSC devices and may be responsible for the J-V hysteresis.

In conventional semiconductors like Si, Ge, GaAs, or CdTe, the electrical conductivity concerns the electron and hole populations. When describing a semiconductor in thermal equilibrium, the Fermi level (chemical potential) of each carrier type is everywhere constant throughout the entire crystal, even across a p-n junction. From this requirement, for which the electron and hole current densities cancel, a relation can be derived between the diffusion constant or diffusivity (D_n and D_p) and the mobility (μ_n and μ_p) of the electrons and the holes, respectively. These relations are called the Einstein relations:

D_n = (k_B T / q) μ_n and D_p = (k_B T / q) μ_p

Electron mobility and hole mobility depend upon temperature and dopant concentrations through lattice scattering and impurity scattering. In ionic crystals comprising the alkali halides and the metal oxides, the electrically charged particles are ions (cations and anions) and electrons. The ionic and electronic charge carriers are exposed to chemical and electrical potential gradients, which correspond to electrochemical potential gradients. The hybrid perovskite (for instance CH 3 NH 3 PbI 3 ) may be considered as an inhomogeneous mixture of a conventional semiconductor and an ionic crystal in which the ions are essentially associated with the migration of vacancies (see chapter 4). The mobility of electrons and holes is several orders of magnitude larger than that of the ions. When the perovskite is working under dark conditions, the intrinsic carrier concentration of electrons and holes defined by the band gap value (1.58 eV) is smaller than the concentration of ionic carrier defects (around 10 17 cm -3 ). This point will be detailed thereafter in section 5.3.3.
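To make these relations concrete, the short Python sketch below evaluates the Einstein relation for electronic carriers and its ionic counterpart, the Nernst-Einstein relation developed in the next paragraph. The numerical inputs are rough, order-of-magnitude assumptions chosen only for illustration; apart from the ionic defect concentration of about 10^17 cm^-3 quoted above, they are not measured parameters of our films.

K_B = 1.380649e-23        # Boltzmann constant (J/K)
Q_E = 1.602176634e-19     # elementary charge (C)

def einstein_diffusivity(mobility_cm2_per_vs, temperature_k=300.0):
    # Einstein relation for electrons/holes: D = (k_B*T/q) * mu, in cm^2/s.
    return (K_B * temperature_k / Q_E) * mobility_cm2_per_vs

def nernst_einstein_conductivity(diffusivity_cm2_s, concentration_cm3,
                                 charge_number=1, temperature_k=300.0):
    # Nernst-Einstein relation for an ionic species:
    # sigma_i = C_i * (Z_i*e)^2 * D_i / (k_B*T), returned in S/cm.
    return (concentration_cm3 * (charge_number * Q_E) ** 2
            * diffusivity_cm2_s / (K_B * temperature_k))

# Assumed illustrative numbers: an electronic mobility of ~1 cm^2/(V s) and an
# iodide-vacancy diffusivity of ~1e-12 cm^2/s at C ~ 1e17 cm^-3.
d_electron = einstein_diffusivity(1.0)                  # about 0.026 cm^2/s
sigma_ion = nernst_einstein_conductivity(1e-12, 1e17)   # a very small ionic conductivity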
Nevertheless, the hybrid perovskite may still be considered as an electronic conductor depending upon temperature. In thermal equilibrium, the electrochemical potential of each carrier type through the entire crystal of the perovskite must be kept constant even across a heterojunction. From this requirement, more complex relations can be derived between the diffusivity of the particles and the mobility of the particles. These are the Nernst-Einstein relations that intervenes in the migration of species in crystalline solids, when species are subjected to a force. Considering the electrical force ( = , where Z i is the atomic number, e is an electronic charge, and E is the electric field), we can define the electrical mobility as velocity per unit electric field: = = , where V i is the velocity, and k is the Boltzman constant. The electrical conductivity (S i ) is defined as charge flux per unit electric field with units (S/m). It can be expressed as = , whew C i is the concentration. For ionic species, we can apply Nernst-Einstein equation. = = thickness. In conclusion, the perovskite thickness variation doesn't influence its electrical transport characteristics. between 450 nm (Fig. 5. 5a and 5d), 390 nm (Fig. 5. 5b and 5e), and 350 nm (Fig. 5. 5c and 5f) by rpm conditions in the spin-casting process. The J-V performance is always higher when the applied voltage is scanned in reverse direction (+1 V à -1.5 V) than when the applied voltage is scanned from forward direction (-1.5 V à +1 V). As the perovskite thickness increased, the J-V hysteresis became stronger both in light and dark conditions. We can observe the increment of open circuit voltage (V OC ) and fill factor (FF), but no difference for the short circuit current density (J SC ) (Table 5.1). Reducing the thickness of the perovskite films because of short circuit current density reduction decreases VOC. Moreover, by reducing the perovskite thickness also decreases the VOC difference between forward and reverse bias. In dark condition, the J-V hysteresis tendency in a is more observable way than under illumination. It is because the effect of the ion migration is less important compared with the applied electric field in thinner perovskite layer for which the total number of point defects is decreased. As shown in Fig. 5.12c, V OC_dark is fixed to zero when the voltage scanning direction is forward (initial voltage = negative; -3, -2, -1.5, -1, and -0.5V). However, V 0_dark is increased to 0.16V when the initial voltage is positive (the voltage scanning direction = backward; 1, 1.5, 2, 2.5, and 3V). This shift of V 0_dark can be explained using our model discussed above in Fig. 5. We already experimentally evidenced the migration of halide ions (iodide and chloride ions) under an applied bias in this 'hysteresis free' p-i-n structures, using glow discharge optical emission spectrometry (GD-OES). When the bias is negative, we found as expected that the mobile iodide ions move toward the PCBM side, and that when the bias is positive, the mobile iodide ions are shifted toward the PEDOT:PSS side. The influence of iodide ions migration on energy bands of the perovskite thin film device is described in fig. 5.13. Considering the intrinsic characteristics of the perovskite layer, the increase and decrease of the iodide ions' concentration close to the interfaces can be viewed equivalent to N a -depleted region and N d + depleted region, respectively. 
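The mobility and conductivity expressions in the paragraph above were garbled during extraction. The Nernst-Einstein relation they describe links the electrical mobility B_i, the diffusivity D_i, the charge number Z_i, the concentration C_i and the temperature through B_i = Z_i e D_i / (kT) and S_i = C_i Z_i e B_i = C_i (Z_i e)^2 D_i / (kT), with the driving force F = Z_i e E. The short sketch below evaluates this conversion for an assumed monovalent defect; only the defect concentration (about 10^17 cm^-3) is taken from the text, while the diffusivity value is a placeholder order of magnitude.

from scipy import constants as c

def nernst_einstein_conductivity(D, C, T=300.0, Z=1):
    """Ionic conductivity (S/m) from the Nernst-Einstein relation.

    D : diffusivity in m^2/s
    C : defect (carrier) concentration in m^-3
    T : temperature in K
    Z : charge number of the migrating ion
    """
    return C * (Z * c.e) ** 2 * D / (c.k * T)

# Illustrative numbers only: concentration from the text (1e17 cm^-3),
# diffusivity is an assumed order of magnitude for halide vacancies.
C_i = 1e17 * 1e6          # cm^-3 converted to m^-3
D_i = 1e-16               # m^2/s, placeholder value
print(nernst_einstein_conductivity(D_i, C_i))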
The resulting band bending within the perovskite thin film (due to ionic migration) directly impact carrier injection as well as the leakage current. We suggest this model can explain the current -voltage (J-V) hysteresis observed under dark conditions. called leakage current density versus the initial applied voltage. We choose -1 V as a reference for the leakage current density. The leakage current density is almost fixed to 0.007 mA/cm 2 when the initial voltage is positive. A light increase of the leakage current is induced by the electron created on the PEDOT:PSS side. When the initial voltage is negative, the leakage current density is this time increased to 0.012 mA/cm 2 . The tilted band energy under negative initial applied voltage induces the increment of drifted leakage current potential as J-V hysteresis versus measuring temperature For concluding our study of electrical transport characteristics in hybrid perovskite thin film, we analyzed current-voltage (J-V) measurement at low temperature (135 ~ 370 K) by using photovoltaic (PV) device and transmission line measurement (TLM) device. Figure 5.14a and Figure 5.14b show a PV device and a TLM device onto the sample stage for measuring J-V at low temperature. The sample stage is located at the vacuum chamber. There are 4 mobile tips for measuring J-V at various positions keeping the vacuum condition (10 - 4 ~10 -2 Torr). All the J-V performances have been measured in dark condition due to studying electrical characteristics without considering photo-generated carriers. migration-dominated conduction is 264 K, which agrees well with the value that we speculated as the ionic freezing temperature measured by PV device (Figure 5.17) and with the following article 28 . Conclusion In this chapter, we studied the J-V hysteresis versus the structure variation and the J-V measurement conditions. The perovskite layer thickness and the type of cathode (Al or Ag) are considered as the parameters of the structure variation. The voltage scanning rate, the initial applied voltage, and the measuring temperature are considered as the parameters of measurement conditions for understanding the J-V hysteresis. Especially, the dark J-V curves study together with complimentary GD-OES measurements provides a deeper understanding of the relation between halide (iodide and chloride) migration and J-V performance in CH 3 NH 3 PbI 3-x Cl x based PSCs. We verified that the halide migration is slow (1 -3 minutes) and reversible. Even with a 'quasi-hysteresis free' structure (p-i-n), we were able to evidence a J-V hysteresis under dark conditions, versus initial applied voltage, voltage scanning direction, and measuring temperature. The effect of halide migration on the J-V performance is more visible with the absence of photo-generated carriers. It is due to that the ion migration-related phenomena, including photocurrent hysteresis, such as switchable photovoltaics, photo-induced poling effect, light induced phase separation, giant dielectric constant, and self-healing, will be unlikely to occur with excess carriers. The V 0_dark value shifts only under reverse scanning direction due to the electron barrier created by the halide migrations at the interfaces. The leakage current density under forward scanning direction is always higher than that under backward scanning direction. The maximum leakage current density, is obtained at 343 K, which is consistent with the phase transition temperature. 
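The low-temperature J-V study above extracts activation energies from conduction data that follow Arrhenius behaviour with two regimes, electronic at low temperature and ion-migration-dominated above roughly 264 K. A hedged sketch of how the two activation energies and the crossover temperature could be obtained from such data is shown below; the breakpoint search and variable names are illustrative, not the exact fitting procedure used in this work, and the input is assumed to be a conductivity (or conductance proxy from the TLM or PV device) measured at a set of temperatures.

import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def fit_segment(inv_T, ln_sigma):
    """Linear fit of ln(sigma) versus 1/T; returns (activation energy in eV, SSE)."""
    slope, intercept = np.polyfit(inv_T, ln_sigma, 1)
    resid = ln_sigma - (slope * inv_T + intercept)
    return -slope * K_B_EV, float(np.sum(resid ** 2))

def two_regime_arrhenius(T, sigma):
    """Split an Arrhenius plot into two regimes at the breakpoint that minimises
    the total residual; T (in K) must be sorted in ascending order."""
    T = np.asarray(T, dtype=float)
    inv_T, ln_s = 1.0 / T, np.log(np.asarray(sigma, dtype=float))
    best = None
    for i in range(3, len(T) - 3):                      # candidate breakpoints
        Ea_low_T, sse1 = fit_segment(inv_T[:i], ln_s[:i])
        Ea_high_T, sse2 = fit_segment(inv_T[i:], ln_s[i:])
        if best is None or sse1 + sse2 < best[0]:
            best = (sse1 + sse2, Ea_low_T, Ea_high_T, T[i])
    return best[1], best[2], best[3]   # Ea below crossover, Ea above it, crossover T

With the values quoted just below, one would expect roughly 0.11 eV in the low-temperature (electronic) regime, 0.25 eV in the high-temperature (ionic) regime, and a crossover near 263-264 K.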
The 263 K, the transition temperature for electronic-to ion migrationdominated conduction, is the J-V hysteresis generating point in dark condition. The activation energy for ionic conduction is 0.253 eV and for electronic conduction is 0.112 eV. On the basis of GD-OES, this study gives a framework for observing directly ionic movement in hybrid perovskite films. And the dark J-V curve study provides a step forward in the quest of elucidating a link between halide migration and J-V hysteresis performance. This behaviour has major repercussions for understanding PSCs device performance and for the design of future architectures. of the other layers of the solar cell stack. For instance, the organic hole transporting material (HTM) is unstable when in contact with water. This can be partially limited by proper device encapsulation [12][13][14] using buffer layers between perovskite and HTM 15 or moisture-blocking HTM 16 such as NiO X delivering in this case, up to 1,000h stability at room temperature 17 . However, this approach increases the device complexity, and the cost of materials and processing. It is also worth to mention that most of the device stability measurements reported in literature are often done under arbitrary conditions far from the required standards 18 such as not performed under continuous light illumination 17 , measured at an undefined temperature, or leaving the device under uncontrolled light and humidity conditions 19 . This makes a proper comparison among the different strategies used challenging. Thin film analysis of CH 3 NH 3 PbI 3-x Cl x based perovskite In this study, optical, electrical thin film analyzing techniques and surface analyzing techniques are used for verifying the ageing mechanisms of the CH 3 NH 3 PbI 3-x Cl x based perovskite thin film. UV-Vis spectroscopy and XRD are used for investigating the energy band gap and crystallinity variation during ageing progress, respectively. The TLM is used for verifying the variation of contact resistance (R C ) and sheet resistance (R Sheet ). Finally, AFM is used for analyzing the surface variation versus ageing effects. indicates that the MAI is removed as gas and the PbI 2 remains in the perovskite thin film. Optical thin film analysis observed at the surface is considered as the proof of ageing effect. This indicates that the significant variation occurred at the surface of the perovskite film. Figure 6.7 shows the SEM image of the fresh perovskite film. We can observe that the surface exhibits a dense-grained uniform morphology with grain sizes in the range 100-700 nm. The entire film is composed of a homogeneous, well-crystallized perovskite layer. Among AFM topographies (figure 6.6a, 6.6c, and 6.6e), the observed grain size become similar to that of fresh sample measured by SEM by ageing process. The grain size observed in figure 6.6 e is around 500 -700 nm. It means that the grain bulk is well fixed during ageing process. In summary, we concluded that the interface variation is more critical than the bulk variation in the perovskite thin film during the first 1 week of ageing. The following three points can be drawn. First, we observed that the absorbance variation is significant only at low wavelength part (between 440 nm and 520 nm) in UV-Vis spectroscopy results. Second, we observed that the contact resistance variation is more significant than the sheet resistance variation in TLM results. Third, the RMS values in AFM results show remarkable differences in first week of perovskite ageing. 
the reason why we observe more dynamic variation of J SC and R S than the variation of V OC and R Sh during ageing process. Of course, there are various process and reasons for poor stability of the PSCs. However, the iodide ionic diffusion must be one of them. Specially, on the basis of GD-OES, this study gives a framework of observing directly the iodide ionic diffusion in hybrid perovskite film. And this study provides a step forward in the quest of elucidating a link between the interface variation and J-V ageing performance. This behavior will be the great guideline for understanding PSCs device ageing performance and for the design of future stable PSCs device. Chapter7. Conclusion and Outlook In this thesis, results on physics-based thin film analyses of CH 3 NH 3 PbI 3-x Clx based perovskite thin film and solar cells have been presented. The current-voltage (J-V) hysteresis and the cause for the ageing process have been investigated with multiple approaches. Here, the major findings and related conclusions are summarized and some ideas on further works are suggested. At first, we optimized the fabricating processes of the perovskite solar cells. The perovskite thin film deposition is sensitive process and critical techniques of cells performances. Among various materials and depositing techniques, we decided to study with Finally, the power conversion efficiency was reached at 12.7 %, indicating the state of the art device considering the structure and active area (0.28 cm 2 ). Our optimized deposition technique is very simple, so even those who are new to this method can produce the perovskite solar cells with efficiencies of more than 9 %. At second, the GDOES analysis technique was first tried for getting the direct experimental evidence of the ionic (I and Cl) migration in the CH 3 NH 3 PbI 3-X Cl X based perovskite films under applied bias. We verified that lead and MA ions are not migrating under the applied bias. We found that the ratio of fixed to mobile iodine saturates at 35 % and the average length of iodine migration is around 120 nm. In addition, we observed that the halide ionic migration is reversible and slow reaction both in positive and in negative applied bias.It takes 1 min (chloride ions) and 3 min (iodide ions) to come back from migrated position after stopping the negative voltage applied (-1.5 V). On the other hand, it takes 1 min (chloride ions) and 2 min (iodide ions) after applied positive voltage (+1.5 V). Based on GDOES, this study gives a way for observing directly ionic movements in hybrid perovskite films. It makes a step forward in the quest of elucidating electrical phenomena usually observed in perovskite based solar cells like J-V hysteresis, external electric field screening or interfacial effects with electrode. At third, the J-V hysteresis, which is a special characteristic of the perovskite solar cells, has been studied in this work versus the structure variation and the J-V measurement conditions. The perovskite layer thickness and the type of cathode (Al or Ag) are considered as the parameters of the structure variation. The voltage scanning rate, the initial applied voltage, and the measuring temperature are considered as the parameters of measurement conditions. Especially, the dark J-V curves study together with complimentary GDOES measurements provides a deeper understanding of the relation between halide (iodide and chloride) migration and J-V performance. 
The effect of halide migration on the J-V performance is more visible with the absence of photo-generated carrier. The V 0_dark value shifts only under reverse scanning direction due to the electron barrier created by the halide migrations. The leakage current density under forward scanning direction is always higher than that under backward scanning direction. In addition, we studied the J-V performance at low temperature with PSCs and TLM devices for checking the J-V hysteresis versus the measuring temperature. The maximum leakage current density, is obtained at 343 K, which is consistent with the phase transition temperature. The 263 K, the transition temperature for electronic-to ion migration-dominated conduction, is the J-V hysteresis generating point in dark condition. The activation energy for ionic conduction is 0.253 eV and for electronic conduction is 0.112 eV. On the basis of the dark J-V analysis, this study provides a step forward in the quest of elucidating a link between halide migration and J-V hysteresis performance. This behavior has major repercussions for understanding PSCs device performance and for the design of future architectures. In addition, studying the effect of electron or hole affinity difference at the both interface between HTL-perovskite layer and between ETL-perovskite layer on the J-V hysteresis can be an informative topic for the future work. Finally, we studied the ageing process of the CH 3 NH 3 PbI 3-X Cl X based perovskite thin film by studying optical (XRD, UV-Vis spectroscopy, and AFM) and electrical (TLM, J-V performance) thin film analysis techniques. And we could speculate that the perovskite interface variation is more critical than the variation in bulk. The GD-OES analysis gave the direct experimental evidence of iodide ionic diffusion toward silver electrode during ageing process. Finally, we understood the reason why we observe more dynamic variation of J SC and R S than the variation of V OC and R Sh during ageing process. Of course, there are various process and reasons for poor stability of the PSCs. The iodide ionic diffusion, which we Chapter 3 .Chapter 5 .Chapter 7 . 357 Development of cells performance by studying film characteristics ................................................................................................... Summary .............................................................................................................................. 3.1. Introduction ................................................................................................................... 3.2. Solution processed perovskite solar cells (PSCs) ......................................................... 3.2.1. Dipping processed (2-steps) PSCs .......................................................................... 3.2.2. Spin-casting processed (1-Step) PSCs .................................................................... 3.3. Conclusion ..................................................................................................................... Reference .............................................................................................................................. Chapter 4. Ionic migration in CH 3 NH 3 PbI 3-x Cl x based perovskite solar cells using GD-OES analysis ................................................................................... Summary .............................................................................................................................. 4.1. 
Introduction ................................................................................................................... 4.2. PSCs device characteristics ........................................................................................... 4.3. GDOES optimization process for perovskite thin film ............................................... 4.4. GD-OES analysis without applied bias ....................................................................... 4.5. GD-OES result under applied bias .............................................................................. 4.5.1. Iodide and chloride ion migration ........................................................................ 4.5.2. Lead and MA ion migration ................................................................................. 4.6. Conclusion ................................................................................................................... Reference ............................................................................................................................ Hysteresis characteristics in CH 3 NH 3 PbI 3-x Cl x based inverted solar cells ......................................................................................................... Summary ............................................................................................................................ 5.1. Introduction ................................................................................................................. 5.2. J-V Hysteresis depending on device structure ............................................................ 5.2.1. J-V Hysteresis versus perovskite thickness and Al as cathode ............................ 5.2.2. J-V Hysteresis versus perovskite thickness and Ag as cathode ............................ 5.3. J-V Hysteresis depending on measurement conditions ............................................... 5.3.1. J-V hysteresis versus voltage scanning rate ......................................................... 5.3.2. J-V hysteresis versus initial applied voltage ......................................................... 5.3.3. J-V hysteresis versus measuring temperature ....................................................... 5.4. Conclusion ................................................................................................................... Reference ............................................................................................................................ Chapter 6. Ageing study in CH 3 NH 3 PbI 3-x Cl x based inverted perovskite solar cells ......................................................................................................... Summary ............................................................................................................................ 6.1. Introduction ................................................................................................................. 6.2. Thin film analysis of CH 3 NH 3 PbI 3-x Cl x based perovskite .......................................... 6.2.1. Optical thin film analysis ...................................................................................... 6.2.2. Electrical thin film analysis .................................................................................. 6.2.3. Surface analysis .................................................................................................... 6.3. 
J-V characteristics of CH 3 NH 3 PbI 3-X Cl X based PSCs ................................................. 6.4. Ionic diffusion ............................................................................................................. 6.5. Conclusion ................................................................................................................... Reference ............................................................................................................................ Conclusion and outlook 오류! 책갈피가 정의되지 않았습니다. Figure 1 . 1 Figure 1.2 shows the summary of the best research cell efficiencies from different types of photovoltaic devices throughout the timeline. The first practical photovoltaic devices were demonstrated in the 1950s. Research and development of photovoltaics received its first major boost from the space industry in the 1970s which required a power supply separate from "grid" power for satellite applications. And there was the oil crisis in the 1970s to focus world attention on the desirability of alternate energy sources for terrestrial use, which in turn promoted the investigation of photovoltaics as a means of generating terrestrial power. In the 1980s research into silicon solar cells paid off and solar cells began to increase their efficiency. In 1985 silicon solar cells achieved the milestone of 20% efficiency. Over the next decade, the photovoltaic industry experienced steady growth rates of between 15% and 20%, largely promoted by the remote power supply market. Furthermore, researchers began studying the various solar cells such as dye-sensitized cell in 1991, organic photovoltaics devices (OPV) in 2001, and quantum dot cells in 2010. occurs at zero current. The open-circuit voltage corresponds to the amount of forward bias on the solar cell due to the bias of the solar cell junction with the light-generated current. The open-circuit voltage is shown in figure 1.3.b. The equation for V OC is found by setting the net current equal to zero in the solar cell equation to give: 4 .Figure 1 . 4 . 414 Figure 1. 4. J-V curve showing 2 different resistances Figure 1 . 1 Figure 1. 16. J-V characteristics of (a) planar heterojunction PbI2 and (b) CH 3 NH 3 PbI 3-X Cl X perovskite solar cells.[START_REF] Jena | The interface between FTO and the TiO2 compact layer can be one of the origins to hysteresis in planar heterojunction perovskite solar cells[END_REF] have recently confirmed iodide migration to positive electrode, leaving iodide vacancies at the negative electrode through DC dependent electroabsorption (EA) spectra, temperature dependent electrical measurement and XPS characterization. According to the authors, accumulation of iodide ions at one interface and the corresponding vacancies at the other creates barriers for carrier extraction. Modulation of such interfacial barriers at CH 3 NH 3 PbI 3-x Cl x / Spiro-OMeTAD and TiO2/ CH 3 NH 3 PbI 3-x Cl x , caused by the migration of iodide ions/interstitials driven by an external electrical bias leads to J-V hysteresis in planar (FTO/TiO2 CL/ CH 3 NH 3 PbI 3-x Cl x /Spiro-OMeTAD/Au) perovskite solar cells. Based on temperature dependence of hysteretic change in current density, Grätzel et al. 70 estimated activation energy for diffusion of different ions in MAPbI3 and found that the iodide ions have the highest mobility with lowest activation energy. 
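The chapter-1 excerpt above states that the open-circuit voltage follows from setting the net current to zero in the solar-cell diode equation, but the expression itself was lost during extraction. The standard result it refers to is, in LaTeX notation:

V_{OC} = \frac{n k T}{q}\,\ln\!\left(\frac{I_L}{I_0} + 1\right)

with I_L the light-generated current, I_0 the diode saturation current, n the ideality factor, k the Boltzmann constant, T the temperature and q the elementary charge. This is the textbook form; the notation of the original equation in the thesis could not be recovered.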
Hence, it is the halide anions (I-) not the MA+ ions which migrate more easily in perovskite, causing polarization and charge accumulation at interfaces under voltage scans and eventually creates hysteresis 70 . Figure 1 . 1 Figure 1. 18. (a) device structure and (b) forward and backward J-V curves of perovskite cells without and with PCBM (thermally annealed for 15 and 45min)[START_REF] Shao | Origin and elimination of photocurrent hysteresis by fullerene passivation in CH3NH3PbI3 planar heterojunction solar cells[END_REF] Figure 1 . 19 . 119 Figure 1. 19. Schematics showing shallow and deep trap states, and their role in causing hysteresis.[START_REF] Li | Hystersis mechanism in perovskite photovoltaic devices and its potential application for multi-bit memory devices[END_REF] Figure 2 . 2 1(b) shows the ITO substrate after etching process. graph versus wavelength. Absorption may be presented as transmittance (T=I/I 0 ) or absorbance (A=logI 0 /I). If no absorption has occurred, T=1.0 and A=0. Most spectrometers display absorbance on the vertical axis, and the commonly observed range is from 0 (100% transmittance) to 2 (1% transmittance). The wavelength of maximum absorbance is a characteristic value, designated as λ . The optical band gap (E opt ), expressed in electronvolt, depends on the incident photon wavelength by means of the Planck relation 2 ) 2 All elements have different sized nuclei and as the size of the atom the light incident upon the sample may be decomposed into s and p component (the scomponent of the electric field is oscillating parallel to the sample surface and vertical to plane of incidence, the p-component is of the electric field oscillating parallel to the plane of incidence). The reflection coefficients of the s and p component, after reflection are denoted by Rs and Rp. The fundamental equation of ellipsometry is then written: Thus, tanΨ is the amplitude ratio upon reflection, and Δ is the phase shift. Since ellipsometry is measuring the ratio of two values (rather than the absolute value of either), it is very robust, accurate (can achieve angstrom resolution) and reproducible. For instance, it is insensitive to scatter and fluctuations, and requires no standard or calibration. Experimental, the advantages of ellipsometry measurement are -Non-destructive and non-contact technique -No sample preparation -Solid and liquid samples -Fast thin film thickness mapping -Single and multi layer samples -Accurate measurement of ultra-thin films of thickness < 10nm 9 ) 9 Probes are applied to a pair of contacts and the resistance is measured by applying a voltage across these contacts and measuring the resulting current. The current flows from the first probe into the metal contact, across the metal-semiconductor junction, through the sheet of semiconductor, across the metal-semiconductor junction again (except probe lithography and local stimulation of cells. Simultaneous with the acquisition of topographical images, other properties of the sample can be measured locally and displayed as an image, ofen with similarly high resolution. Examples of such properties are mechanical properties like stiffness or adhesion strength and electrical properties such as conductivity or surface potential. In fact, the majority of SPM techniques are extensions of AFM that use this modality. Figure 3 . 2 . 32 Figure 3. 2. 
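Among the characterization methods summarized above, the optical band gap is obtained from the UV-Vis absorption edge through the Planck relation E = hc/lambda. The lines below show the straightforward conversion together with a crude way of picking the edge from an absorbance spectrum; the threshold criterion and the example numbers are assumptions for illustration, not the exact procedure used in the thesis.

import numpy as np

HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def bandgap_from_edge(wavelength_nm):
    """Optical band gap (eV) from an absorption-edge wavelength (nm)."""
    return HC_EV_NM / wavelength_nm

def absorption_edge(wl_nm, absorbance, threshold=0.1):
    """Longest wavelength at which absorbance still exceeds a threshold fraction
    of its maximum -- a simple stand-in for an edge criterion."""
    wl_nm, absorbance = np.asarray(wl_nm), np.asarray(absorbance)
    above = absorbance >= threshold * absorbance.max()
    return wl_nm[above].max()

# Example: an edge near 785 nm corresponds to the ~1.58 eV gap quoted for
# the CH3NH3PbI3-xClx film.
print(bandgap_from_edge(785.0))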
The photo images of the perovskite dipping-process (a) dipping in MAI solution, (b) Just after dipping process, (c) full device after electrode deposition, and (d) annealing process ( MAI) has to be penetrated into PbI 2 film during dipping process in MAI solution. First, we dipped the PbI 2 film into isopropyl alcohol (IPA) solvent during 10-15 seconds for cleaning.Thereafter, we made progress the dipping process in MAI solution during 30-40 seconds as shown in figure3.2a. We could see the change of the color of the film from yellow to dark brown upon dipping time. The IPA dip-cleaning progressed again during 10-15 seconds, immediately after the MAI dipping process. Finally, we did the spin-coating process in 4000 rpm during 30 seconds and annealing at 70℃ during 45 minutes for removing IPA solvent remaining in the perovskite film. The dipping technique is a sensitive process. There are numerous experimental conditions effecting the crystallinity of the perovskite thin films such as the dipping angle, dipping speed, pulling speed of dipped samples. However, we decided to study the dipping process due to the high performance reported among the solution processed PSCs at that time. Figure 3 . 5 and 35 Figure 3.6 show the light absorbance and transmittance characteristics of PbI 2 and perovskite film in the range of 300 to 800 nm wavelengths, respectively. First, the PEDOT:PSS layer, well-known as transparent hole transport layer (HTL), has zero absorption in a range of 300 to 800 nm wavelengths (red solid line in figure3.5 and 3.6). As depicted in figure3.5, the slower PbI 2 solution spun, the more light were absorbed due to the thicker thickness. However, the energy band-gaps of the PbI 2 films were fixed around 2 eV in spite of different thickness. In general, the energy band gap of thin film can be defined by the maximum wavelength of absorption range. Figure 3 . 5 . 35 Figure 3. 5. The (a) transmittance, and (b) absorbance graphs of the PbI 2 film in different thickness controlled by rpm conditions. Figure 3 . 3 . 7 . 337 Figure 3.6 shows the absorbance and transmittance characteristics of perovskite thin films depending on different dipping times in MAI solution. The absorbance of the film advanced by increasing the dipping time until 50 seconds and dropped over than 50 seconds. The crystallinity of the perovskite thin film collapsed due to the MAI solution damage from overtime-dipping process. We can confirm the damage by SEM measurement as shown in figure 3.7. The perovskite thin film dipped in ideal time has outstanding crystalline quality with 100 nm of grain size (figure 3.7a). However, the film crystallinity collapsed immediately over than 50 seconds dipping time (figure 3.6b). Figure 3 . 3 figure 3.23a, the color of perovskite thin film was completely dark brown with nice uniformity comparing to the photos in figure3.12 (the perovskite thin film deposited by using 2-step dipping technique). Figure 4 . 4 Figure 4.1a shows the inverted (or p-i-n) CH 3 NH 3 PbI 3-x Cl x based planar structure PSCs devices used in this work (ITO = anode, Ag = cathode). The photovoltaic performance of a series of 10 samples is reported in table1. Table1shows a good reproducibility of the Figure 4 . 1 . 41 Figure 4. 1. 
(a) Detailed scheme of the perovskite planar solar cell architecture facing the plasma of GD-OES, (b) Measured J-V curves of the best CH 3 NH 3 PbI 3-x Cl x solar cell under 1 sun illumination scanned in the forward (blue line) and reverse (red line) directions, (c) pictures of the perovskite solar cells before and after GD-OES measurement with GD crater visible. 5 V to +2.5 V vs. anode) are shown in figure 4.7 and figure 4.8 for iodide, chloride, lead and nitrogen ions. For clarity, the intensity values in figure 4.8 are shifted vertically as compared to intensity values in figure 4.5 in order to highlight the ionic migration. In the GD-OES line profiles, the initial sputtering time (30 s) corresponds to the PCBM-perovskite interface and the final sputtering time (60 s) to the PEDOT:PSS-perovskite interface. As shown in figure 4.7 and figure 4.8, two different behaviors are obtained: on one hand, the iodide and chloride ions (Figure 4.8a and 4.8b respectively) move according to the sign of the voltage and on the other hand lead and nitrogen do not move at all (Figure 4.8c and 4.8d respectively). Figures 4 . 4 Figures 4.7aand 4.8a display GD-OES profile lines of iodide ion versus sputtering time Figures 4.7aand 4.8a display GD-OES profile lines of iodide ion versus sputtering time for different biases showing directly the iodide ion movements. For a negative bias, I -ions move towards PCBM layer and for the positive bias I -ions move towards PEDOT:PSS layer. Figure 4 . 4 Figure 4.15 shows GD-OES profile lines of iodide and chloride ions versus sputtering time for different recovery time (from 1 to 3 minutes after stopping the applied voltage) showing directly the reversibility of ionic migration. By considering the plasma etching direction (from silver to ITO), the shorter sputtering time is silver cathode side and the longer sputtering time is the ITO anode side. For generating the ionic migration, 1.5 V or -1.5 V were applied in the PSCs device during 30 seconds (Figure 4.15). In comparison with the GD-OES profile lines measured before applying bias, both iodide and chloride ions are shifted towards the PEDOT:PSS side (longer sputtering time) under the positive bias and both iodide and chloride ions are shifted toward the silver cathode side (shorter sputtering time) under the negative bias. These results confirm that the iodide and chloride ions are negatively charged species. The GD-OES profile lines of iodide ions in the perovskite film (red solid lines in Fig 4.15a and 4.15b) show only one peak (sputtering time around 40 s) Figure 5 . 4 . 54 Figure 5. 4. Transmission line measurement (TLM) versus perovskite thickness on top of glass; (a) 450 nm, (b) 390 nm, and (c) 350 nm Fig. 5 . 5 Fig. 5.13b, while the slope generated under positive initial applied voltage induces the decrement of leakage current potential as Fig. 5.13d. Both V 0 and leakage current density versus initial applied voltage represent an interpretation of the energy band model considering the halide ionic migration. Fig. 6 . 6 Fig.6.1 represents the photos of the perovskite sample showing the color variation during ageing progress. The sample structure is glass/ITO/PEDOT:PSS/perovskite for confirming the identical conditions of bottom layers as PSCs device. The sample is stored in the glove box (N 2 condition) without illumination at room temperature. It is for removing the ageing effects due to thermal, water, moisture and illumination. 
During the 1 st week, the color of the perovskite thin film fixed in dark brown as initial color. However, the film color becomes light brown between the 2 nd week and the 4 th week. Finally, it takes over 6 weeks to change color to full yellow. As shown in Figure6.1 for the 5 th week aged sample, the perovskite thin film turns yellow from the edge of the sample due to the exposed area difference to air. The color variation of the perovskite thin film (dark brown à yellow) CH 3 NH 3 33 PbI 3-x Clx based perovskite thin film deposited by 1-step spin-casting process. The PEDOT:PSS and PC 60 BM layers are deposited by spin-coating process as HTL and ETL, respectively. The electrical transport characteristics (conductivity, resistivity, contact resistance, and carrier diffusion length) of the film are measured by TLM and SSPG techniques. The crystallinity of the film is optimized with the results of SEM and XRD. Real space carrier distribution measurement with the help of Kelvin Probe Force Microscopy (KPFM) also shows unbalanced hole and electron extraction rate in perovskite devices using TiO2 and spiro-OMeTAD as electron collector and HTM, respectively[START_REF] Tao | 17.6 % steady state efficiency in low temperature processed planar perovskite solar cells[END_REF] . It is known that structural defects and/or mismatching at any heterogeneous interface can develop a potential barrier for carrier extraction and thus, results in accumulation of these carries at the interfaces. Therefore, such interfacial defects lead to unbalanced carrier extraction. . Unbalanced carrier extraction (hole extraction rate ≠ electron extraction rate) caused by imperfect matching of the properties of layers is believed to result in J-V hysteresis. Heo et al. [START_REF] Im | 18.1 % hysteresis-less inverted CH3NH3PbI3 planar perovskite hybrid solar cells[END_REF] found that perovskite cells of inverted architecture using PCBM as electron collector do not show hysteresis. PCBM being more conductive (0.16 mS/cm) than the widely used TiO2 (6 × 10-6 mS/cm) collects/separates the electron more efficiently from perovskite (CH 3 NH 3 PbI 3 ), resulting in balanced carrier extraction (hole extraction rate = electron extraction rate), which in consequence, eliminated hysteresis. Table 1 . 1 . 
11 Several representative devices performances of inverted planar structured PSCs Perovskite Processing HTL ETL V OC (V) J SC (mA/cm 2 ) FF (%) PCE (%) Stability # One-Step PEDOT PC 61 BM/BCP 0.60 10.32 63 3.9 - Two-Step PEDOT PC 61 BM 0.91 10.8 76 7.4 - One-Step (Cl) PEDOT PC 61 BM 0.87 18.5 72 11.5 - One-Step (Cl) PEDOT PC 61 BM/TiO X 0.94 15.8 66 9.8 - Solvent Engineering PEDOT PC 61 BM/LiF 0.87 20.7 78.3 14.1 - One-Step (Moisture, Cl) PEDOT PC 61 BM/PFN 1.05 20.3 80.2 17.1 - One-Step (Hot-casting, PEDOT PC 61 BM 0.94 22.4 83 17.4 - Cl) One-Step (HI additive) PEDOT PC 61 BM 1.1 20.9 79 18.2 - Co-Evap PEDOT/Poly-TPD PC 61 BM 1.05 16.12 67 12.04 - Co-Evap PEDOT /PCDTBT PC 61 BM/LiF 1.05 21.9 72 16.5 - Two-Step Spin-Coating PTAA PC 61 BM/ C 60 /BCP 1.07 22.0 76.8 18.1 - One-Step Solvent PEDOT C 60 0.92 21.07 80 15.44 - One-Step(Cl) PEDOT PC 61 BM/ZnO 0.97 20.5 80.1 15.9 140 h One-Step(Cl) PEDOT PC 61 BM/ZnO 1.02 22.0 74.2 16.8 60 days One-Step NiO X PC 61 BM/BCP 0.92 12.43 68 7.8 - One-Step NiO X PC 61 BM/C 60 1.11 19.01 73 15.4 244 h Solvent Engineering NiO X PC 61 BM/LiF 1.06 20.2 81.3 17.3 - Two-Step NiO X ZnO 1.01 21.0 76 16.1 > 60days Solvent Engineering NiLiMgO PC 61 BM /TiO 2 :Nb 1.07 20.62 74.8 16.2 1000 h (Sealed) 1,2 Its owns ABX 3 crystal structures, where A,B, and X are organic cation, metal cation, and halide anion, respectively. The bandgap can be tuned from the ultraviolet to infrared region through varying these components. [2][3][4] This family of materials exhibits a myriad of properties ideal for PV such as high dual electron and hole mobility, large absorption coefficients resulting from s-p antibonding coupling, a favorable band gap, a strong defect tolerance and shallow point defects, benign grain boundary recombination effects and reduced surface recombination. 5 After 7 years efforts, the power conversion efficiency (PCE) of perovskite solar cells has risen from about 3% to 22%. 1,6-16 The PC 60 BM was used as electron transport layer (ETL). The highest occupied molecular orbital (HOMO) level of PC 60 BM (-6.3eV) is enough to block the hole generated in the perovskite film (the valence band of perovskite is at -5.4eV). PC 60 BM has same lowest unoccupied molecular orbital (LUMO) level as perovskite film (-3.9eV). It makes that electron can pass through well from perovskite to Al electrode (-4.2eV). Therefore, PC 60 BM is suitable material as ETL. The PC 60 BM is diluted in Chlorobenzene (CB) in 2wt% and then it was stored in glove box (N 2 condition) at room temperature. -phenyl-C60- butyric acid methyl ester (PC 60 BM) solution have to be prepared one day before the deposition process for dissolving sufficiently. First, the CH 3 NH 3 PbI 3 film was used as the active layer in perovskite solar cells. PbI 2 solution and CH 3 NH 3 I solution have to be dissolved at least 16 hours before deposition process. For PbI 2 solution, PbI 2 was diluted in N,N-Dimethylformamide (DMF) solvent in 33wt%. It was stored in N 2 condition with annealing at 80°C. For CH 3 NH 3 I (MAI) solution, MAI was diluted in 2-propanol (IPA) solvent in 10mg/ml. The solution was stored in glove box at room temperature. Table 4 . 1 . 41 Photovoltaic performance of perovskite solar cells used in this study. J SC (mA/cm 2 ) V OC (V) FF (%) PCE (%) Average performance (10 samples) 19.9 (±2) 0.92 (±0.02) 64.2 (±4) 11.7 (±0.8) Best performance Forward scan 19.9 0.92 67.0 12.3 Reverse scan 20.2 0.93 67.0 12.6 Table 4 . 2 . 
42 The performance parameters of the perovskite solar cells fabricated for direct measurement of ion migration using GD-OES analysis. Figure 4.6 is the GD-OES results of I and Cl in the perovskite thin film without applied voltage. As shown in figure 4.6a, the initial I profile line (before applied voltage) is changed depending on annealing temperature of perovskite thin film. It is not symmetric (little shifted toward PCBM) when the perovskite film was annealed at 80 ℃ in N 2 conditions. However, it Device (#) V OC (V) J SC (mA/cm 2 ) FF (%) PCE (%) 1 0.94 20.2 60 11.5 2 0.92 19.8 63 11.5 3 0.91 20.0 63 11.5 4 0.92 19.1 67 11.7 5 0.92 19.9 64 11.7 6 0.91 18.7 67 11.3 7 0.93 18.4 67 11.3 8 0.94 21.9 62 12.6 9 0.91 20.8 62 11.6 10 0.93 20.2 67 12.6 Average 0.92 19.9 64.2 11.7 4.5.2. Lead and MA ion migration GD-OES profile lines of Pb and N show no ionic migration under an applied bias (Figure 4.7c, 4.7d, 4.8c, and 4.8d). These results are in accordance with the fact that the migration activation energies of Pb (2.31 eV) and MA (0.84 eV) ions vacancies are higher than the value of I ions vacancies (0.58 eV) as reported by Eames et al. 7 Reference .............................................................................................................................. Acknowledgements Chapter 5. Hysteresis characteristics in CH 3 NH 3 PbI 3-x Cl x based inverted solar cells Summary The current voltage (J-V) hysteresis observed in the perovskite solar cells (PSCs) is reported as a key issue. This chapter shows the J-V hysteresis versus structure variation (thickness of perovskite layer and type of cathode) and versus measurement conditions (scanning rate, initially applied voltage, and measuring temperature). Not only the J-V curves under illumination but also the J-V curves in dark condition were analyzed for understanding the ionic migration effect on J-V performance. It's because that we need to consider the J-V hysteresis without photo-generated excess carriers. Introduction The continuous and skyrocketing rise in power conversion efficiency (PCE) of hybrid organic-inorganic perovskite materials (HOIPs) based solar cells [1][2][3] has attracted enormous attention among the photovoltaics community. These materials have become an utmost interest to all working on photovoltaic technologies because of its high absorption coefficient and long range carrier diffusion with minimal recombination, which are the main factors usually used to explain the large current density, high open circuit voltage, and thus high PCE of perovskite solar cells (PSCs). However, there are several unusual issues necessary to tackle in order to further improve their efficiency. Among them are the hysteresis observed in current-voltage curves, side distribution in performance, difficulties in reproducing the results, etc. These issues require deeper scientific understanding and demand serious attention. Among all the above issues, hysteresis has been considered by the community as crucial. It has been widely observed that perovskite solar cells show substantial mismatch between the current density -voltage curve (J-V curve) measured on forward scan (from negative to positive bias) and backward scan (from positive to negative bias). Hysteretic J-V curves imply that there is a clear difference in transient carrier collection at a given voltage whether it is measured during a forward or a backward scan. 
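The vacancy-migration activation energies quoted above (0.58 eV for iodide, 0.84 eV for MA and 2.31 eV for lead, after Eames et al.) explain why only the halides respond on the minute time-scale: a thermally activated hop rate scales as exp(-Ea/kT). The snippet below compares these Boltzmann factors at room temperature; the attempt frequency is a generic assumed value, so only the ratios between species are meaningful.

import numpy as np

K_B_EV = 8.617e-5     # Boltzmann constant, eV/K
NU0 = 1e12            # assumed attempt frequency in Hz (order of magnitude)

def hop_rate(Ea_eV, T=300.0):
    """Thermally activated vacancy hop rate, nu0 * exp(-Ea / kT)."""
    return NU0 * np.exp(-Ea_eV / (K_B_EV * T))

for ion, Ea in [("I", 0.58), ("MA", 0.84), ("Pb", 2.31)]:
    print(ion, f"{hop_rate(Ea):.3e} Hz")

# Iodide hops roughly 2e4 times faster than MA and ~1e29 times faster than Pb,
# consistent with Pb and MA appearing immobile in the GD-OES experiments.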
In general, carrier collection in the device depends on carrier generation, separation, and its transport from the bulk across the different interfaces of the device. As carrier generation and separation are considered as We must notice that the temperature dependence of diffusivity shows two distinct regions: extrinsic region (low temperature) and intrinsic region (high temperature). For the same reasons, temperature dependence of ionic conductivity also exhibit intrinsic and extrinsic regions. Now, we can develop a theory for ionic conduction using Boltzmann statistics ( = - , where is the accommodation coefficient, is the vibrational frequency of the ions, and E a can be considered as activation energy or free energy change per atom.) and see whether we arrive at the same formalism as the ionic conductivity versus diffusivity. We can write an expression for conductivity as This expression has similar form, which was shown above by using diffusivity and mobility, and it shows the similar dependence on temperature as well as activation energy for migration. In conclusion, the activation energy of ionic conductivity can quantitatively characterize the rate of ion migration, which can be extracted from the temperaturedependent electrical conductivity by the Nernst-Einstein relation. UV exposed ITO was used as the anode. Al or Ag was deposited as the cathode on top of PCBM layer. Two square (dashed line) represents the parameters of the device structure to control the J-V hysteresis; the thickness of the active layer (perovskite thin film) and cathode (Al / Ag). The thickness of the active layer was controlled by rpm conditions in the spincasting process and the cathode layer was deposited by thermal vacuum evaporation process. J-V Hysteresis depending on device structure J-V Hysteresis depending on measurement conditions J-V hysteresis versus voltage scanning rate The measurement condition for studying J-V hysteresis is the voltage scanning rate in the range between 25 mV/s and 25,000 mV/s. We already discussed reversibility of the halide ionic migration and its recovering time (around 2 -3 minute) by using GD-OES analysis as figure 4.15. This recovering time is longer than the voltage scanning time in J-V curve measurement (less than 1 minute). Therefore the halide ionic migration can be sustained during J-V measurement. And the amount of halide ionic migration can be changed by the voltage scanning rate. Fig. 5.7 and Fig. 5.8 show the J-V performances under illumination versus voltage scanning rate. As the scanning rate increased, the J-V hysteresis became stronger. It is due to that the ionic migration, induced by initial applied voltage, is kept well when the voltage-scanning rate is high. The short circuit current (J SC ) in reverse bias measurement is higher than that in forward bias measurement in fast voltage scanning rate. As the scanning rate is reduced, not only J SC but also V OC , FF, and PCE approach a similar value (0.912 V, 65%, and 11.4%), indicating that the J-V hysteresis becomes weaker. When the voltage scanning direction is forward, the V OC value is constant at 0.913 V in the whole range of scanning rates (25 -25,000 mV/s). However, the V OC value increased as a function of voltage scanning rate decrement in reverse voltage scanning direction. Both in forward and reverse scanning direction, fill factor (FF) is increased by decreasing the voltagescanning rate. Chapter 6. 
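The scan-rate study above reports that the mismatch between forward and reverse J-V branches grows with scan rate. One common way to reduce this mismatch to a single number is a hysteresis index based on the area between the two branches; the definition below is one such convention, offered as an illustration rather than the metric actually used in this work.

import numpy as np

def hysteresis_index(v, j_reverse, j_forward):
    """Area between reverse- and forward-scan J-V branches, normalised by the
    area under the reverse branch (both sampled on the same voltage grid)."""
    num = np.trapz(np.abs(np.asarray(j_reverse) - np.asarray(j_forward)), v)
    den = np.trapz(np.abs(np.asarray(j_reverse)), v)
    return num / den

# Usage idea: compute this index for each scan rate (25 to 25,000 mV/s) and plot
# it versus rate; larger values at fast scans would reflect ionic distributions
# that cannot relax within the measurement time.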
Ageing study in CH 3 NH 3 PbI 3-x Cl x based inverted perovskite solar cells Summary The poor stability is reported as one of the key issues in the perovskite solar cells (PSCs). In this study, we used various thin film analysis techniques for investigating the ageing process of the CH 3 NH 3 PbI 3-x Cl x based perovskite thin film such as optical thin film analysis (UV-Vis spectroscopy and X-ray diffraction (XRD)), electrical thin film analysis (transmission line measurement (TLM)), surface thin film analysis (atomic force microscopy (AFM)). We investigated that the surface variation is more critical than the bulk variation during ageing process. The direct experimental evidence of iodide ionic diffusion toward silver electrode during ageing process is observed by studying the glow discharge optical emission spectroscopy (GD-OES) analysis. Introduction With power conversion efficiencies (PCE) beyond 22%, organic-lead-halide perovskite solar cells (PSCs) are stimulating the photovoltaic research scene. However, despite the big excitement, the unacceptably low-device stability under operative conditions currently represents an apparently unbearable barrier for their market uptake. [1][2][3][4][5] Notably, a marketable product requires a warranty for 20-25 years with <10% drop in performance. This corresponds, on standard accelerated ageing tests, to having <10% drop in PCE for at least 1,000h. Hybrid perovskite solar cells are still struggling to reach this goal. Perovskite are sensitive to water and moisture, ultraviolet light and thermal stress. [6][7][8] When exposed to moisture, the perovskite structure tend to hydrolyse, 6 undergoing irreversible degradation and decomposing back into the precursors, for example, the highly hygroscopic CH 3 NH 3 X and CH(NH 2 ) 2 X salts and PbX 2 , with X=halide, a process that can be dramatically accelerated by heat, electric field and ultraviolet exposure. 7,8 Material instability can be controlled to a certain extent using cross-linking additives 9 or by compositional engineering 10 , that is, adding a combination of Pb(CH 3 CO 2 ) 2 •3H 2 O and PbCl 2 in the precursors 11 or using cation cascade, including Cs and Rb cations, as recently demonstrated, 2,3 to reduce the material photoinstability and/or optimize the film morphology. However, solar cell degradation is not only due to the poor stability of the perovskite layers, but can be also accelerated by the instability figure 3.19 (chapter 3). Though the contact resistance is high, we get great reproducibility of TLM results. After 3 days, the contact resistance is increased to 6E+10 Ω, which is around 130 times higher than the initial value. The resistivity is 3.4 E+6 Ω•cm after 3 days. This value is 3 times higher than the initial resistivity. We can conclude that the variation of contact resistance is more significant than the variation of resistivity. The ageing effect of the perovskite thin film is more critical from surface than from bulk of the perovskite thin film. These TLM results are in agreement with the results that we speculated with the UV-Vis spectroscopy analysis (figure 6.2). However, after 3 days, the TLM method is no more available due to the too low current resulting from the ageing process. The total resistance becomes over than 5E+12 Ω and linear increment versus electrode length is no longer observed. 
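The TLM analysis discussed above relies on the total resistance increasing linearly with electrode spacing, R_total = 2 R_C + R_sheet * L / W, so that a linear fit yields the contact resistance from the intercept and the sheet resistance from the slope. The sketch below performs this fit; the spacing values, resistances and channel width are placeholders, and, as noted above, the fit ceases to be meaningful once the aged film no longer shows a linear trend.

import numpy as np

def tlm_fit(spacings_cm, resistances_ohm, width_cm):
    """Extract (contact resistance in ohm, sheet resistance in ohm/sq)
    from transmission-line-measurement data."""
    slope, intercept = np.polyfit(spacings_cm, resistances_ohm, 1)
    r_contact = intercept / 2.0
    r_sheet = slope * width_cm
    return r_contact, r_sheet

# Placeholder geometry: electrode gaps in cm and a 0.1 cm wide channel.
L = np.array([0.005, 0.010, 0.020, 0.040])
R = np.array([1.2e9, 1.6e9, 2.4e9, 4.0e9])   # illustrative resistance values only
print(tlm_fit(L, R, width_cm=0.1))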
Surface analysis As we already discussed with the UV-Vis spectroscopy results (figure 6.2) and TLM analysis (figure 6.5), we observed that the ageing effect is more critical from surface than from bulk of the perovskite thin film. In this paragraph, we used AFM and SEM analysis for investigating the surface variation during ageing process. For preparing the AFM and SEM samples, the CH 3 NH 3 PbI 3-x Clx based perovskite thin film is deposited onto the glass/ITO/PEDOT:PSS by using spin-casting process in the glove box (N 2 condition). We tried to fabricate the perovskite thin film at the same conditions as the PSCs. The AFM and SEM samples are stored in the N 2 condition and in dark. found, must be one of them. This behavior will be the great guideline for understanding PSCs device ageing performance and for the design of future stable PSCs device.
128,828
[ "781966" ]
[ "1165" ]
00175525
en
[ "sdv" ]
2024/03/05 22:32:10
2005
https://ens-lyon.hal.science/ensl-00175525/file/PNAS-Origins.pdf
Marie Touchon Samuel Nicolay Benjamin Audit Edward-Benedict Brodie Of Brodie Yves D'aubenton-Carafa Alain Arneodo Claude Thermes Minor category: EVOLUTION Replication-associated strand asymmetries in mammalian genomes: towards detection of replication origins In the course of evolution, mutations do not affect both strands of genomic DNA equally. This mainly results from asymmetric DNA mutation and repair processes associated with replication and transcription. In prokaryotes, prevalence of G over C and T over A is frequently observed in the leading strand. The sign of the resulting TA and GC skews changes abruptly when crossing replication origin and termination sites, producing characteristic step-like transitions. In mammals, transcriptioncoupled skews have been detected, but so far, no bias has been associated with replication. Here, analysis of intergenic and transcribed regions flanking experimentally-identified human replication origins and the corresponding mouse and dog syntenic regions demonstrates the existence of compositional strand asymmetries associated with replication. Multi-scale analysis of human genome skew profiles reveals numerous transitions that allow us to identify a set of one thousand putative replication initiation zones. Around these putative origins, the skew profile displays a characteristic jagged pattern also observed in mouse and dog genomes. We therefore propose that in mammalian cells, replication termination sites are randomly distributed between adjacent origins. Altogether, these analyses constitute a step toward genome-wide studies of replication mechanisms. INTRODUCTION Comprehensive knowledge of genome evolution relies on understanding mutational processes that shape DNA sequences. Nucleotide substitutions do not occur at similar rates and in particular, owing to strand asymmetries of the DNA mutation and repair processes, they can affect each of the two DNA strands differently. Asymmetries of substitution rates coupled to transcription have been observed in prokaryotes (1)(2)(3) and in eukaryotes (4)(5)(6). Strand asymmetries (i.e. G ≠ C and T ≠ A) associated with the polarity of replication have been found in bacterial, mitochondrial and viral genomes where they have been used to detect replication origins (7)[START_REF] Mrazek | Proc. Natl. Acad. Sci[END_REF][START_REF] Tillier | [END_REF]. In most cases, the leading replicating strand presents an excess of G over C and of T over A. Along one DNA strand, the sign of this bias changes abruptly at the replication origin and at the terminus. In eukaryotes, the situation is unclear. Several studies failed to show compositional biases related to replication and analyses of nucleotide substitutions in the region of the ß-globin replication origin did not support the existence of mutational bias between the leading and the lagging strands [START_REF] Mrazek | Proc. Natl. Acad. Sci[END_REF]10,11). In contrast, strand asymmetries associated with replication were observed in the subtelomeric regions of Saccharomyces cerevisiae chromosomes, supporting the existence of replication-coupled asymmetric mutational pressure in this organism (12). We present here analyses of strand asymmetries flanking experimentally-determined human replication origins, as well as the corresponding mouse and dog syntenic regions. Our results demonstrate the existence of replication-coupled strand asymmetries in mammalian genomes. 
Multiscale analysis of skew profiles of the human genome using the wavelet transform methodology, reveals the existence of numerous putative replication origins associated with randomly distributed termination sites. Data and Methods Human replication origins. Nine replication origins were examined, namely those situated near the genes MCM4 (13), HSPA4 (14), TOP1 (15), MYC (16), SCA-7 (17), AR (17), DNMT1 (18), LaminB2 [START_REF] Giacca | Proc. Natl. Acad. Sci[END_REF] and ß-globin [START_REF] Kitsberg | [END_REF]. Sequences. Sequence and annotation data were retrieved from the Genome Browser of the University of California Santa Cruz (UCSC) for the human (May 2004), mouse (May 2004) and dog (July 2004) genomes. To delineate the most reliable intergenic regions, transcribed regions were retrieved from "all_mrna", one of the largest sets of annotated transcripts. To obtain intronic sequences, we used the KnownGene annotation (containing only protein-coding transcripts); when several transcripts presented common exonic regions, only common intronic sequences were retained. For the dog genome, only preliminary gene annotations were available, precluding the analysis of intergenic and intronic sequences. To avoid biases intrinsic to repeated elements, all sequences were masked with RepeatMasker, leading to 40-50% sequence reduction. Strand asymmetries. The TA and GC skews were calculated as S TA = (T -A) /(T + A), S GC = (G -C) /(G + C) and the total skew as S = S TA + S GC , in non-overlapping 1 kbp windows (all values are given in percent). The cumulated skew profiles Σ TA and Σ GC were obtained by cumulative addition of the values of the skews along the sequences. To calculate the skews in transcribed regions, only central regions of introns were considered (after removal of 530 nt from each extremity) in order to avoid the skews associated with splicing signals (6). To calculate the skews in intergenic regions, only windows that did not contain any transcribed region were retained. To eliminate the skews associated with promoter signals and with transcription downstream of polyA sites, transcribed sequences were extended by 0.5 kbp and 2 kbp at 5' and 3' extremities, respectively (6). Sequence alignments. Mouse and dog regions syntenic to the six human regions shown in Fig. 1 were retrieved from UCSC (Human Synteny). Mouse intergenic sequences were individually aligned using PipMaker (21) leading to a total of 150 conserved segments larger than 100 bp (> 70% identity) corresponding to a total of 26 kbp (5.3% of intergenic sequences). Wavelet-based analysis of the human genome. The wavelet transform (WT) methodology is a multiscale discontinuities tracking technique [START_REF] Arneodo | The Science of Disaster[END_REF][START_REF] Nicolay | [END_REF] (for details, see Supplementary material). The main steps involved in detection of jumps were the following. We selected the extrema of the first derivative ′ S of the skew profile S smoothened at large scale (i.e. computed in large windows). The scale 200 kbp was chosen as being just large enough to reduce the contribution of discontinuities associated with transcription (i.e. larger than most human genes (24)), yet as small as possible so as to capture most of the contributions associated with replication. 
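The skew definitions in the Methods above translate directly into a few lines of code: S_TA = (T - A)/(T + A) and S_GC = (G - C)/(G + C) computed in non-overlapping 1 kbp windows, with the cumulated skews obtained by summing the window values along the sequence. The sketch below follows those definitions only; the repeat masking and the intron/intergenic filtering described in the text are not reproduced here.

import numpy as np

def window_skews(seq, window=1000):
    """TA and GC skews (in percent) in non-overlapping windows of `window` bp."""
    s_ta, s_gc = [], []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window].upper()
        a, t, g, c = (w.count(b) for b in "ATGC")
        s_ta.append(100.0 * (t - a) / (t + a) if t + a else 0.0)
        s_gc.append(100.0 * (g - c) / (g + c) if g + c else 0.0)
    return np.array(s_ta), np.array(s_gc)

def cumulated_skews(seq, window=1000):
    """Cumulative TA, GC and total skew profiles along the sequence."""
    s_ta, s_gc = window_skews(seq, window)
    return np.cumsum(s_ta), np.cumsum(s_gc), np.cumsum(s_ta + s_gc)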
In order to delineate the position corresponding to the jumps in the skew S at smaller scale, we then progressively decreased the size of the analyzing window and followed the positions of the extrema of S′ across the whole range of scales down to the shortest scale analyzed (the precision was limited by the noisy background fluctuations in the skew profile). As expected, the set of extrema detected by this methodology corresponded to similar numbers of upward and downward jumps. The putative replication origins were then selected among the set of upward jumps on the basis of their ΔS amplitude (see text). RESULTS AND DISCUSSION Strand asymmetries associated with replication. We examined the nucleotide strand asymmetries around 9 replication origins experimentally-determined in the human genome (Data and Methods). For most of them, the S skew measured in the regions situated 5' to the origins on the Watson strand (lagging strand) presented negative values that shifted abruptly (over a few kbp) to positive values in regions situated 3' to the origins (leading strand), displaying sharp upward transitions with large ΔS amplitudes as observed in bacterial genomes (7-9) (Fig. 1a). This was particularly clear with the cumulated TA and GC skews that presented decreasing (increasing) profiles in regions situated 5' (3') to the origins, displaying characteristic V-shapes pointing to the initiation zones. These profiles could, at least in part, result from transcription, as shown in previous work (6). To measure compositional asymmetries that would result only from replication, we calculated the skews in intergenic regions on both sides of the origins. The mean intergenic skews shifted from negative to positive values when crossing the origins (Fig. 2). This result strongly suggested the existence of mutational pressure associated with replication, leading to the mean compositional biases S_TA = 4.0 ± 0.4% and S_GC = 3.0 ± 0.5% (note that the value of the skew could vary from one origin to another, possibly reflecting different initiation efficiencies) (Table 1). In transcribed regions, the S bias presented large values when transcription was co-oriented with replication fork progression ((+) genes on the right, (-) genes on the left), and close to zero values in the opposite situation (Fig. 2). In these regions, the biases associated with transcription and replication added to each other when transcription was co-oriented with replication fork progression, giving the skew S_Lead; they subtracted from each other in the opposite situation, giving the skew S_lag (Table 1). We could estimate the mean skews associated with transcription by subtracting intergenic skews from S_Lead values, giving S_TA = 3.6 ± 0.7% and S_GC = 3.8 ± 0.9%. These estimations were consistent with those obtained with a large set of human introns (S_TA = 4.49 ± 0.01% and S_GC = 3.29 ± 0.01% in ref. (6)), further supporting the existence of replication-coupled strand asymmetries. Table 1. Strand asymmetries associated with human replication origins. The skews were calculated in the regions flanking the six human replication origins (Fig. 1a) and in the corresponding syntenic regions of the mouse genome. Intergenic sequences were always considered in the direction of replication fork progression (leading strand); they were considered in totality (all) or after elimination of conserved regions (ncr.) between human (H.s.) and mouse (M.m.) (see Data and Methods).
To calculate the mean skew in introns, the sequences were considered on the non-transcribed strand: S_Lead, the orientation of transcription was the same as the replication fork progression; S_lag, opposite situation. The mean values of the skews S_TA, S_GC and S are given in % (± SEM); l, total sequence length in kbp. Could the biases observed in intergenic regions result from the presence of as yet undetected genes? Two reasons argued against this possibility. First, we retained as transcribed regions one of the largest sets of transcripts available, resulting in a stringent definition of intergenic regions. Second, several studies have demonstrated the existence of hitherto unknown transcripts in regions where no protein coding genes had been previously identified (25)[START_REF] Chen | Proc. Natl. Acad. Sci[END_REF][START_REF] Rinn | [END_REF](28). Taking advantage of the set of non-protein-coding RNAs identified in the "H-Inv" database (29), we checked that none of them was present in the intergenic regions studied here. Another possibility was that the skews observed in intergenic regions result from conserved DNA segments. Indeed, comparative analyses have shown the presence of nongenic sequences conserved in human and mouse (30). These could present biased sequences, possibly contributing to the observed intergenic skews. We examined the mouse genome regions syntenic to the six human replication zones (Fig. 1b). Alignment of corresponding intergenic regions revealed the presence of homologous segments, but these accounted for only 5.3% of all intergenic sequences. Removal of these segments did not significantly change the skew in intergenic regions, therefore eliminating the possibility that intergenic skews are due to conserved sequence elements (Table 1). Fig. 2. Skew S in regions situated on both sides of human replication origins. The mean values of S were calculated in intergenic regions and in intronic regions situated 5' (left) and 3' (right) of the six origins analyzed in Fig. 1a; colors are as in Fig. 1; mean values are in percent ± SEM. Conservation of replication-coupled strand asymmetries in mammalian genomes. We analyzed the skew profiles in DNA regions of mammalian genomes syntenic to the six human origins (Fig. 1). The human, mouse and dog profiles were strikingly similar to each other, suggesting that in mouse and dog, these regions also corresponded to replication initiation zones (indeed, they were very similar in primate genomes). Examination of mouse intergenic regions showed, as for human, significant skew S values with opposite signs on each side of these putative origins, suggesting the existence of a compositional bias associated with replication (S = 5.8 ± 0.5%) (Table 1). Human and mouse intergenic sequences situated at these homologous loci presented significant skews, even though they presented almost no conserved sequence elements.
This presence of strand asymmetry in regions that strongly diverged from each other during evolution further supported the existence of compositional bias associated with replication in both organisms: in the absence of such process, intergenic sequences would have lost a significant fraction of their strand asymmetry. Altogether, these results establish, in mammals, the existence of strand asymmetries associated with replication in germ-line cells. They determine that most replication origins experimentally-detected in somatic cells coincide with sharp upward transitions of the skew profiles. The results also imply that for the majority of experimentally-determined origins, the positions of initiation zones are conserved in mammalian genomes (a recent work confirmed the presence of a replication origin in the mouse MYC locus (31)). Among nine human origins examined, three do not present typical V-type cumulated profiles. For the first one (DNMT1), the central part of the V-profile is replaced by a large horizontal plateau (several tens of kbp) possibly reflecting the presence of several origins dispersed over the whole plateau. Dispersed origins have been observed for example in the hamster DHFR initiation zone (32). By contrast, the skew profiles of the LaminB2 and ß-globin origins present no upward transition suggesting that they might be inactive in germ-line cells, or less active than neighboring origins (data not shown). Detection of putative replication origins. Human experimentally-determined replication origins coincided with large amplitude upward transitions of skew profiles. The corresponding ΔS ranged between 14% and 38% owing to possible different replication initiation efficiencies and/or different contributions of transcriptional biases (Fig. 1a). Are such discontinuities frequent in human sequences, and can they be considered as diagnostic of replication initiation zones? In particular, can they be distinguished from the transitions associated with transcription only? Indeed, strand asymmetries associated with transcription can generate sharp transitions in the skew profile at both gene extremities. These jumps are of same amplitude and of opposite signs, e.g. upward (downward) jumps at 5' (3') extremities of (+) genes (6). Upward jumps resulting from transcription only, might thus be confused with upward jumps associated with replication origins. To address these questions, systematic detection of discontinuities in the S profile was performed with the wavelet transform methodology, leading to a set of 2415 upward jumps and, as expected, to a similar number of downward jumps (see Data and Methods). The distributions of the ΔS amplitude of these jumps were then examined, showing strong differences between upward and downward jumps. For large ΔS values, the number of upward jumps exceeded by far the number of downward jumps (Fig. 3). This excess likely resulted from the fact that, contrasting with prokaryotes where downward jumps result from precisely positioned replication termination, in eukaryotes, termination appears not to occur at specific positions but to be randomly distributed (this point will be detailed in the last section) (33,34). Accordingly, the small number of downward jumps with large ΔS resulted from transcription, not replication. These jumps were due to highly biased genes that also generated a small number of large amplitude upward jumps, giving rise to false positive candidate replication origins. 
The number of large downward jumps was thus taken as an estimation of the number of false positives. In a first step, we retained as acceptable a proportion of 33% of false positives. This value resulted from the selection of upward and downward jumps presenting an amplitude ΔS ≥ 12.5%, corresponding to a ratio of downward jumps over upward jumps r = 0.33. The values of this ratio r were highly variable along the chromosomes (Fig. 3). In G+C-poor regions (G+C < 37%) we observed the smallest r values (r = 0.15). In regions with 37% ≤ G+C ≤ 42%, we obtained r = 0.24, contrasting with r = 0.53 in regions with G+C > 42%. In these latter regions (accounting for about 40% of the genome) with high gene density and small gene length (24), the skew profiles oscillated rapidly with large upward and downward amplitudes (Fig. 5d), resulting in too large a number of false positives (53%). In a final step, we retained as putative origins upward jumps (with ΔS ≥ 12.5%) detected in regions with G+C ≤ 42%. This led to a set of 1012 candidates among which we could estimate the proportion of true replication origins at 79% (r = 0.21, Fig. 3a). The mean amplitude of the jumps associated with the 1012 putative origins was 18%, consistent with the range of values observed for the six origins in Fig. 1. Note that these origins were all found in the detection process. In close vicinity of the 1012 putative origins (± 20 kbp), most DNA sequences (55% of the analyzing windows) are transcribed in the same direction as the progression of the replication fork. By contrast, only 7% of sequences are transcribed in the opposite direction (38% are intergenic). These results show that the ΔS amplitude at putative origins mostly results from superposition of biases (i) associated with replication and (ii) with transcription of the gene proximal to the origin. Whether transcription is co-oriented with replication at larger distances will require further studies. We then determined the skews of intergenic regions on both sides of these putative origins. As shown in Fig. 4, the mean skew profile calculated in intergenic windows shifts abruptly from negative to positive values when crossing the jump positions. To avoid the skews that could result from incompletely annotated gene extremities (e.g. 5' and 3' UTRs), 10 kbp sequences were removed at both ends of all annotated transcripts. The removal of these intergenic sequences did not significantly modify the skew profiles, indicating that the observed values do not result from transcription. On both sides of the jump, we observed a steady decrease of the bias, with some flattening of the profile close to the transition point. Note that, due to (i) the potential presence of signals implicated in replication initiation, and (ii) the possible existence of dispersed origins (32), one might question the meaningfulness of this flattening that leads to a significant underestimate of the jump amplitude. As shown in Fig. 4, extrapolating the linear behavior observed at a distance from the jump would lead to a skew of 5.3%, a value consistent with the skew measured in intergenic regions around the six origins (7.0 ± 0.5%, Table 1). Overall, the detection of upward jumps with characteristics similar to those of experimentally-determined replication origins and with no downward counterpart further supports the existence, in human chromosomes, of replication-coupled strand asymmetries, leading to the identification of numerous putative replication origins active in germ-line cells. Fig. 3.
Histograms of the ΔS amplitudes of the jumps in the S profile. Using the wavelet transform, a set of 5101 discontinuities was detected (2415 upward jumps and 2686 downward jumps, Data and Methods). The ΔS amplitude was calculated as in Fig. 1a. (a) ΔS distributions of the jumps presenting G+C < 42%, corresponding to 1647 upward jumps and 1755 downward jumps; the threshold ΔS ≥ 12.5% (vertical line) corresponded to 1012 upward jumps that were retained as putative replication origins, and to 211 downward jumps (r = 0.21). (b) ΔS distributions of the jumps presenting G+C > 42%, ΔS ≥ 12.5% corresponding to 528 upward jumps and 280 downward jumps (r = 0.53). The G+C content was measured in the 100 kbp window surrounding the jump position. Upward jumps (black); downward jumps (dots); abscissa represents the values of the ΔS amplitudes calculated in percent. Random replication termination in mammalian cells. In bacterial genomes, the skew profiles present upward and downward jumps at origin and termination positions, respectively, separated by constant S values (7-9). Contrasting with this step-like shape, the S profiles of intergenic regions surrounding putative origins did not present downward transitions, but decreased progressively in the 5' to 3' direction on both sides of the upward jump (Fig. 4). This pattern was typically found along S profiles of large genome regions showing sharp upward jumps connected to each other by segments of steadily decreasing skew (Fig. 5 a-c). The succession of these segments, presenting variable lengths, displayed a jagged motif reminiscent of the shape of "factory roofs" which was observed around the experimentally-determined human origins (Fig. 5a and data not shown), as well as around a number of putative origins (Fig. 5 b,c). Some of these segments were entirely intergenic (Fig. 5 a,c), clearly illustrating the particular profile of a strand bias resulting solely from replication. In most other cases, we observed the superposition of this replication profile and of the transcription profile of (+) and (-) genes, appearing as upward and downward blocks standing out from the replication pattern (Fig. 5c). Overall, this jagged pattern could not be explained by transcription only, but was perfectly explained by termination sites more or less homogeneously distributed between successive origins. Although some replication terminations have been found at specific sites in S. pombe (35), they occur randomly between active origins in S. cerevisiae and in Xenopus egg extracts (33,34). Our results indicate that this property can be extended to replication in human germ-line cells. According to our results, we propose a scenario of replication termination relying on the existence of numerous termination sites distributed along the sequence (Fig. 6). For each termination site (used in a small proportion of cell cycles), strand asymmetries associated with replication will generate a skew profile with a downward jump at the position of termination and upward jumps at the positions of the adjacent origins, separated by constant values (as in bacteria). Various termination positions will correspond to elementary skew profiles (Fig. 6, first column). Addition of these profiles will generate the intermediate profile (second column) and further addition of many elementary skews will generate the final profile (third column). In a simple picture, we can suppose that termination occurs with constant probability at any position on the sequence. 
This can result from the binding of some termination factor at any position between successive origins, leading to a homogeneous distribution of termination sites during successive cell cycles. The final skew profile is then a linear segment decreasing between successive origins (Fig. 6, third column, black line). In a more elaborate scenario, termination would take place when two replication forks collide. This would also lead to various termination sites, but the probability of termination would then be maximum at the middle of the segment separating neighboring origins, and decrease towards extremities. The fact that firing of replication origins occurs during time intervals of the S phase [START_REF] White | Proc. Natl. Acad. Sci. USA[END_REF] could result in some flattening of the skew profile at the origins, as sketched in Fig. 6 (third column, grey curve). In the present state, our results clearly support the hypothesis of random replication termination in mammalian cells, but further analyses will be necessary to determine what scenario is precisely at work. Importantly, the "factory roof" pattern was not specific to human sequences, but it was also observed in numerous regions of the mouse and dog genomes (e.g. Fig. 5 e,f), indicating that random replication termination is a common feature of mammalian germ-line cells. Moreover, this pattern was displayed by a set of one thousand upward transitions, each flanked on each side by DNA segments of approximately 300 kbp (without repeats), which can be roughly estimated to correspond to 20-30% of the human genome. In these regions, characterized by low and medium G+C contents, the skew profiles revealed a portrait of germ-line replication, consisting of putative origins separated by long DNA segments of about 1-2 Mbp. Although such segments are much larger than could be expected from the classical view of ≈ 50-300 kbp long replicons (37), they are not incompatible with estimations showing that replicon size can reach up to 1 Mbp (38,39) and that replicating units in meiotic chromosomes are much longer than those engaged in somatic cells [START_REF] Callan | Proc. R. Soc. Lond[END_REF]. Finally, it is not unlikely that in G+C-rich (gene-rich) regions, replication origins would be closer to each other than in other regions, further explaining the greater difficulty in detecting origins in these regions. In conclusion, analyses of strand asymmetries demonstrate the existence of mutational pressure acting asymmetrically on the leading and lagging strands during successive replicative cycles of mammalian germ-line cells. Analyses of the sequences of human replication origins show that most of these origins, determined experimentally in somatic cells, are likely to be active also in germ-line cells. In addition, the results reveal that the positions of these origins are conserved in mammalian genomes. Finally, multi-scale studies of skew profiles allow us to identify a large number (1012) of putative replication initiation zones and provide a genome-wide picture of replication initiation and termination in germ-line cells. Fig. 1. TA and GC skew profiles around experimentally-determined human replication origins.
(a) The skew profiles were determined in 1 kbp windows in regions surrounding (± 100 kbp without repeats) experimentally-determined human replication origins (Data and Methods). First row, TA and GC cumulated skew profiles Σ_TA (thick line) and Σ_GC (thin line). Second row, skew S calculated in the same regions. The ΔS amplitudes associated with these origins, calculated as the difference of the skews measured in 20 kbp windows on both sides of the origins, are: MCM4 (31%), HSPA4 (29%), TOP1 (18%), MYC (14%), SCA7 (38%), AR (14%). (b) Cumulated skew profiles calculated in the 6 regions of the mouse genome syntenic to the human regions figured in (a). (c) Cumulated skew profiles in the 6 regions of the dog genome syntenic to human regions figured in (a). Abscissa (x) represents the distance (kbp) of a sequence window to the corresponding origin; ordinate represents the values of S given in percent; red, (+) genes (coding strand identical to the Watson strand); blue, (-) genes (coding strand opposite to the Watson strand); black, intergenic regions; in (c) genes are not represented. Fig. 4. Mean skew profile of intergenic regions around putative replication origins. The skew S was calculated in 1 kbp windows (Watson strand) around the position (± 300 kbp without repeats) of the 1012 upward jumps (Fig. 3); 5' and 3' transcript extremities were extended by 0.5 and 2 kbp, respectively (full circles) or by 10 kbp at both ends (stars) (Data and Methods). Abscissa represents the distance (kbp) to the corresponding origin; ordinate represents the skews calculated for the windows situated in intergenic regions (mean values for all discontinuities and for ten consecutive 1 kbp window positions); the skews are given in percent (vertical bars, SEM). The lines correspond to linear fits of the values of the skew (stars) for x < -100 kbp and x > 100 kbp. Fig. 5. S profiles along mammalian genome fragments. (a) Fragment of chr. 20 including the TOP1 origin (red vertical line); (b), (c), chr. 4 and chr. 9 fragments, respectively, with low G+C content (36%); (d) chr. 22 fragment with larger G+C content (48%). In (a) and (b), vertical lines correspond to selected putative origins; yellow lines, linear fits of the S values between successive putative origins. Black, intergenic regions; red, (+) genes; blue, (-) genes; note the fully intergenic regions upstream of TOP1 in (a) and from positions 5290 to 6850 kbp in (c). (e) Fragment of mouse chr. 4 syntenic to the human fragment shown in (c); (f) fragment of dog chr. 5 syntenic to the human fragment shown in (c); in (e) and (f), genes are not represented. Fig. 6. Model of replication termination. Schematic representation of the skew profiles associated with three replication origins O_1, O_2, O_3; we suppose that these are adjacent, bidirectional origins with similar replication efficiency; abscissa represents the sequence position; ordinate represents the S values (arbitrary units); upward (downward) steps correspond to origin (termination) positions; for convenience the termination sites are symmetric relative to O_2. First column, three different termination positions T_i, T_j, T_k, leading to elementary skew profiles S_i, S_j, S_k; second column, superposition of these 3 profiles; third column, superposition of a large number of elementary profiles leading to the final "factory roof" pattern.
Simple model: termination occurs with equal probability on both sides of the origins, leading to the linear profile (3rd column, thick line). Alternative model: replication termination is more likely to occur at lower rates close to the origins, leading to a flattening of the profile (3rd column, grey line). Figure S1: (top) skew profile of a fragment of human chromosome 12; (middle) WT of S, coded from black (min) to red (max); three cuts of the WT at constant scale a = a* = 200 kbp, 70 kbp and 20 kbp are superimposed together with five maxima lines identified as pointing to upward jumps in the skew profile; (bottom) WT skeleton defined by the maxima lines in black (resp. red) when corresponding to positive (resp. negative) values of the WT. Acknowledgements This work was supported by the ACI IMPBIO 2004, the Centre National de la Recherche Scientifique (CNRS), the French Ministère de l'Education et de la Recherche and the PAI Tournesol. We thank O. Hyrien for very helpful discussions. Supplementary material Detection of jumps in skew profiles using the continuous wavelet transform. For effective detection of jumps or discontinuities, the simple intuitive idea is that these jumps are points of strong variation in the signal that can be detected as maxima of the modulus of the (regularized) first derivative of the signal. In order to avoid confusion between "true" maxima of the modulus and maxima induced by the presence of a noisy background, the rate of signal variation has to be estimated using a sufficiently large number of signal samples. This can be achieved using the continuous wavelet transform (WT) that provides a powerful framework for the estimation of signal variations over different length scales. The WT is a space-scale analysis which consists in expanding signals in terms of wavelets that are constructed from a single function, the analyzing wavelet, by means of dilations and translations [START_REF] Arneodo | The Science of Disaster[END_REF][START_REF] Nicolay | [END_REF]. When the analyzing wavelet is chosen as the first derivative of the Gaussian function, the WT of the skew profile S is a function of x and a (> 0), the space and scale parameters, respectively (equation (1)). Equation (1) shows that the WT computed with this wavelet is the derivative of the signal S smoothened by a dilated version of the Gaussian function. This property is at the heart of various applications of the WT microscope as a very efficient multi-scale singularity tracking technique [START_REF] Arneodo | The Science of Disaster[END_REF][START_REF] Nicolay | [END_REF]. The basic principle of the detection of jumps in the skew profiles with the WT is illustrated in Figure S1. From equation (1), it is obvious that at any fixed scale a, a large value of the modulus of the WT coefficient corresponds to a large value of the derivative of the skew profile smoothened at that scale. In particular, jumps manifest as local maxima of the WT modulus as illustrated for three different scales in Figure S1 (middle). The main issue when dealing with noisy signals like the skew profile in Figure S1 (top) is to distinguish between the local WT modulus maxima (WTMM) associated with the jumps and those induced by the noise. In this respect, the freedom in the choice of the smoothing scale a is fundamental, since the noise amplitude is reduced when increasing the smoothing scale, while an isolated jump contributes equally at all scales. As shown in Figure S1 (bottom), our methodology consists in computing the WT skeleton defined by the set of maxima lines obtained by connecting the WTMM across scales.
Then, we select a scale a large enough to reduce the effect of the noise, yet small enough to take into account the typical distance between jumps. The maxima lines that exist at that scale are likely to point to jump positions at small scale. The detected jump locations are estimated as the positions at scale 20 kbp of the so-selected maxima lines. According to equation (1), upward (resp. downward) jumps are identified by the maxima lines corresponding to positive (resp. negative) values of the WT as illustrated in Figure S1 (bottom) by the black (resp. red) lines. For the considered fragment of human chromosome 12, we have thus identified 7 upward and 8 downward jumps. The amplitude of the WTMM actually measures the relative importance of the jumps compared to the overall signal. The black dots in Figure S1 (middle) correspond to the 5 WTMM of largest amplitude ( ΔS ≥ 12.5%); it is clear that the associated maxima lines point to the 5 major jumps in the skew profile. Note that these are 5 upward jumps with no downward counterpart and that they have been reported as 5 putative replication origins.
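To make the detection procedure just described more tangible, here is a rough Python sketch (not the authors' pipeline). A Gaussian-derivative smoothing at a fixed large scale stands in for the WT computed with the first derivative of a Gaussian, a single refinement at a small scale replaces the chaining of maxima lines across scales, and the ΔS ≥ 12.5% and G+C ≤ 42% selection criteria are taken from the Results; the function name, the edge handling and the way extrema are re-located are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter1d, uniform_filter1d

def detect_putative_origins(s, gc, large_scale=200, small_scale=20, ds_min=12.5, gc_max=0.42):
    """s: skew S per 1 kbp window (numpy array, percent); gc: G+C fraction per window.
    Scales are given in numbers of 1 kbp windows (200 kbp and 20 kbp)."""
    # Smoothed derivative of S at the large scale, analogous to the WT of S at a = 200 kbp.
    d_large = gaussian_filter1d(s, sigma=large_scale, order=1)
    # Candidate jump positions: local extrema of the smoothed derivative.
    candidates = [i for i in range(1, len(s) - 1)
                  if (d_large[i] - d_large[i - 1]) * (d_large[i + 1] - d_large[i]) <= 0]
    # Crude refinement at the small scale (stand-in for following maxima lines across scales).
    d_small = gaussian_filter1d(s, sigma=small_scale, order=1)
    gc_local = uniform_filter1d(gc, size=100)  # G+C in a ~100 kbp window around each position
    origins = []
    for i in candidates:
        lo, hi = max(1, i - large_scale), min(len(s) - 1, i + large_scale)
        j = lo + int(np.argmax(np.abs(d_small[lo:hi])))
        if j < small_scale or j > len(s) - small_scale:
            continue  # too close to the sequence ends to measure the amplitude
        # Jump amplitude: difference of mean skews in 20 kbp windows on each side of the jump.
        delta_s = s[j:j + small_scale].mean() - s[j - small_scale:j].mean()
        # Keep upward jumps with a large amplitude in G+C-poor to G+C-medium regions.
        if delta_s >= ds_min and gc_local[j] <= gc_max:
            origins.append((j, delta_s))
    return sorted(set(origins))

Note that gaussian_filter1d with order=1 convolves the signal with the first derivative of a Gaussian, which corresponds, up to normalization, to the smoothed-derivative interpretation of equation (1).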
35,572
[ "178489", "13923", "757417" ]
[ "433", "13", "19271", "13", "19271", "433", "13", "19271", "433" ]
01755329
en
[ "sdv", "sdu" ]
2024/03/05 22:32:10
2017
https://hal.sorbonne-universite.fr/hal-01755329/file/K%C3%BChl%20Lourenco%202017_sans%20marque.pdf
Gabriele Kühl Wilson R. Lourenço email: wilson.lourenco@mnhn.fr A new genus and species of fossil scorpion (?Euscorpiidae) from the Early-Middle Eocene of Pesciara (Bolca, Italy) Keywords: Pesciara of Bolca, Euscorpiidae, Lower Eocene, Scorpions Fossil scorpions are among the oldest terrestrial arthropods known from the fossil record. They have a worldwide distribution and a rich fossil record, especially for the Paleozoic. Fossil scorpions from Mesozoic and Cenozoic deposits are usually rare (except in amber deposits). Here, we describe the only fossil scorpion from the Early to Middle Eocene Pesciara Lagerstätte in Italy. Eoeuscorpius ceratoi gen. et sp. nov. is probably a genus and species within the family Euscorpiidae. This may be the first fossil record of the Euscorpiidae, which are so far only known from four extant genera. Eoeuscorpius ceratoi gen. et sp. nov. was found in the ''Lower Part'' of the Pesciara Limestone, which is currently dated to the Late Ypresian stage (between 49.5 and 49.7 Ma). Besides a possible pseudoscorpion, the here-described fossil scorpion is the second arachnid species known from the Bolca Locality. Introduction The Bolca Fossil Lagerstätte is a world-famous Lagerstätte for exceptionally preserved fossils from the Eocene. Several hundred plant and animal species have been described from here. Among the animal species, vertebrates (fishes) are dominant and known worldwide. Invertebrates, such as the scorpion described herein, belong to the so-called minor fauna, maybe less famous but no less spectacular. Geological and paleobiological background and taphonomy The Bolca region is located in the eastern part of the Lessini Mountains, which are within the Southern Alps in Northern Italy [START_REF] Papazzoni | The Pesciara-Monte Postale Fossil-Lagerstätte: 1. Biostratigraphy, sedimentology and depositional model[END_REF]. The limestone developed during the Eocene in two phases of uplift of the Tethys Ocean. It is surrounded by volcanic ash, and the limestone deposits are about 19 m thick [START_REF] Tang | Monte bolca: an eocene fishbowl[END_REF]. The limestone (Lessini Shelf) is especially recognized for a rich fossil record of fishes, which are known throughout the world for their excellent preservation. The Lessini shelf is surrounded by deep marine basins and restricted northwards by terrestrial deposits [START_REF] Papazzoni | The Pesciara-Monte Postale Fossil-Lagerstätte: 1. Biostratigraphy, sedimentology and depositional model[END_REF]. The fossil scorpion described herein ([START_REF] Cerato | Cerato. I pescatori del Tempo. San Giovanni Ilarione[END_REF]; Fig. 4a) was discovered in the Pesciara Fossil-Lagerstätte, which contains marine life forms as well as terrestrial organisms [START_REF] Guisberti | The Pesciara-Monte Postale Fossil-Lagerstätte: 4. The ''minor fauna'' of the laminites[END_REF]. The Pesciara Limestone was deposited during the Ypresian stage (Lower Eocene), roughly 49.5 Ma ago. The fossils are found in fine-grained, laminated limestones that were deposited between coarser storm-induced limestone layers [START_REF] Schwark | Organic geochemistry and paleoenvironment of the Early Eocene ''Pesciara di Bolca'' Konservat-Lagerstätte, Italy[END_REF].
The paleoenvironment of the Bolca area is generally regarded as rich in variety. Several ecosystems, ranging from pelagic and shallow marine habitats to brackish, fluvial and terrestrial habitats, are evidenced by the deposits. Due to the Eocene climatic conditions, temperatures were tropical to subtropical. The deposits generally attest to formerly favourable living conditions [START_REF] Tang | Monte bolca: an eocene fishbowl[END_REF], allowing a high degree of biodiversity. However, temporarily anoxic and euxinic conditions are inferred from the absence of bottom dwellers [START_REF] Papazzoni | The Pesciara-Monte Postale Fossil-Lagerstätte: 1. Biostratigraphy, sedimentology and depositional model[END_REF]. These events may have led to extinction events that are responsible for the rich fossil fauna. First, a diverse and abundant fish fauna is represented. The rich fish fauna is interpreted as having lived close to coral reefs [START_REF] Papazzoni | The Pesciara-Monte Postale Fossil-Lagerstätte: 1. Biostratigraphy, sedimentology and depositional model[END_REF]. Other vertebrates are represented by two snake specimens, a turtle and several bird remains (for a review, see [START_REF] Carnevale | The Pesciara-Monte Postale Fossil-Lagerstätte: 2. Fishes and other vertebrates[END_REF]); plants are also very abundant, with more than 105 macrofloral genera being described [START_REF] Wilde | The Pesciara-Monte Postale Fossil-Lagerstätte: 3. Flora. In The Bolca Fossil-Lagerstätten: A window into the Eocene World[END_REF]. The fossil fauna and flora are two-dimensionally preserved and frequently fully articulated. Soft part preservation with organs and cuticles is common among the fossils. Even color preservation has been reported. Microbial fabrics may be related to the fossilization of soft tissues [START_REF] Briggs | The role of experiments in investigating the taphonomy of exceptional preservation[END_REF]. However, the taphonomic process is currently not well understood. Arthropods from Pesciara of Bolca The scorpion described here (Fig. 1) belongs to the so-called minor fauna of the Pesciara-Lagerstätte, which comprises arthropods, polychaete worms, jellyfishes, mollusks, brachiopods and bryozoans. Among arthropods, insects and crustaceans are most abundant [START_REF] Guisberti | The Pesciara-Monte Postale Fossil-Lagerstätte: 4. The ''minor fauna'' of the laminites[END_REF]. Arachnids, such as the scorpion Eoeuscorpius ceratoi gen. et sp. nov., are known from only two fossils. One is the scorpion described here. The other arachnid is a possible pseudoscorpion from this Lagerstätte [START_REF] Guisberti | The Pesciara-Monte Postale Fossil-Lagerstätte: 4. The ''minor fauna'' of the laminites[END_REF]. In general, the order Scorpiones goes back to the mid-Silurian of Scotland (see Dunlop 2010 for review). Scorpions from the early Paleozoic generally differ from subsequent groups by a simple coxo-sternal region and the lack of trichobothria, as the latter structures developed as a consequence of terrestrialization. Beginning with the Devonian, coxapophyses and a stomotheca developed in scorpions, which were then very abundant during the Carboniferous [START_REF] Dunlop | Geological history and phylogeny of Chelicerata[END_REF]. Currently, 136 valid fossil scorpion species have been described, most of them from Paleozoic fossil sites.
Mesozoic and Tertiary fossils are comparatively rare, though amber seems to be a good resource for fossil scorpions [START_REF] Dunlop | How many species of fossil arachnids are there?[END_REF][START_REF] Dunlop | A summary list of fossil spiders and their relatives[END_REF][START_REF] Lourenço | A synopsis on amber scorpions, with special reference to the Baltic fauna[END_REF][START_REF] Lourenço | A new species of scorpion from Chiapas amber, Mexico (Scorpiones: Buthidae)[END_REF][START_REF] Lourenço | A Synopsis on amber scorpions with special reference to Burmite species; an extraordinary development of our knowledge in only 20 years[END_REF]. Materials and methods The description of the Pesciara scorpion is based on a single specimen, which is the property of Mr. Massimo Cerato in Bolca di Vestenanova and was kindly loaned to us for scientific description. The scorpion, collection number CMC1, is currently stored at the Museum of the Cerato Family (via San Giovanni Battista, 50-37030 Bolca di Vestenanova-Verona). The scorpion was x-rayed with a Phoenix v|tome|x s Micro Tomograph, but this provided no additional information. The scorpion was also photographed using a Nikon D3x. Details were photographed with a Keyence Digital Microscope. A Leica MZ95 was used to produce fluorescent images. Image editing was carried out with Adobe Photoshop CS6 in addition to Adobe Illustrator CS6. Measurements were done with the help of ImageJ. Each structure was measured in length and width [mm], except some structures that were regarded as highly fragmentary. Length and width were measured along the middle of each segment (see Table 1). Systematic paleontology Phylum Arthropoda von Siebold, 1848 Class Arachnida Lamarck, 1801 Order Scorpionida Koch, 1837 Family ?Euscorpiidae Laurie, 1896 Genus Eoeuscorpius gen. nov. Diagnosis of the genus. Scorpion of small size with a total length of 38 mm. General morphology very similar to that of most extant species of the genus Euscorpius Thorell, 1876; however, both body and pedipalps are more bulky and less flattened. Carapace with a strong anterior emargination. Trichobothrial pattern, most certainly of the type C [START_REF] Vachon | Étude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides). 1. La trichobothriotaxie en arachnologie. Sigles trichobothriaux et types de trichobothriotaxie chez les Scorpions[END_REF]; a number of bothria can be observed: 3 on femur, internal, dorsal and external, 1 dorsal d1, 1 internal and a few external on patella; the internal is partially displaced; 4-6 on dorso-external aspect of chela hand and 5 on chela fixed finger. Diagnosis of the species. As for the genus. Etymology. The name honors Mr. Massimo Cerato, Bolca di Vestenanova, Italy, who allowed us to study the specimen. Description. The scorpion is exposed from its dorsal side, nearly completely preserved and in an excellent state of preservation (Fig. 1a), although the distal segments of the walking legs are not well preserved. The cuticle has a brownish coloration and therefore optically stands out against the paler sediment. The cuticle surface is not completely preserved, as smaller areas of usually less than 1 mm are missing. The cuticle surface of the mesosomal tergites is to some extent transparent, allowing a view of the ventral cuticle. The median eyes are not preserved. In that area the sediment matrix is exposed, indicating the former position of the eyes and eye tubercle.
The right carapace half is incomplete but can be reconstructed from the left half. On the cuticle surface, especially of the pedipalps, numerous insertion points of bothria and setae are preserved (Fig. 2a,b). The body of the fossil scorpion measures ca. 38 mm in length and ca. 20 mm in width (Fig. 1b). Chelicerae are ca. 2 mm long and 1.3 mm wide. With a width of 4.6 mm and a length of ca. 7 mm, the chela-bearing segments of the pedipalps are very strong, and they are the optically dominant structures of the scorpion. The carapace is 4.8 mm long, 2.1 mm wide anteriorly and 4.8 mm wide posteriorly. In the middle region, the mesosoma is 6.6 mm wide on average. These measurements give an impression of the scorpion size. More detailed measurements are listed in Table 1. The carapace of the scorpion is sub-quadrangular with a strong convex emargination at the anterior margin. The cuticle of the anterior region is darker than the rest, which may be because of the carapace thickness. Lateral eyes would at best be preserved on the left carapace half, but their number cannot be determined with certainty. As mentioned before, the median eyes are not preserved, but their former position is marked by a heart-like notch on the carapace surface. The median eyes are situated in the middle of the carapace. Posteriorly, a deep furrow separates the carapace into two halves. Anterior to the frontal carapace margin, a small (1.3-mm-long and 0.8-mm-wide) laminated structure is exposed (Fig. 1b shaded area, Fig. 3). About 36 laminae are within the oval structure, with an average thickness of 0.02 mm. This structure is interpreted as exposed muscle fibrils. Although the chelicerae are normally best seen from the ventral side, in the specimen described herein the tibia and tarsus of the chelicerae are also exposed dorsally (Fig. 1a,b). As in most scorpions, the chelicerae are very small (2 mm long/1.3 mm wide for the complete structure). Parts of the more basal segments (e.g., the coxa) are also preserved, but unfortunately not in detail. The tibia is produced into the fixed finger. The inner ventral margin is equipped with a bristlecomb (Fig. 4a,b). On both the fixed and moveable finger (tibia and tarsus), tooth denticles are preserved. Roughly, the teeth can be distinguished into distal, subdistal and median teeth [START_REF] Vachon | De l'utilité, en systématique, d'une nomenclature des dents des chélicères chez les Scorpions[END_REF][START_REF] Sissom | Systematics, biogeography and paleontology[END_REF]. Again, the preservation is not detailed enough for further descriptions. The pedipalps of the fossil are comparably short and stout. Especially the patella is considerably large. It has a roundish shape and, on the cuticle surface, three carinae from anterior to posterior. The two carinae on the right side form a V as they meet at the posterior region of the patella (Fig. 2a,b). On the cuticle surface of the patella, several insertion points show the former position of the bothria and other setae. (Fig. 3 Laminated structure on the scorpion surface above the carapace margin. Possible remains of muscle fibrils. Scale 0.5 mm.) These structures cannot be fully observed. However, some of the insertion points are larger than others. These larger insertion points (filled circles in Fig. 2b) are most certainly the remains of bothria. The trichobothrial pattern can be observed on the dorsal side of the pedipalps. Three bothria (internal, dorsal, external) are preserved on the femur.
On the patella, one dorsal and one internal bothrium are preserved, as well as four external bothria. The internal bothrium is partially displaced; four to six bothria are preserved on the dorso-external aspect of the chela hand, and five bothria are on the chela fixed finger. Though incompletely preserved, the overall impression of the legs is that they were slender compared to the robust pedipalps. The tibia of the walking legs is broader than the preceding and following leg segments. The basitarsus and tarsus are shorter than the more basal segments. Tarsal claws are not preserved. The mesosoma consists of seven segments, covered by mesosomal tergites. Tergites are rectangular in shape, with rounded edges. A carina proceeds close beneath the anterior margin of each mesosomal tergite, except on tergite VII (Fig. 1a,b). The width of the dorsal mesosomal tergites varies from 5.8 to 6.5 mm from tergite I to tergite VII. There is more variation in the length (measured from anterior to posterior). The first and second mesosomal tergites are only about 0.9 mm long; the third measures 1.4 mm. The fourth mesosomal tergite is about 1.6 mm long. The following mesosomal tergites show similar lengths of 2.3 and 2.4 mm. The seventh mesosomal tergite differs from the others in being trapezoid. It is roughly 3 mm long. The anterior part is 5 mm wide, whereas the posterior part measures only 3.3 mm. Two additional ventral plates of mesosomal segments five and six are preserved. Their shape and size are comparable to their dorsal counterparts. The metasoma of the fossil scorpion consists of five segments, plus the telson (Fig. 1a,b). With the exception of segment V, the tail segments are approximately of the same size. Segment V is one and a half times longer than the preceding ones. On each metasomal segment, two longitudinal carinae are preserved. The cuticle surface is densely covered with granules. The telson aculeus is partly covered by segment V, because it is bent backwards (as in life position). The tip of the aculeus is broken. Discussion The scorpion is one individual of the minor fauna (Guisberti et al. 2014), which is the non-fish animal fauna and comprises inter alia a few arthropod genera. The scorpion is regarded as originally terrestrial, so it was deposited allochthonously. As the depositional environment was not far away from the coastline, it was most probably washed in from nearby land. A definitive phylogenetic position of the scorpion is difficult to determine because of the incompleteness of the specimen. According to the observed characters-the general morphology, shape of pedipalp segments (i.e., femur, patella and chela), presence of a small apophysis on the internal aspect of the patella, and the same numbers and positions of some trichobothria-the specimen is unquestionably a representative of the Chactoidea and can be assigned tentatively to the extant family ?Euscorpiidae (see Lourenço 2015). However, because of the incompleteness of the specimen and in particular because of the geological horizon (Eocene), the specimen is assigned provisionally to a new genus. Euscorpiidae were hitherto only known from extant representatives. The family comprises 31 species, belonging to four genera (Lourenço 2015). Eoeuscorpius ceratoi gen. et sp. nov. would be the fifth genus (comprising one species), probably giving the family Euscorpiidae a fossil record that goes back more than 49 million years. Etymology.
The generic name refers to the geological position of the new genus within the Eocene, which is intermediate in relation to elements of the family Palaeoeuscorpiidae Lourenço, 2003 and the extant genus Euscorpius (Lourenço 2003). Type species. E. ceratoi gen. et sp. nov., from the ''Lower Part'' of Pesciara Limestone, Early Eocene, Ypresian stage. Eoeuscorpius ceratoi gen. et sp. nov. Figure 1a, b. Material. Holotype, CMC1 (Museum of the Cerato family, Verona). Fig. 1 a Holotype of Eoeuscorpius ceratoi gen. et sp. nov.; the color of the photograph is inverted. b Line drawing of the holotype; dashed lines of the trunk and legs indicate indistinct segment margins (legs) and dorsal sternite margins, respectively. Densely dashed lines beside and Fig. 2 a, b Right pedipalp chela of Eoeuscorpius ceratoi gen. et sp. nov. br bristles, bt bothria, c carina, mf moveable finger, ti tibia. Scale 1 mm Fig. 4 a Left and right chelicera of Eoeuscorpius ceratoi gen. et sp. nov., fluorescence illumination. b Line drawing of chelicera. bc bristlecomb, d distal tooth, sd subdistal tooth, m median tooth, mf moveable finger, ti tibia. Scale 1 mm Table 1 Measurements of Eoeuscorpius ceratoi gen. et sp. nov. Values are length/width in mm. Columns (tagmata, anterior to posterior): ch, pd, wl 1, wl 2, wl 3, wl 4, Sternal, ms1, ms2, ms3, ms4, ms5, ms6, ms7, ts1, ts2, ts3, ts4, ts5, t sting. Rows (appendage segments): cx, tr, fe, pa, ti, bt, ta (l/r, left/right). Sternal plates: 2.5/6.2, 2.5/6.2. Dorsal carapace, left half: 4.8/3.2 (complete width 6.4). Tergal plates: 0.9/5.1, 0.9/5.8, 1.4/6.0, 1.6/6.6, 2.3/6.7, 2.4/6.5, 3.1/4.8, 2.4/2.9, 1.9/2.7, 2.4/2.3, 2.6/2.5; ts5 3.4/2.9; t sting 4.1/1.8. cx l, cx r: no values given. tr l: 1.2/1.9. tr r: 1.8/2.7, 1.1/1.2. fe l: 3.5/2.0, 3.2/1.6, 2.4/1.4. fe r: 4.6/2.2, 2.3/1.1, 2.3/?. pa l: 2.4/2.0, 2.4/1.5, 3.0/1.2. pa r: 3.1/2.2, 2.0/1.4, 3.1/1.2. ti l: 1.9/1.3, 6.9/4.7, 1.6/0.7, 3.3/1.6, 2.7/1.6. ti r: 1.9/1.3, 6.9/4.5, 1.6/0.7, 2.7/1.5, 2.7/1.2. bt l: 1.4/0.5, 1.8/0.7, 1.3/0.8. bt r: 1.6/0.7, 1.2/0.6. ta l: 1.3/?, 0.5/0.4, 1.5/0.6, 1.6/0.6. ta r: 1.3/?, 4.3/0.9, 0.8/0.4, 1.6/0.4. Laminated structure (near carapace): 1.3/0.8. Eye hole: 0.6/0.7. First row is divided into the tagmata from anterior to posterior. First column is divided into appendage segments l/r left/right, cx coxa, tr trochanter, fe femur, pa patella, ti tibia, bt basitarsus, ta tarsus, ch chelicera, pd pedipalp, wl walking leg, ms mesosomal segment, ts tail segment Acknowledgements Before all others, we thank Mr. Massimo Cerato (Verona) for access to his fossil scorpion. We also sincerely thank Roberto Zorzin (Verona) for friendly communication, for information on the fossil and helpful comments on the manuscript. Additionally, we thank Torsten Wappler (Bonn) for technical help with pictures and Georg Oleschinski (Bonn) for the high-quality photographs. Additionally, we would like to thank the reviewers (Jason Dunlop and Andrea Rossi) and editors (Joachim Haug and Mike Reich) for helpful comments and constructive suggestions, which helped to improve our manuscript.
20,485
[ "1011345" ]
[ "154688", "519585" ]
01755339
en
[ "shs" ]
2024/03/05 22:32:10
1999
https://amu.hal.science/hal-01755339/file/Bangkok%20streets%20in%20Thai%20short%20stories.pdf
The aim of this paper is to compare the vision of the city and its streets in Thai modern short stories. This study is based on the research I am doing for my thesis entitled : "Fiction, ville et société : les signes du changement social en milieu urbain dans les nouvelles thaïes contemporaines" at the Institut National des Langues et Civilisations Orientales (Paris). This research focuses on short stories by four authors who all received the Southeast Asia Write Award : Atsiri Thammachot, Chat Kopchitti, Sila Khomchai and Wanit Charungkit-anan. In my corpus of short stories, I chose six which show different aspects of the street, such as : description, traffic jam, relations between people in the street, vision of the street according to social class, and conflict between tradition and modernity. Corpus Besides the fact that the four authors chosen for my corpus have received the Southeast Asia Writer Award, they all belong to the same generation (born between 1947 and 1954) and are all well-known in their country. The four of them were born in the countryside and came to Bangkok to attend university. They retain from their childhood in provincial towns a nostalgia that is apparent in their short stories and novels. By Atsiri Thammachot, I have selected two short stories : " Thoe yang mi chiwit yu yang noy ko nai chai chan " (She is still alive, at least in my heart) and " Thung khra cha ni klai pai chak lam khlong sai nan " ( It is time now to escape far from this khlong). Both are extracted from the collection of short stories entitled "Khunthong chaw cha klap mua fa sang" (Khunthong, you will come back at dawn) for which Atsiri got the SEA Write in 1981. Born in 1947, Atsiri spent his childhood in Hua Hin where his parents had fisheries. After studying journalism at Chulalongkorn University, he began to work with the Sayam Rat, where he is still working today, writing short stories at the same time. In his literary work, the action is often located in the countryside or in small towns, and the problems of urbanisation, modernisation and social change are recurrent. He also focuses on the return to the village after working in Bangkok, as shown in the short story Sia lew sia pai (What is gone is gone). The first short story, Thoe yang mi chiwit yu yang noy ko nai chai chan, is taking place during the events of October 1976. Coming out of his house, a journalist runs into a young woman who is involved in the struggle for democracy. Days later, he receives the list of the people killed in the massacre. The name of the woman is in it. Atsiri shows in this short story how a journalist can see himself as a coward and feel ashamed for not having had the courage to take part directly in the events. In Thung khra cha ni klai pai chak lam khlong sai nan, the author shows how a woman, living alone with her two children in a hut on a khlong bank, feels so bad about the life and the surroundings she is offering to her children, that she decides to quit the khlong. This short story points out how the city can destroy people. Chat Kopchitti, born in 1954 in Samut Sakhon, studied art in Bangkok. He had many jobs before he decided to be only a writer. According to Marcel Barang : " He wasn't quite 20 when he decided that creative writing was his life and five years later he turned down a life in business to gamble on a literary career" (Barang, 1994, p.334). Chat got the SEA Write twice : once in 1982, for his novel Khamphiphaksa (The judgement), and again, in 1994, for Wela (Time). 
In 1983, Raphiporn declared : "Chat Kopchitti, with his novel The Judgement, breaks new ground for this literary genre, with a more brilliant composition that we, the elders, had never thought of." [START_REF] Fels | Promotion de la littérature en Thaïlande[END_REF]. Chat published many novels and short story collections, showing the life of people living in marginality (Phan ma ba) or in social rupture (Khamphiphaksa). Some of his short stories, written like tales, such as Mit pracham tua and Nakhon mai pen rai, point out the flaws of society and the use of power by the elite. Chat's vision of society is rather pessimistic. The short story Ruang thamada (An ordinary story) presents the relations between the narrator and an old woman whose daughter is dying of cancer. The narrator is witness and actor at the same time. Beyond the relationship between these two characters, the story shows all the problems encountered by people coming from the countryside. The conflict between tradition and modernity is evident in this text : traditional healer versus modern doctor ; village customs versus city behaviours ; solidarity versus individuality. Throughout the story, the narrator speaks to the reader, using khun, to involve him in the action. Khrop khrua klang thanon (A family in the street), written by Sila Khomchai, is one of the most representative short stories of this corpus because the whole story takes place in the street. Actually, the street is the main character in the story, which recounts the life of a middle-class couple. Living in the suburbs of Bangkok, they spend most of their time in their car, stuck in traffic jams, eating, reading, speaking, playing, observing the others. Sila Khomchai (born in 1952 in the province of Nakhon Si Thammarat) took an active part in the events of 1973-1976 and had to hide in the jungle after the massacre. When he came back to Bangkok in 1981, he became a journalist and continued to write fiction. While most of his short stories and novels reflect his political commitment, Khrop khrua klang thanon is more a criticism of urban life and, to some extent, of the middle class. This short story won the SEA Write in 1993. The second story by Sila Khomchai that I chose is entitled Khop khun ... Krungthep (Thank you Bangkok). In this short story, a taxi driver and his customer, going through Bangkok at night, develop a strange relationship : each imagines that the other is going to attack him. During the whole journey, they are anxious, with feelings of fear and mistrust. The last author I will consider here is Wanit Charungkit-anan. Born in 1949, in the province of Suphan Buri, Wanit studied at Silpakorn University in Bangkok. Editor, columnist, poet, author of numerous novels and short stories, Wanit is a very famous writer. He received many awards, among them the SEA Write in 1984 for his short story Soi diaw kan (The same soi). The short story I selected, Muang luang (The capital), is quite well-known in Thailand and even abroad, since it has been translated into several languages. This story gives a very different vision of the city from Sila's Khrop khrua klang thanon. The main character is a rather poor worker in Bangkok who describes the street and the people seen from the bus. The description he makes is very pessimistic : tired and desperate workers, inextricable traffic jams, a dangerous and hostile city. In the bus, a man from Isan province is singing a folk song that gives the narrator a deep feeling of nostalgia.
Thematic analysis The usual definition of 'street', as we can find it in dictionaries, is reduced to a minimum. In English dictionaries, as well as in French or Thai dictionaries, the street is defined as a town or village road with houses on one side or both. Beyond the simple description, the aim of this research is to analyse the way the authors see the street and show it in short stories as a social area where all the different communities of the city pass by one another, having some contact or not. In Bangkok, the life in the street is very rich. Three kinds of 'street' -in the meaning of 'way of communication' -can be distinguished : the large avenues, the soi and the khlong. A fourth kind, reserved for motor vehicles, is the express highway, created to break up the traffic jams. Highways are now very extensive, forming a second level of road network above the old streets. The organisation of traffic in the avenues and the soi is rather difficult, especially because many soi are dead ends, making communication between the large roads almost impossible. Life in avenues is quite different from life in soi. While the large streets seem to be only ways of communication and commercial areas, the soi are where people live, re-creating the village. Nowadays many khlong, traditional waterways, have been filled in and covered by roads and buildings. Although Bangkok is no longer the Asian Venice, khlong still have a role in communication. In Thon Buri, of course, but also in other districts of the town, boats transport people and goods, using the khlong as a street. Bangkok, and the city generally, is viewed by the authors of my corpus as a terrible place, for the conditions of life (traffic jams, housing far from the centre, difficulties in finding a job...) and for the relations between the people. Urban society and the city are often described as monsters devouring people from the countryside, leading astray men and women and getting more and more westernised. In his book Aphet kamsuan (Bad omen), Win Liaw-warin gives a dictionary of life for middle-class people in Bangkok. Under 'Krungthep', he writes : "If Krungthep were a woman, she would be a woman of easy virtue fascinated by the cheap Western culture". About the word 'dream', Win says : "There are two kinds of dream : the good one is dreaming that you fall in Hell (and wake up in Krungthep) ; the bad one is that you go to Heaven (and wake up in Krungthep)" [START_REF] Liaw-Warin | Aphet kamsuan[END_REF]. This shows well enough the feelings of the writers, which are shared by a lot of Bangkok inhabitants. The different themes about the street in the city, which appear in short stories, reveal the importance of the street in the social urban context. Description In most of the short stories of my corpus, the descriptive part is not a very important one. Due to the shortness of the text or by choice of the author, the emphasis is more on the characters and the action than on the description. However, some places are described throughout the story. The opposition between avenues and soi is quite evident in Chat's story Ruang Thammada. Living in an old wooden house located in a soi, the narrator makes a distinction between the street where he lives and the streets around it, which he calls "the jungle", and the big avenues, where "one can find all things making civilisation, as luxurious hotel, cinema halls, massage parlours, bowlings, restaurants, bookshop (...) and very smart people. (...)
The atmosphere is perfumed and air-conditioned, people look beautiful, there are lifts, escalators and other signs of progress. Coming from the soi is like coming out of barbarism and emerging in the centre of a fairy-tale city, except that it is a real city." Speaking about the city, Chat names it muang neramit, the 'city built by supernatural powers'. The opposition between soi and main street is really evident in this text. Atsiri, in Thoe yang mi chiwit yu..., describes the soi (which is actually called trok) where the narrator lives as "long and narrow as a railroad". The narrator has to walk to reach the main road, since no buses run through his soi. The khlong as a way of communication is well represented by Atsiri in his short story Thung khra cha ni klai... The mother and her two children live in a hut under an arch of a bridge crossing over a dirty khlong, surrounded by big buildings. Just as in a street, vendors pass along the khlong, paddling in the stream. Even the sex market is present: at night, the family can hear the prostitutes paddling up and down the khlong. Afraid that her little girl could become one of them, the mother decides to leave the dirty khlong. During the night, the city changes its appearance. Lights, streets and people are not the same as in the daytime. For the narrator of Ruang thamada (Chat), the city at night is a place of pleasure. At the end of the story, after the death of his neighbour's daughter, the narrator decides to go to the city: "Tonight, I am going to walk around, to sit somewhere having a drink, or even to get a girl in the fairy-tale city". In Khop khun...Krungthep, Sila shows a deserted city, crossed by fast cars and illuminated by advertising lights. It is two o'clock in the morning: "[the taxi] goes fast in the dark streets. In the headlights, some closed buildings appear on both sides of the street; the sidewalks are deserted. From time to time, the headlights of another car shine while passing the taxi, in a roar of engine". Arriving at Anusawri Chai: "[the place] is empty and wide. The white shining lights of the street lamps create a warm atmosphere. The advertising billboards pierce the black screen of the night with multicoloured and flashing lights". This description is quite far from that of the daytime city, crowded, polluted, and congested! Traffic jam From the moment a short story is set within the context of the city, traffic jams take a central role in the story. The congested streets are described in many short stories, but in two of them they are almost the principal characters. Muang luang, by Wanit, and Khrop khrua klang thanon, by Sila, give two very different visions of traffic jams in Bangkok. Sila's family, actually a middle-class couple, drives around Bangkok, spending most of their time in the car. Having an appointment at three o'clock in the afternoon, they decide to leave their house, located in the northern suburbs, at nine in the morning. The husband, who is also the narrator, describes the way his wife prepares the car: "She puts on the back seat a basket full of food, an icebox with cool drinks (...) She also puts in some plastic bags for rubbish, a spittoon, a spare suit hanging above the window. Just as if we were going on a picnic!" In the car, they eat, play, listen to the radio, and even make love. They think about the new car they want to buy, a more spacious one.
This is especially true at the end of the story, when the wife announces to her husband that she is pregnant: "My wife is pregnant! Pregnant in the street..." the husband wants to yell. For this couple, traffic jams become more or less a way of life: the car is a means of transport, a house, and an office as well. The hero of Muang luang, by Wanit, does not have the same reaction to the city and its traffic jams. A bus user, he feels exhausted and sick of his life in Bangkok. Spending hours packed tightly in buses stuck in traffic jams, he dreams of the village where he was born, of the girl he left there. At a crossroads, the traffic jam is so long that he gets off the bus: "How could the cars move? Going through this city is so difficult. The traffic lights have no meaning. Cars which get the green light cannot move because other cars are stuck in the middle of the crossroads. (...) The green light turns red; on the other side, the red light turns green. And it all comes to the same thing: cars crawl along and stop". And when the narrator walks in the street, it is even worse: "I was feeling so bad I could die when I was waiting to cross the street at the Rachaprasong corner. I was standing on a traffic island, exposed to polluted fumes, almost wanting to spit. (...) I was suffocating, almost blacking out". Relations between people in the street Reading these short stories gives the feeling that relations between people in Bangkok's streets are quite similar to those encountered in European capitals. The main feeling shown in the texts is indifference towards others, just as in Paris. This indifference is especially clear in Muang luang (Wanit). The hero, walking down the street to the bus stop, is almost hit on the head by a stone falling from a building under construction. Nobody notices the fact, neither the other pedestrians nor the workers. Arriving at the bus stop, he looks at the people waiting for the bus: "The people waiting at the bus stop are as usual. Nobody pays attention to the others". After the struggle to get on the bus, the narrator tries to find a seat in the full bus: "Two children and their mother are holding on to the back of a seat, standing in the middle of the bus. A young guy is sitting in front of them, but he does not think of giving up his seat. I do not blame him; if I were sitting, I am not sure I would give my seat to someone else". The indifference sometimes verges on non-assistance. Chat describes in Ruang thamada how people walk past a man lying on the sidewalk: "(...) quite often, I see somebody lying on the sidewalk or on a footbridge. People come and go, but nobody stops, nobody takes care of him, nobody takes time to check if he is still alive, or if he is still breathing. People pass in front of him as if he were a rubbish heap; some of them do not even see him. This is an ordinary story (in our urban society). If someone stops to check or to give assistance, that is extraordinary". In other cases, people feel contempt for those who act in an unusual way. When the Isan man begins to sing in Muang luang, some people appreciate it, but most of them laugh with contempt, looking at him as if he were crazy. Along with indifference and contempt, a third feeling is shown by the inhabitants of the city: fear. Fear of each other, even when there is no reason to feel it. Sila's short story Khop khun ... Krungthep illustrates this irrational feeling well enough.
The taxi driver, remembering that a friend of his was attacked one night and that every day, in the newspapers, he can read about violence, becomes really frightened of his customer. Tall and strong, the customer wears a thick moustache and has a scar on the cheekbone, under the left eye. He holds tightly on his knees a black bag that seems very precious. His odd look makes the taxi driver really nervous and anxious. The driver tries several times to start up a conversation with the customer, but the latter answers only with a few words. Actually, the customer is afraid of the driver, and the driver of the customer. Throughout the story, they feel more and more anxious, suspicious, and frightened, until they arrive at the house where the customer wants to go. After he leaves the car, both of them feel relieved and thankful. Thankful to each other, and thankful to the city, which is not as bad as they thought. That is why the short story is entitled Khop khun ... Krungthep. Fortunately, relations between people in the street are not always so bleak! The characters of the short stories sometimes encounter people they like or who give them good feelings. The hero of Khrop khrua klang thanon takes advantage of being stuck in traffic jams to meet people who could be useful in his job. Walking around his stopped car, he speaks with other men: "We speak about our problems, we criticise politics, we chat about business or sport. We are like neighbours. (...) I work in the advertising business (...) I sometimes find unexpected customers". Later, the narrator meets a strange guy who is planting out banana trees on the central strip of the road. The guy wants to plant more and more trees to fight pollution. Despite the discouraging feelings shown by the hero of Muang luang, he has an encounter on the bus which changes his state of mind. When he hears the Isan man singing, the narrator thinks at first that he is dreaming. Listening to the folk song, his mind is transported to his village, to his girlfriend. It makes him feel better and forget his bad situation in Bangkok. At the end of the story, the narrator gets off the bus, following the singer. He asks him: "Excuse me for asking, but are you crazy?" The singer answers: "No, but I wish I were". The journalist of Thoe yang mi chiwit yu... (Atsiri) meets a young woman who is running in his soi, frightened by the people chasing her. The story takes place during the events of October 1976, and the young woman is carrying political posters. Although they speak for just a brief moment, the journalist feels deeply involved in this encounter. When the girl leaves, she gives him her name, which he writes on a piece of paper. Later on, he finds her name on the list of the people killed in the massacre. This encounter symbolises the relation between the people who were involved in the political events and those who did not dare to be. There is a lot of emotion throughout this short story and, despite the sadness, a kind of hope. Vision of the street according to social class As seen before, the vision of the street and of the city differs greatly depending on whether the heroes are car users, bus users or pedestrians. And of course, the means of transportation is usually connected to social class.
In the short stories chosen for this paper, three kinds of people are represented : the middleclass, in Khrop khrua klang thanon, in which the hero is working in advertising and seems quite fashionable, living in modern style, and appreciating the urban life ; the employees, in Ruang thamada or Muang Luang, who have definitely not the same standard of life and who did not really choose to live in Bangkok but have to for economic reasons ; the very poor, in Thung khra cha ni klai... , who have to struggle for life at every moment, living in a slum, having no job, and feeling bad because the children are not living in good conditions. Even if, sometimes, the narrator of Khrop khrua klang thanon seems to regret the countryside, he appears like a very integrated person in the urban society. Living in the suburbs, he points out a paradox : "If we were poor, we could live in a slum in the heart of the city, as high class people who reside in condominiums (...)". It is actually what is shown in Thung khra cha ni klai ... The poor woman who lives in a hut by the khlong is surrounded by rich buildings and restaurants. The hero of Khrop khrua klang thanon is attached to the signs that prove his social status : the place where he lives, and the car : "Having a car allows us to rise our social position". On the opposite, the hero of Muang luang endure his life in Bangkok with a lot of difficulty. He is suffering of the transports, the heat, the loneliness. Forced to come in Bangkok for working and living, he always keeps in mind his province : "If only I could choose ! I should not be in this terrible big city". The mother in Thung khra cha ni klai... has a vision of the city even worse. She compares the city to a tiger, that pulls her life to pieces. She came also from countryside, with her husband. He promised her that they will get a better life, jobs and money. But then, he left her with their two children and disappeared in the big city. And her life is worse than before, because of the city and the hard urban life. Conflict between tradition and modernity The opposition between tradition and modernity is to be seen in a lot of short stories. Many themes are linked to this conflict, especially the nostalgia for the province. For most of the characters of short stories -and thus, for the authors -the tradition as found in villages is often idealised in opposition to the bad effects of the modern city. The narrator of Muang luang feels really nostalgic listening to the Isan folk song : "Yes, it is that ! Exactly ! Behind my house, there was some palm trees. I plaid flute, I was an applauded singer of ram wong in the village". Thinking about his girlfriend, he dreams : "To take along my girlfriend in a boat, for fishing together. It is a dream I have, but it is only a dream". But still, he keeps the sense of reality, saying : "I would like to come back in my place, in province. I would like it so much, but what can I do there ? There is no job at all, except to fish or to collect shellfishes. Not enough for living expenses. I could not stand a job of labourer in a rice-processing factory". Even the narrator of Khrop khrua klang thanon, who seems to like his urban life, think about the traditional way of life : "I know that after we, human beings, have destroyed the nature all around us, our own inner nature has been consumed by urban life, pollution, traffic jams... The family life, that was an hymn to happiness by its rhythm and elements, felt in incoherence and instability". 
The characters of Ruang thamada, living in an old house in a soi, are re-creating the village life in their house. The old woman, refusing modern medicine in hospital, calls a traditional healer to cure her daughter. Although she is living in Bangkok, not far from hospitals, she reacts as if she was still inhabiting a village. Traditional healer, astrologer, masseuse are trying to remedy to the cancer, but without success. The narrator tries several times to persuade the mother to take her daughter to hospital, but she refuses, arguing that the modern methods were not efficacious and too expensive. The narrator does not dare to insist, feeling that if the girl dies in hospital, the mother will accuse him. The narrator is really representative of the young employees class in urban society. He is always hesitating between tradition and modernity, solidarity and indifference, commitment in traditional values and fascination for Westernised city. These oppositions are symbolised by the three conditions that determined his choice for a room : "I wanted a room that be cheap, near civilisation and far from crowd". Conclusion As seen in this paper, short stories are a very rich material for studying about city and urban society. I tried here to expose only a few themes connected to the street as a social area, but, of course, a lot of other themes can be analysed, especially about how the traditional ways of life are re-created in urban environment and how the urban specifies are taken back to villages. If the four authors of my corpus have distinct visions of the city and urban culture, they all point out the changes of the Thai society and the transformations of the traditional values in contact with modernisation and Westernisation of the city. Prospects of research about literature and city, are obviously wide and numerous. Since the city is in perpetual change and development, we can imagine that literature will follow the same way. How the financial crisis -that makes the city changing too -will be perceived and shown by Thai writers should be a very interesting point to study. Loved and hated, Bangkok makes everybody concerned : inhabitants, writers, researchers and even tourists... In his dictionary, Win Liaw-warin writes : "If Krungthep were a cocktail, it would be composed of : 10% of natural sweetness ; 40% of synthetic sweetness ; 30% of lead essence ; 20% of dirty sediments". Let us hope that the natural sweetness will grow up. Selected bibliography Corpus Atsiri Thammachot
26,864
[ "9413" ]
[ "191048" ]
01755560
en
[ "spi" ]
2024/03/05 22:32:10
2016
https://hal.science/hal-01755560/file/I2M_ECSSMET_2016_LAEUFFER.pdf
Hortense Laeuffer email: hortense.laeuffer@u-bordeaux.fr Brice Guiot Jean-Christophe Wahl Nicolas Perry email: nicolas.perry@ensam.eu Florian Lavelle Christophe Bois email: christophe.bois@u-bordeaux.fr RELATIONSHIP BETWEEN DAMAGE AND PERMEABILITY IN LINER-LESS COMPOSITE TANKS: EXPERIMENTAL STUDY AT THE LAMINA LEVEL The aim of this study is to provide a relevant description of damage growth and the resultant crack network to predict leaks in liner-less composite vessels. Tensile tests were carried out on three different laminates: [0 2 /90 n /0 2 ], [+45/-45] 2s and [0/+67.5/-67.5] s . Number n varies from 1 to 3 in order to study the effect of ply thickness. Transverse crack and delamination at crack tips were identified with an optical microscope during tensile loading. A length of 100 mm was observed for several loading levels to evaluate statistical effects. Results highlight a preliminary step in the damage scenario with small crack densities before a second step where the crack growth speeds up. In bulk, cross-section examinations showed that no delamination occurred at crack tip in the material of the study (M21 T700). Cross-section examinations were also performed on [+45/-45] 2s and [0/+67.5/-67.5] s layups in order to bypass the issue of free edge effects. Damage state in those layups was shown to be significantly different in the bulk than at the surface. Observations of the damage state in bulk for those layups demonstrated that there is no transverse crack in [+45/-45] 2s specimens subjected to shear strains up to 4%, and that interactions between damage of consecutive plies strongly impact both the damage kinetics and the arrangement of cracks. These elements are fundamental for the assessment of permeability performance, and will be introduced in the predictive model. INTRODUCTION Designing liner-less composite vessels for launch vehicles enables to save both cost and weight. Therefore, this is a core issue for aerospace industry. In usual composite vessels, the liner is a metallic or polymer part in charge of the gas barrier function. One challenge when designing a liner-less vessel is to reach the permeability requirement with the composite wall itself. Pristine composite laminates meet the permeability requirement, but as those materials are heterogeneous, damage growth may occur in cases of low thermo-mechanical loads. Transverse cracks and micro-delamination in adjacent plies grow and may connect together (Fig. 1), resulting in a leakage network through the composite wall. On the ply scale, transverse cracks are generated by transverse stress (mode I) and shear stress (mode II). Developing a model that predicts damage densities and arrangement for any laminate requires reliable experimental data for both modes. The aim of this study is to provide a relevant description of damage growth and the resultant network to predict leaks. The first part of this experimental study describes the method for damage characterisation. The second part presents the results obtained on the evolution of damage densities. Finally, the third part is devoted to the morphology of the damage network. METHOD FOR DAMAGE CHARACTERISATION Damage description The meso-damage state of each ply of a laminate can be described by two damage densities, as illustrated in Fig. 2: the crack density ρ, which is the average number of transverse cracks over the observed length L, and the micro-delamination length µ, which is the average length of microdelamination at each crack tip. 
The corresponding dimensionless variables are defined by:

the crack rate: $\bar{\rho} = \rho\, h = \dfrac{N}{L}\, h$  (1)

the delamination rate: $\bar{\mu} = \mu\, \rho = \mu\, \dfrac{N}{L}$  (2)

where h denotes the ply thickness and N the number of cracks observed over the length L. Crack and micro-delamination rates are averaged variables, so that the damage state is considered homogeneous in each ply. When used as damage variables in a model, the damage densities define the crack pattern, which is commonly considered as periodic [START_REF] Ladevèze | Relationships between 'micro' and 'meso' mechanics of laminated composites[END_REF]. Experimental values and evolution laws of the damage densities can be obtained by optical microscopy [START_REF] Huchette | Sur la complémentarité des approches expérimentales et numériques pour la modélisation des mécanismes d'endommagement des composites stratifiés[END_REF][START_REF] Malenfant | Étude de l'influence de l'endommagement sur la perméabilité des matériaux composites, application à la réalisation d'un réservoir cryogénique sans liner[END_REF][START_REF] Bois | A multiscale damage and crack opening model for the prediction of flow path in laminated composite[END_REF] or by X-ray tomography. Instrumentation and method Transverse cracks are generated by transverse stress (mode I) and shear stress (mode II). Characterising damage growth in the ply in both modes, or in mixed mode I + II, therefore requires testing different laminates. In this study, three different laminates were tested: $[0_2/90_n/0_2]$, $[+45/-45]_{2s}$ and $[0/+67.5/-67.5]_s$. The effect of ply thickness is also studied by varying the number n from 1 to 3. In the present work, the assessment of the damage densities was carried out by optical microscopy for several loading levels. Tensile tests were performed on specimens polished on one edge. Transverse cracks and delamination at crack tips were identified with a travelling optical microscope (Fig. 3). The observation area was chosen quite large, i.e. about 100 mm, in order to evaluate statistical effects. The material is carbon fibre (T700) with a thermoset matrix (M21). Plate samples were manufactured by Automated Fibre Placement (AFP). The thickness of the elementary layer is 0.26 mm. Several layers of the same orientation can be stacked together in order to obtain a thicker ply, e.g. in a $[0_2/90_3/0_2]$ laminate the thickness of the 90° ply is 0.78 mm. A specific protocol was applied in order to bypass the issue of free-edge effects. Observation of the surface under loading was combined with cross-section examination: polishing of the edge was performed after loading in order to remove from 15 µm to 2 mm of the edge surface and observe the damage state in the bulk. After polishing, the sample was loaded to a lower level so that damage did not propagate but cracks opened and became easier to distinguish. These steps were repeated for increasing maximum loads to obtain the evolution laws of damage in the bulk. Besides crack-growth characterisation, performing several cross-section examinations through the width of a specimen also makes it possible to study the arrangement of cracks in three dimensions. EVOLUTION OF DAMAGE DENSITIES Edge effects Cross-section examinations were carried out on damaged tensile-test specimens of $[0_2/90_n/0_2]$ and $[+45/-45]_{2s}$ layups. Measurements were performed on representative areas about 50 mm long.
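For readers who wish to reproduce the post-processing of such measurements, a minimal sketch is given below, following the definitions (1)-(2); the function and variable names and the example values are illustrative assumptions, not data from this study.

```python
import numpy as np

def damage_rates(crack_positions_mm, delam_lengths_um, ply_thickness_mm, observed_length_mm):
    """Dimensionless crack rate and micro-delamination rate, eqs. (1)-(2).

    crack_positions_mm : positions of transverse cracks along the observed edge
    delam_lengths_um   : micro-delamination length measured at each crack tip
    """
    N = len(crack_positions_mm)
    rho = N / observed_length_mm                       # crack density (cracks per mm)
    rho_bar = rho * ply_thickness_mm                   # dimensionless crack rate, eq. (1)
    mu = np.mean(delam_lengths_um) / 1000.0 if N else 0.0   # mean delamination length (mm)
    mu_bar = mu * rho                                  # dimensionless delamination rate, eq. (2)
    return rho_bar, mu_bar

# Illustrative values only (not measurements from the paper):
positions = np.array([4.1, 11.3, 19.8, 27.0, 36.5, 44.9])   # mm, over a 50 mm window
delams = np.array([32.0, 18.0, 25.0, 0.0, 41.0, 12.0])      # µm at each crack tip
print(damage_rates(positions, delams, ply_thickness_mm=0.26, observed_length_mm=50.0))
```

The same bookkeeping applies whether the crack positions come from optical micrographs or from tomography slices; only the measurement of the inputs changes.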
Fig. 4 shows a transverse crack and the associated micro-delamination at the crack tips on the edge of the sample, before and after removing 15 microns by a polishing procedure. 15 microns under the edge surface, no visible delamination remains. Fig. 5(a) presents the evolution of the micro-delamination rates before and after polishing for three ply thicknesses. Independently of the ply thickness, the micro-delamination rate tends quickly to zero. Cross-section examinations were continued in order to verify the length of the cracks. The crack rate with respect to the polishing depth is reported in Fig. 5(b). Cracks were shown to be continuous for the double and triple 90° plies, while a few cracks disappeared in the $[0_2/90_1/0_2]$ lay-up. This is consistent with edge effects in this layup. This variation of crack rate for the simple ply is about 15% of the crack rate at the surface (see Fig. 6). The $[+45/-45]_{2s}$ layup was subjected to shear strains up to $\varepsilon_{12} = 4\%$, polished and then observed. For the highest strain and after removing 1.5 mm, only a few transverse cracks remained in the central (and double) ply and in the external plies. Deeper, after removing 3 mm of the surface, no crack remained at all. To exclude the possibility of cracks being closed too tightly to be observable, the sample was observed under loading. The results demonstrate that neither transverse cracks nor micro-delamination occur in the bulk under shear load. This phenomenon can be attributed to the use of a toughened matrix (M21) with thermoplastic nodules. For another material, e.g. with a more brittle matrix, delamination would likely be lower in the bulk than at the surface but might persist in the bulk. Although no meso-damage is observable, this layup nevertheless undergoes irreversible strains and stiffness loss due to diffuse damage at the micro (fibre) scale. However, even if shear loading alone does not lead to transverse cracking in this material, the shear component associated with a tensile loading will likely contribute to crack growth. Quantifying this contribution may require additional testing on other layups.

Fig. 5. Damage rates with respect to the polishing depth y after application of the transverse strain ε, for $[0_2/90_n/0_2]$ lay-ups (n = 1, 2, 3): (a) delamination rate $\bar{\mu}$ vs polishing depth y (µm); (b) crack rate $\bar{\rho}$ vs polishing depth y (mm).

Fig. 6. Crack rate $\bar{\rho}$ with respect to the transverse strain $\varepsilon_{22}$ at the surface of the edge and in the bulk, for the $[0_2/90_1/0_2]$ lay-up.

Evolution of transverse cracking Evolution of transverse cracking was observed, as described in Section 2.2, for $[+45/-45]_{2s}$ and $[0_2/90_n/0_2]$ layups with n varying from 1 to 3. As the former was shown not to be subjected to transverse cracking, this section is devoted to the latter. The positions of the cracks measured at several loading levels on a $[0_2/90_1/0_2]$ specimen are plotted in Fig. 7(a). The corresponding crack rate has been computed for the whole observed area and also for each separate third of the same area, see Fig. 7(b). The two figures highlight the benefit of observing a large enough, and hence representative, area. Depending on the piece of sample one chooses to focus on, the initiation and slope of crack growth may be very different, at least concerning the beginning of crack growth. This is likely due to weak areas (defect locations) that drive the position of the first cracks.
The effect of defects on the damage threshold vanishes when damage increases: for higher strains, the distance between consecutive cracks becomes more homogeneous, and no major difference can be observed between the three small areas. The average crack rate computed over the whole length reveals a preliminary step with a progressive onset of cracking. The evolution of the damage densities with the applied strain for two ply thicknesses is presented in Fig. 8. In all cases, a preliminary step in the damage scenario, with small cracking rates, was observed before a second step in which crack growth speeds up. The curves show an increase in the cracking threshold when the ply thickness decreases. This phenomenon is explained by the energy released by the fracture being lower for a thinner ply. This point is widely described in the literature [START_REF] Parvizi | Constrained cracking in glass fibre-reinforced epoxy cross-ply laminates[END_REF][START_REF] Gudmundson | Initiation and growth criteria for transverse matrix cracks in composite laminates[END_REF][START_REF] Leguillon | Strength or toughness? A criterion for crack onset at a notch[END_REF] and obviously makes thin plies very competitive for liner-less composite vessels. The modelling of the preliminary step is fundamental for the prediction of the first leak path. This phenomenon, due to the variability of the material properties, can be introduced into the damage model through a probabilistic approach [START_REF] Nairn | Matrix microcracking in composites[END_REF]. Evolution laws for transverse cracks and micro-delamination were built based on Finite Fracture Mechanics and energy release rates [START_REF] Tual | Multiscale model based on a finite fracture approach for the prediction of damage in laminate composites[END_REF][START_REF] Laeuffer | A model for the prediction of transverse crack and delamination density based on a strength and fracture mechanics probabilistic approach[END_REF]. Probability density functions are defined for the fracture toughness and the strength threshold. A set of simulations is then performed and the average response of the model is computed according to the weights associated with the density functions. The results of the simulations for both ply thicknesses are presented in Fig. 8 by dotted and dashed lines. The preliminary step in the damage scenario is well described, even though the probability densities have not yet been accurately identified. MORPHOLOGY OF THE DAMAGE NETWORK To study the shape of a damage network, damage observation under loading and cross-section examination were applied to $[0/+67.5/-67.5]_s$ specimens. The interest of this lay-up is that transverse cracking can occur in three different and consecutive plies, leading to the creation of a network. The results presented here concern one damage network (without further damage growth) observed at several polishing depths, from 0.2 mm to 5 mm. At the scale of the specimen, the results provide an overview of the network. The positions of the cracks are schematically drawn in Fig. 9 from the observations. The centre ply, at −67.5°, is also a double ply, and thus has a lower cracking threshold. In this ply, cracks occurred first and were continuous through the width. Conversely, cracks in the simple +67.5° plies were short and located around the cracks of the double ply. Hence, the existence of cracks in an adjacent ply drives the position and the length of new cracks.
Moreover, the cracking threshold of the simple plies is also modified: cracks occur almost simultaneously in the three plies despite their different thicknesses. Tests on [0/+67.5/0/67.5]$_s$ specimens, in which the cracked plies are isolated, are scheduled. Comparing the results obtained with isolated and non-isolated plies will make it possible to quantify the effect of the interaction between cracks in adjacent plies. Fig. 10 focuses on the intersection of three cracks. At the interface between two plies, delamination connects the cracks. This makes the connection area larger than the Crack Opening Displacement (COD) at the crack tip and may increase the leakage rate induced by a leak path. CONCLUSION Transverse cracks and delamination at crack tips were identified with an optical microscope during tensile loading. A length of 100 mm was observed for several loading levels. This made it possible to highlight a preliminary step in the damage scenario, with small crack densities and progressive growth, before a second step with steeper growth. In the bulk, cross-section examinations showed that no delamination occurred at crack tips in the material of the study. Cross-section examinations were also performed on $[+45/-45]_{2s}$ and $[0/+67.5/-67.5]_s$ layups in order to bypass the issue of free-edge effects. The damage state in those layups was shown to be significantly different through the width of the specimens from that at the surface of the edges. In particular, there is no transverse crack in $[+45/-45]_{2s}$ specimens subjected to shear strains up to 4%. It was also observed that crack growth and crack length are modified by the damage state of adjacent plies. These elements are fundamental for the assessment of permeability performance, and will thus be introduced in the model. Predicting the percolation of the network also requires describing the network in terms of the number of connections between two adjacent plies. This could be achieved by using only the crack density and the ply angle, but it is no longer trivial since crack growth and crack length are modified by the damage state of neighbouring plies. Cross-section examinations give an insight into the network pattern; nevertheless, this method is restrictive because it is destructive, not very accurate and time-consuming. Additional experiments involving X-ray tomography are required to accurately characterise the crack network pattern. This kind of experiment remains a challenge because of the mismatch between the size of the crack pattern (3 mm in the case of the $[0/+67.5/-67.5]_s$ specimen) and the size of its elements (COD ≈ 2 µm when the specimen is unloaded). Those experiments will also make it possible to assess the effect of the interactions between damage in consecutive plies.

Fig. 1. Transverse crack and delamination: crack network in two damaged plies and micrograph of one transverse crack with delamination at the crack tip.
Fig. 2. Measurement of the crack density ρ and of the average delamination at crack tip µ in a $[0_2/90/0_2]$ lay-up.
Fig. 3. Damage observation under tensile loading.
Fig. 4. Cross-section examinations on $[0_2/90_2/0_2]$: diagram of the cross-sections and micrographs at the surface (y = 0 µm) and after removing 15 microns (y = 15 µm).
(a) Positions of cracks observed on the edge for 5 loading levels (σ up to 0.99 σᵤ, ε up to about 1.5%).
(b) Crack density ρ at the surface, computed over the windows L = 0-30 mm, 30-60 mm, 60-90 mm and 0-90 mm.
Fig. 7. Damage measurements on a $[0_2/90_1/0_2]$ specimen, ply thickness $h_{90}$ = 0.26 mm.
Fig. 8. Measured and predicted crack rates.
Fig. 9. Crack network in a $[0/+67.5/-67.5]_s$ lay-up: diagram of the network and micrograph of the three damaged plies.
Fig. 10. Intersection between the cracks of three plies in a $[0/+67.5/-67.5]_s$ lay-up for several polishing depths.
ACKNOWLEDGEMENTS The authors acknowledge the council of Région Aquitaine and the French space agency CNES for their support.
18,100
[ "739425", "177001" ]
[ "164351", "164351", "164351", "164351", "307314", "164351" ]
01755718
en
[ "phys", "sdu" ]
2024/03/05 22:32:10
2018
https://hal.sorbonne-universite.fr/hal-01755718/file/Improved_mcRSW_PrePrint.pdf
Masoud Rostami Vladimir Zeitlin email: zeitlin@lmd.ens.fr Improved moist-convective rotating shallow water model 1 "Improved moist-convective rotating shallow water model and its application to instabilities of hurricane-like vortices" Keywords: Moist Convection, Rotating Shallow Water, Tropical Cyclones, Baroclinic Instability We show how the two-layer moist-convective rotating shallow water model (mcRSW), which proved to be a simple and robust tool for studying effects of moist convection on large-scale atmospheric motions, can be improved by including, in addition to the water vapour, precipitable water, and the effects of vaporisation, entrainment, and precipitation. Thus improved mcRSW becomes cloud-resolving. It is applied, as an illustration, to model the development of instabilities of tropical cyclone-like vortices. Introduction Massive efforts have been undertaken in recent years in order to improve the quality of weather and climate modelling, and significant progress was achieved. Nevertheless, water vapour condensation and precipitations remain a weak point of weather forecasts, especially long-term ones. Thus, predictions of climate models significantly diverge in what concerns humidity and precipitations [START_REF] Stevens | What climate models miss?[END_REF] . The complexity of thermodynamics of the moist air, which includes phase transitions and microphysics, is prohibitive. That is why the related processes are usually represented through simplified parameterisations in the general circulation models. However, the essentially non-linear, switch character of phase transitions poses specific problems in modelling the water cycle. Parametrisations of numerous physical processes in general circulation models often obscure the role of the water vapour cycle upon the large-scale atmospheric dynamics. The moist-convective rotating shallow water (mcRSW) model was proposed recently, precisely, in order to understand this role in rough but robust terms. The model is based on vertically averaged primitive equations with pseudo-height as vertical coordinate. Instead of proceeding by a direct averaging of the complete system of equations with full thermodynamics and microphysics, which necessitates a series of specific ad hoc hypotheses, a hybrid approach is used, consisting in combination of vertical averaging between pairs of isobaric surfaces and Lagrangian conservation of the moist enthalpy [START_REF] Bouchut | Fronts and nonlinear waves in a simplified shallow-water model of the atmosphere with moisture and convection[END_REF][START_REF] Lambaerts | Simplified two-layer models of precipitating atmosphere and their properties[END_REF]. Technically, convective fluxes, i.e. an extra vertical velocity across the material surfaces delimiting the shallow-water layers, are added to the standard RSW model, and are linked to condensation. For the latter a relaxation parametrisation in terms of the bulk moisture of the layer, of the type applied in general circulation models, is used. Thus obtained mcRSW model combines simplicity and fidelity of reproduction of the moist phenomena at large scales, and allows to use efficient numerical tools available for rotating shallow water equations. 
They also proved to be useful in understanding moist instabilities of atmospheric jets and vortices [START_REF] Lambaerts | Moist versus dry baroclinic instability in a simplified two-layer atmospheric model with condensation and latent heat release[END_REF][START_REF] Lahaye | Understanding instabilities of tropical cyclones and their evolution with a moist-convective rotating shallow-water model[END_REF]Rostami and Zeitlin 2017;Rostami et al. 2017). The mcRSW model, however, gives only the crudest representation of moist convection. The water vapour can condense, but the resulting liquid water is then simply dropped from the model, so there are no co-existing phases and no inverse vaporisation phase transition. Yet, it is rather simple to introduce precipitable water in the model and to link it to the water vapour through bulk condensation and vaporisation. At the same time, the convective fluxes present in mcRSW can be associated with entrainment of precipitable water, and with its exchanges between the layers, adding more realism to the representation of moist convection. Below, we make these additions to the mcRSW model and thus obtain an "improved" mcRSW, which we call imcRSW. We illustrate the capabilities of the new model on the example of moist instabilities of hurricane-like vortices. Multi-layer modelling of tropical cyclones goes back to the pioneering paper [START_REF] Ooyama | Numerical simulation of the life cycle of tropical cyclones[END_REF], which had, however, a limited scope due to the constraint of axisymmetry. Strictly barotropic models were also used, e.g. [START_REF] Guinn | Hurricane spiral bands[END_REF], as well as shallow-water models with ad hoc parametrisations of latent heat release, e.g. [START_REF] Hendricks | Hurricane eyewall evolution in a forced shallow-water model[END_REF]. The imcRSW model is a logical development of such an approach. Derivation of the improved mcRSW Reminder on mcRSW and its derivation Let us recall the main ideas and the key points of the derivation of the 2-layer mcRSW model. The starting point is the system of "dry" primitive equations with pseudo-height as the vertical coordinate [START_REF] Hoskins | Atmospheric frontogenesis models: Mathematical formulation and solution[END_REF]. We recall that pseudo-height is the geopotential height for an atmosphere with an adiabatic lapse rate: $z = z_0\left[1 - (p/p_0)^{R/c_p}\right]$, where $z_0 = c_p \theta_0 / g$, and the subscript 0 indicates reference (sea-level) values. The horizontal momentum and continuity equations are vertically averaged between two pairs of material surfaces $z_0, z_1$ and $z_1, z_2$, where $z_0$ is at the ground and $z_2$ is at the top. The pseudo-height z being directly related to pressure, the lower boundary is a "free surface" and the upper boundary is considered to be at a fixed pressure ("rigid lid"). The mean-field approximation is then applied, consisting, technically, in replacing averages of products of dynamical variables by products of averages, which expresses the hypothesis of columnar motion. In the derivation of the "ordinary" RSW, the fact that the material surfaces $z_i$, i = 0, 1, 2, move, by definition, with the corresponding local vertical velocities $w_i$ makes it possible to eliminate the latter. The main assumption of the mcRSW model is that there exist additional convective fluxes across the $z_i$, such that

$w_0 = \dfrac{dz_0}{dt}, \qquad w_1 = \dfrac{dz_1}{dt} + W_1, \qquad w_2 = \dfrac{dz_2}{dt} + W_2,$  (1)

where $W_{1,2}$ are contributions from the extra fluxes, whatever their origin, cf. Figure 1.
The resulting continuity equations for the thicknesses of the layers, $h_2 = z_2 - z_1$ and $h_1 = z_1 - z_0$, are modified in a physically transparent way, acquiring additional source and sink terms:

$\partial_t h_1 + \nabla\cdot(h_1 \mathbf{v}_1) = -W_1, \qquad \partial_t h_2 + \nabla\cdot(h_2 \mathbf{v}_2) = +W_1 - W_2.$  (2)

The modified momentum equations contain terms of the form $W_i \mathbf{v}$ at the boundaries $z_i$ of the layers. An additional assumption is hence necessary in order to fix the value of the horizontal velocity at the interface. In the layered models the overall horizontal velocity has, by construction, the form

$\mathbf{v}(z) = \sum_{i=1}^{N} \mathbf{v}_i\, H(z_i - z)\, H(z - z_{i-1}),$

where H(z) is the Heaviside (step-) function. Assigning a value to the velocity at $z_i$ means assigning a value to the Heaviside function at zero, where it is not defined. This is a well-known modelling problem, and any value between zero and one can be chosen, depending on the physics of the underlying system. In the present case this choice would reflect the processes in an intermediate buffer layer interpolating between the main layers and replacing the sharp interface, if a vertically refined model were used. The "asymmetric" (non-centred) assignment H(0) = 1 was adopted in previous works. The "symmetric" (centred) assignment H(0) = 1/2 will be adopted below. This choice does not qualitatively affect the previous results obtained with mcRSW; however, it does affect the forcing terms in the conservation laws. It corresponds to a choice of the efficiency of momentum transport between the layers. In this way, the vertically averaged momentum equations become:

$\partial_t \mathbf{v}_1 + (\mathbf{v}_1\cdot\nabla)\mathbf{v}_1 + f \hat{\mathbf{k}}\times\mathbf{v}_1 = -\nabla\phi(z_1) + g\dfrac{\theta_1}{\theta_0}\nabla z_1 + \dfrac{\mathbf{v}_1 - \mathbf{v}_2}{2 h_1}\, W_1,$

$\partial_t \mathbf{v}_2 + (\mathbf{v}_2\cdot\nabla)\mathbf{v}_2 + f \hat{\mathbf{k}}\times\mathbf{v}_2 = -\nabla\phi(z_2) + g\dfrac{\theta_2}{\theta_0}\nabla z_2 + \dfrac{\mathbf{v}_1 - \mathbf{v}_2}{2 h_2}\, W_1 + \dfrac{\mathbf{v}_1}{2 h_2}\, W_2.$  (3)

Note that, whatever the assignment for the Heaviside function, the total momentum of the two-layer system, $(z_1 - z_0)\mathbf{v}_1 + (z_2 - z_1)\mathbf{v}_2$, is locally conserved (modulo the Coriolis force terms). In what follows, we will be assuming that $W_2 = 0$. The system is closed with the help of hydrostatic relations between geopotential and potential temperature, which are used to express the geopotential at the upper levels in terms of the lower-level one:

$\phi(z) = \begin{cases} \phi(z_0) + g\dfrac{\theta_1}{\theta_0}(z - z_0), & z_0 \le z \le z_1,\\[4pt] \phi(z_0) + g\dfrac{\theta_1}{\theta_0}(z_1 - z_0) + g\dfrac{\theta_2}{\theta_0}(z - z_1), & z_1 \le z \le z_2. \end{cases}$  (4)

The vertically integrated (bulk) humidity in each layer,

$Q_i = \displaystyle\int_{z_{i-1}}^{z_i} q\, dz, \quad i = 1, 2,$

where q(x, y, z, t) is the specific humidity, measures the total water vapour content of the air column, which is locally conserved in the absence of phase transitions. Condensation introduces a humidity sink:

$\partial_t Q_i + \nabla\cdot(Q_i \mathbf{v}_i) = -C_i, \quad i = 1, 2.$  (5)

In the regions of condensation ($C_i > 0$) the specific moisture is saturated, $q(z_i) = q^s(z_i)$, and the potential temperature $\theta(z_i) + (L/c_p)\, q^s(z_i)$ of an elementary air mass $W_i\, dt\, dx\, dy$, which is rising due to the latent heat release, is equal to the potential temperature of the upper layer $\theta_{i+1}$:

$\theta_{i+1} = \theta(z_i) + \dfrac{L}{c_p}\, q(z_i) \approx \theta_i + \dfrac{L}{c_p}\, q(z_i).$  (6)

If the background stratification, at constant $\theta(z_i)$ and constant $q(z_i)$, is stable, $\theta_{i+1} > \theta_i$, then by integrating the three-dimensional equation of moist-adiabatic processes,

$\dfrac{d}{dt}\left(\theta + \dfrac{L}{c_p}\, q\right) = 0,$  (7)

we get

$W_i = \beta_i C_i, \qquad \beta_i = \dfrac{L}{c_p(\theta_{i+1} - \theta_i)} \approx \dfrac{1}{q(z_i)} > 0.$  (8)

In this way the extra vertical fluxes in (3), (2) are linked to condensation. For the system to be closed, condensation should be connected to moisture.
This is done via the relaxation parametrisation, in which the moisture relaxes with a characteristic time $\tau_c$ towards the saturation value $Q^s$ if this threshold is crossed:

$C_i = \dfrac{Q_i - Q_i^s}{\tau_c}\, H(Q_i - Q_i^s).$  (9)

The essentially nonlinear, switch-like character of the condensation process is reflected in this parametrisation, which poses no problem for the finite-volume numerical scheme we are using below. For alternative, e.g. finite-difference, schemes a smoothing of the Heaviside function could be used. In what follows we consider the two-layer model assuming that the upper layer is dry and that, even with entrainment of water from the lower moist layer, the water vapour in this layer remains far from saturation, so that the convective flux $W_2$ is negligible. In this way we get the mcRSW equations for such a configuration:

$\begin{cases} \partial_t \mathbf{v}_1 + (\mathbf{v}_1\cdot\nabla)\mathbf{v}_1 + f\hat{\mathbf{k}}\times\mathbf{v}_1 = -g\nabla(h_1 + h_2) + \dfrac{\mathbf{v}_1 - \mathbf{v}_2}{2 h_1}\,\beta C,\\[4pt] \partial_t \mathbf{v}_2 + (\mathbf{v}_2\cdot\nabla)\mathbf{v}_2 + f\hat{\mathbf{k}}\times\mathbf{v}_2 = -g\nabla(h_1 + s h_2) + \dfrac{\mathbf{v}_1 - \mathbf{v}_2}{2 h_2}\,\beta C,\\[4pt] \partial_t h_1 + \nabla\cdot(h_1\mathbf{v}_1) = -\beta C,\\ \partial_t h_2 + \nabla\cdot(h_2\mathbf{v}_2) = +\beta C,\\ \partial_t Q + \nabla\cdot(Q\mathbf{v}_1) = -C, \qquad C = \dfrac{Q - Q^s}{\tau_c}\, H(Q - Q^s), \end{cases}$  (10)

where $s = \theta_2/\theta_1 > 1$ is the stratification parameter, $\mathbf{v}_1 = (u_1, v_1)$ and $\mathbf{v}_2 = (u_2, v_2)$ are the horizontal velocity fields in the lower and upper layer (counted from the bottom), with $u_i$ zonal and $v_i$ meridional components, and $h_1$, $h_2$ are the thicknesses of the layers; we will be considering the Coriolis parameter f to be constant. As in the previous studies with mcRSW, we will not develop sophisticated parametrisations of the boundary layer and of the fluxes across the lower boundary of the model. Such parametrisations exist in the literature [START_REF] Schecter | Hurricane formation in diabatic ekman turbulence[END_REF] and may be borrowed if necessary. We limit ourselves to the simplest version of the exchanges with the boundary layer, with a source of bulk moisture in the lower layer due to surface evaporation E. The moisture budget thus becomes:

$\partial_t Q + \nabla\cdot(Q\mathbf{v}_1) = E - C.$  (11)

The simplest parametrisations used in the literature are the relaxational one,

$E = \dfrac{\tilde{Q} - Q}{\tau_E}\, H(\tilde{Q} - Q),$  (12)

and the one where surface evaporation is proportional to the wind, which is plausible for the atmosphere over the oceanic surface:

$E \propto |\mathbf{v}|.$  (13)

The two can be combined, in order to prevent the evaporation due to the wind from continuing beyond saturation:

$E_s = \dfrac{\tilde{Q} - Q}{\tau_E}\, |\mathbf{v}|\, H(\tilde{Q} - Q).$  (14)

The typical evaporation relaxation time $\tau_E$ is about one day in the atmosphere, to be compared with $\tau_c$, which is about an hour. Thus $\tau_E \gg \tau_c$. $\tilde{Q}$ can be taken equal, or close, to $Q^s$, as we are doing, but not necessarily, as it represents complex processes in the boundary layer and can be, in turn, parametrised. Improving the mcRSW model An obvious shortcoming of the mcRSW model presented above is that, although it includes condensation and the related convective fluxes, the condensed water vapour disappears from the model. In this sense, condensation is equivalent to precipitation in the model. Yet, as is well known, condensed water remains in the atmosphere in the form of clouds, and precipitation is switched on only when the water droplets reach a critical size. It is easy to include precipitable water in the model in the form of another advected quantity, with a source due to condensation and a sink due to vaporisation, the latter process having been neglected in the simplest version of mcRSW.
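As an aside on implementation, the switch-type relaxation laws (9) and (12)-(14), which will also serve as templates for the vaporisation and precipitation sinks introduced below, can be coded either with an exact Heaviside switch or with a smoothed switch of the kind mentioned above for finite-difference schemes. The short sketch below illustrates this; the function names, the smoothing width and the numerical values are illustrative assumptions, not part of the original model.

```python
import numpy as np

def switch(x, eps=0.0):
    """Heaviside switch H(x); eps > 0 gives a smoothed variant suitable for finite-difference schemes."""
    if eps == 0.0:
        return np.where(x > 0.0, 1.0, 0.0)
    return 0.5 * (1.0 + np.tanh(x / eps))

def condensation(Q, Qs, tau_c, eps=0.0):
    """Relaxation parametrisation of condensation, eq. (9)."""
    return (Q - Qs) / tau_c * switch(Q - Qs, eps)

def evaporation(Q, Q_tilde, tau_E, wind_speed=None, eps=0.0):
    """Surface evaporation: relaxational form (12), or wind-dependent form (14) if wind_speed is given."""
    base = (Q_tilde - Q) / tau_E * switch(Q_tilde - Q, eps)
    return base if wind_speed is None else base * wind_speed

# Illustrative, non-dimensional values:
Q, Qs = 0.78, 0.75
print(condensation(Q, Qs, tau_c=0.05),
      evaporation(0.6, Qs, tau_E=10.0, wind_speed=0.3))
```

The smoothed variant trades the sharp on/off behaviour for differentiability; for the finite-volume scheme used here the exact switch poses no difficulty, as stated above.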
We thus introduce a bulk amount of precipitable water, W(x, y, t), in the air column of a given layer. It obeys the following equation in each layer:

$\partial_t W + \nabla\cdot(W \mathbf{v}) = +C - V,$  (15)

where V denotes vaporisation. Vaporisation can be parametrised similarly to condensation:

$V = \dfrac{Q^s - Q}{\tau_v}\, H(Q^s - Q).$  (16)

Opposite to condensation, vaporisation engenders cooling, and hence a downward convective flux, which can be related to the background stratification along the same lines as the upward flux due to condensation:

$W_v = -\beta^* V, \qquad \beta^* = \dfrac{L^*}{C_v(\theta_2 - \theta_1)},$  (17)

where $L^*$ is the latent heat absorption coefficient and $C_v$ is the specific heat of vaporisation. $\beta^*$ is an order of magnitude smaller than $\beta$. There is still no precipitation sink in (15). Such a sink can be introduced, again as a relaxation with a relaxation time $\tau_p$, conditioned by some critical bulk amount of precipitable water in the column:

$P = \dfrac{W - W_{cr}}{\tau_p}\, H(W - W_{cr}).$  (18)

The extra fluxes (17) due to cooling give rise to extra terms in the mass and momentum equations of the model in each layer. Another important phenomenon, which is absent in the simplest version of mcRSW, is the entrainment of liquid water by updrafts. This process can be modelled in a simple way as a sink in the lower-layer precipitable water equation, proportional, with some coefficient γ, to the updraft flux, and hence to condensation, together with a corresponding source of precipitable water in the upper layer. Including the above-described modifications in the mcRSW model, and neglecting for simplicity 1) condensation and precipitation in the upper layer, by supposing that it remains far from saturation, and 2) vaporisation in the lower layer, which is supposed to be close to saturation, we get the following system of equations:

$\begin{cases} \dfrac{d_1\mathbf{v}_1}{dt} + f\hat{\mathbf{z}}\times\mathbf{v}_1 = -g\nabla(h_1 + h_2) + \dfrac{\beta C - \beta^* V}{h_1}\,\dfrac{\mathbf{v}_1 - \mathbf{v}_2}{2},\\[4pt] \dfrac{d_2\mathbf{v}_2}{dt} + f\hat{\mathbf{z}}\times\mathbf{v}_2 = -g\nabla(h_1 + s h_2) + \dfrac{\beta C - \beta^* V}{h_2}\,\dfrac{\mathbf{v}_1 - \mathbf{v}_2}{2},\\[4pt] \partial_t h_1 + \nabla\cdot(h_1\mathbf{v}_1) = -\beta C + \beta^* V,\\ \partial_t h_2 + \nabla\cdot(h_2\mathbf{v}_2) = +\beta C - \beta^* V,\\ \partial_t W_1 + \nabla\cdot(W_1\mathbf{v}_1) = +(1-\gamma)\, C - P,\\ \partial_t W_2 + \nabla\cdot(W_2\mathbf{v}_2) = +\gamma\, C - V,\\ \partial_t Q_1 + \nabla\cdot(Q_1\mathbf{v}_1) = -C + E,\\ \partial_t Q_2 + \nabla\cdot(Q_2\mathbf{v}_2) = V, \end{cases}$  (19)

where $d_i(\ldots)/dt = \partial_t(\ldots) + (\mathbf{v}_i\cdot\nabla)(\ldots)$, i = 1, 2. Here C is the condensation in the lower layer, considered to be close to saturation, $W_i$ is the bulk amount of precipitable water and $Q_i$ the bulk humidity in each layer, γ is the entrainment coefficient, and V is the vaporisation in the upper layer, considered as mostly dry. C, V, and P obey (9), (16), (18), respectively. Note that if the above-formulated hypotheses of a mostly dry upper layer and an almost saturated lower layer are relaxed (or become inconsistent during simulations), the missing condensation, precipitation, and vaporisation in the corresponding layers can easily be restituted according to the same rules. Conservation laws in the improved mcRSW model As was already said, the total momentum of the system is locally conserved in the absence of the Coriolis force (f → 0), as can be seen by adding the equations for the momentum density in the layers:

$(\partial_t + \mathbf{v}_1\cdot\nabla)(h_1\mathbf{v}_1) + h_1\mathbf{v}_1\,\nabla\cdot\mathbf{v}_1 + f\hat{\mathbf{z}}\times(h_1\mathbf{v}_1) = -g\nabla\dfrac{h_1^2}{2} - g h_1\nabla h_2 - \dfrac{\mathbf{v}_1 + \mathbf{v}_2}{2}\,(\beta C - \beta^* V),$  (20a)

$(\partial_t + \mathbf{v}_2\cdot\nabla)(h_2\mathbf{v}_2) + h_2\mathbf{v}_2\,\nabla\cdot\mathbf{v}_2 + f\hat{\mathbf{z}}\times(h_2\mathbf{v}_2) = -g s\nabla\dfrac{h_2^2}{2} - g h_2\nabla h_1 + \dfrac{\mathbf{v}_1 + \mathbf{v}_2}{2}\,(\beta C - \beta^* V).$  (20b)

The last term in each equation corresponds to a Rayleigh drag produced by vertical momentum exchanges due to the convective fluxes.
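To make the bookkeeping of the source terms in (19)-(20) explicit, the following minimal sketch evaluates, for a single column, the thermodynamic sources and the convective momentum exchange, and checks that the latter cancels in the sum of the layer momentum budgets, consistent with the local conservation of total momentum stated above. All numerical values, function names and the use of a single saturation value for both layers are illustrative assumptions, not the settings of the simulations reported below.

```python
def heaviside(x):
    return 1.0 if x > 0.0 else 0.0

def column_sources(h1, h2, v1, v2, Q1, Q2, W1, W2,
                   Qs=0.75, Q_tilde=0.75, Wcr=0.01,
                   tau_c=0.05, tau_E=10.0, tau_v=0.5, tau_p=0.03,
                   beta=1.3, beta_star=0.13, gamma=0.3):
    """Thermodynamic sources of system (19) and the convective momentum exchange of (20),
    for one column; advection and pressure-gradient terms are omitted."""
    C = (Q1 - Qs) / tau_c * heaviside(Q1 - Qs)                      # condensation, eq. (9)
    E = (Q_tilde - Q1) / tau_E * abs(v1) * heaviside(Q_tilde - Q1)  # evaporation, eq. (14)
    V = (Qs - Q2) / tau_v * heaviside(Qs - Q2)                      # vaporisation, eq. (16)
    P = (W1 - Wcr) / tau_p * heaviside(W1 - Wcr)                    # precipitation, eq. (18)
    flux = beta * C - beta_star * V                                 # net upward convective mass flux
    drag1 = -(v1 + v2) / 2.0 * flux                                 # momentum source in (20a)
    drag2 = +(v1 + v2) / 2.0 * flux                                 # momentum source in (20b)
    return {"dh1": -flux, "dh2": +flux,
            "dW1": (1.0 - gamma) * C - P, "dW2": gamma * C - V,
            "dQ1": -C + E, "dQ2": V,
            "momentum_exchange_sum": drag1 + drag2}                 # cancels exactly

out = column_sources(h1=0.7, h2=0.3, v1=0.3, v2=0.1, Q1=0.78, Q2=0.2, W1=0.02, W2=0.0)
print(out["dh1"], out["momentum_exchange_sum"])
```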
The total mass (thickness) $h = h_1 + h_2$ is also conserved, while the mass in each layer, $h_{1,2}$, is not. However, we can construct a moist enthalpy in the lower layer,

$m_1 = h_1 - \beta Q_1 - \beta^* W_2,$  (21)

which is locally conserved:

$\partial_t m_1 + \nabla\cdot(m_1 \mathbf{v}_1) = 0.$  (22)

The inclusion of the precipitable water of the upper layer in (21) is necessary to compensate the downward mass flux due to vaporisation. The dry energy of the system, $E = \int dx\, dy\, (e_1 + e_2)$, is conserved in the absence of diabatic effects, where the energy densities of the layers are

$e_1 = \dfrac{h_1 \mathbf{v}_1^2}{2} + g\dfrac{h_1^2}{2}, \qquad e_2 = \dfrac{h_2 \mathbf{v}_2^2}{2} + g h_1 h_2 + s g\dfrac{h_2^2}{2}.$

In the presence of condensation and vaporisation the energy budget changes, and the total energy density $e = e_1 + e_2$ is not locally conserved, acquiring a sink/source term:

$\partial_t e = -\nabla\cdot\mathbf{f}_e - (\beta C - \beta^* V)\, g (1 - s)\, h_2,$  (23)

where $\mathbf{f}_e$ is the standard energy density flux in the two-layer model. For the total energy $E = \int dx\, dy\; e$ of the closed system we thus get

$\partial_t E = g(s - 1)\int dx\, dy\; (\beta C - \beta^* V)\, h_2.$  (24)

For stable stratifications, s > 1, the r.h.s. of this equation represents an increase (decrease) of potential energy due to upward (downward) convective fluxes produced by condensation heating (vaporisation cooling). Note that with the "asymmetric" assignment of the Heaviside function at zero, an extra term corresponding to a kinetic energy loss due to the Rayleigh drag would appear in the energy budget, cf. [START_REF] Lambaerts | Simplified two-layer models of precipitating atmosphere and their properties[END_REF]. Potential vorticity (PV) is an important characteristic of the flow. In the presence of diabatic effects it ceases to be a Lagrangian invariant, and evolves in each layer as follows:

$\dfrac{d_1}{dt}\left(\dfrac{\zeta_1 + f}{h_1}\right) = \dfrac{\zeta_1 + f}{h_1}\,\dfrac{\beta C - \beta^* V}{h_1} + \dfrac{\hat{\mathbf{z}}}{h_1}\cdot\nabla\times\left[\dfrac{\mathbf{v}_1 - \mathbf{v}_2}{2}\,\dfrac{\beta C - \beta^* V}{h_1}\right],$  (25a)

$\dfrac{d_2}{dt}\left(\dfrac{\zeta_2 + f}{h_2}\right) = -\dfrac{\zeta_2 + f}{h_2}\,\dfrac{\beta C - \beta^* V}{h_2} + \dfrac{\hat{\mathbf{z}}}{h_2}\cdot\nabla\times\left[\dfrac{\mathbf{v}_1 - \mathbf{v}_2}{2}\,\dfrac{\beta C - \beta^* V}{h_2}\right],$  (25b)

where $\zeta_i = \hat{\mathbf{z}}\cdot(\nabla\times\mathbf{v}_i) = \partial_x v_i - \partial_y u_i$ (i = 1, 2) is the relative vorticity and $q_i = (\zeta_i + f)/h_i$ is the potential vorticity in each layer. One can construct a moist counterpart of the potential vorticity in the lower layer with the help of the moist enthalpy (21), cf. Lambaerts et al. (2011):

$q_{1m} = \dfrac{\zeta_1 + f}{m_1}.$  (26)

The moist PV is conserved in the lower layer, modulo the Rayleigh drag effects:

$\dfrac{d_1}{dt}\left(\dfrac{\zeta_1 + f}{m_1}\right) = +\,\hat{\mathbf{z}}\cdot\nabla\times\left[\dfrac{\mathbf{v}_1 - \mathbf{v}_2}{2}\,\dfrac{\beta C - \beta^* V}{m_1^2}\right].$  (27)

Note that the "asymmetric" assignment of the value of the step function, which was discussed above, renders the moist PV in the lower layer exactly conserved. 3. Illustration: application of the improved mcRSW model to moist instabilities of hurricane-like vortices 3.1. Motivations We will illustrate the capabilities of the improved mcRSW, the imcRSW, on the example of moist instabilities of hurricane-like vortices. The mcRSW model, in its simplest one-layer version, captures well the salient properties of moist instabilities of such vortices, and clearly displays an important role of moisture in their development [START_REF] Lahaye | Understanding instabilities of tropical cyclones and their evolution with a moist-convective rotating shallow-water model[END_REF]. Below we extend the analysis of [START_REF] Lahaye | Understanding instabilities of tropical cyclones and their evolution with a moist-convective rotating shallow-water model[END_REF] to baroclinic tropical cyclones (TC), and use the imcRSW to check the role of the new phenomena included in the model.
Some questions which remained unanswered will be addressed, as well as new ones that become possible to answer with the improved version of the model. In particular, we will investigate the influence of the size of the TC (the radius of maximum wind) upon the structure of the most unstable mode, the role of vertical shear, and the evolution of inner and outer cloud bands at the nonlinear stage of the instability. Fitting the velocity and vorticity distributions of hurricanes We begin by building velocity and vorticity profiles of a typical TC within the two-layer model. An analytic form of the velocity profile is convenient both for the linear stability analysis and for the initialisation of the numerical simulations, so we construct a simple analytic fit with a minimal number of parameters:

$V_i(r) = \begin{cases} \epsilon_i\,\dfrac{(r - r_0)^{\alpha_i}\, e^{-m_i (r - r_0)^{\beta_i}}}{\max\left[(r - r_0)^{\alpha_i}\, e^{-m_i (r - r_0)^{\beta_i}}\right]}, & r \ge r_0,\\[6pt] m_0\, r, & r \ll r_0. \end{cases}$  (28)

Here i = 1, 2 indicates the lower and upper layer, respectively, r is the non-dimensional distance from the centre, $\epsilon_i$ measures the intensity of the velocity field, $r_0$ sets the non-dimensional distance of the maximum wind from the centre, and the other parameters allow one to fit the shape of the distribution. A cubic Hermite interpolation across $r = r_0$ is made to prevent a discontinuity in vorticity. Here and below we use a simple scaling in which distances are measured in units of the barotropic deformation radius $R_d = \sqrt{gH}/f$, and velocities are measured in units of $\sqrt{gH}$, where H is the total thickness of the atmospheric column at rest. (Hence, the parameter $\epsilon$ acquires the meaning of a Froude number.) Under this scaling the Rossby number of the vortex is proportional to the inverse of the non-dimensional radius of maximum wind (RMW). A useful property of this parametrisation is the possibility to tune the ascending or descending trends of the wind near and far from the velocity peak. The velocity is normalised in such a way that the maximum velocity is equal to $\epsilon_i$. We suppose that the velocity profile (28) corresponds to a stationary solution of the "dry" equations of the model. Such solutions obey the cyclo-geostrophic balance in each layer:

$\dfrac{V_1^2}{r} + f V_1 = g\,\dfrac{\partial}{\partial r}\,(H_1 + H_2),$  (29a)

$\dfrac{V_2^2}{r} + f V_2 = g\,\dfrac{\partial}{\partial r}\,(H_1 + \alpha H_2),$  (29b)

so the related $H_i(r)$ are obtained by integrating these equations using (28). The radial distribution of the relative vorticity in the vortex is given by $(1/r)\, d\,[r V(r)]/dr$. It should be emphasised that the radial gradient of the PV corresponding to the profile (28) has a sign reversal, and hence an instability of the vortex is expected. Typical velocity and vorticity fields of an intense (category 3) vortex are presented in Figure 2. In what follows, we study the instabilities of thus constructed vortices and their nonlinear saturation. The strategy will be the same as in [START_REF] Lahaye | Understanding instabilities of tropical cyclones and their evolution with a moist-convective rotating shallow-water model[END_REF]: namely, we identify the unstable modes of the vortex by performing a detailed linear stability analysis of the "dry" adiabatic system, with switched-off condensation and vaporisation, and then use the unstable modes to initialise numerical simulations of the nonlinear saturation of the instability, by superimposing them, with small amplitude, onto the background vortex.
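Before turning to the results, a sketch of the vortex initialisation just described may be helpful: the outer branch of the wind profile (28) and a radial integration of a cyclo-geostrophic balance of the type (29) for the corresponding thickness field. The parameter values, function names and the single-balance simplification (the two-layer case couples (29a)-(29b) into a linear system for dH1/dr and dH2/dr at each radius) are illustrative assumptions, not the configurations of Table 1.

```python
import numpy as np

def azimuthal_velocity(r, eps, alpha, m, beta_exp, m0, r0):
    """Analytic wind profile of eq. (28), without the cubic Hermite blending near r = r0."""
    x = np.maximum(r - r0, 0.0)
    shape = x**alpha * np.exp(-m * x**beta_exp)
    outer = eps * shape / shape.max()          # normalised so that max(V) = eps
    return np.where(r >= r0, outer, m0 * r)    # near-core branch m0*r

def thickness_from_balance(r, V, f=1.0, g=1.0, H_inf=1.0):
    """Integrate g dH/dr = V^2/r + f V inward from the outer boundary (a (29)-type balance)."""
    dHdr = (V**2 / np.maximum(r, 1e-12) + f * V) / g
    seg = 0.5 * (dHdr[1:] + dHdr[:-1]) * np.diff(r)          # trapezoidal segments
    tail = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])  # integral from r to r_max
    return H_inf - tail                                      # H(r_max) = H_inf

# Illustrative non-dimensional parameters:
r = np.linspace(1e-3, 10.0, 2000)
V1 = azimuthal_velocity(r, eps=0.3, alpha=1.0, m=0.3, beta_exp=1.0, m0=6.0, r0=0.05)
H = thickness_from_balance(r, V1)
print(V1.max(), H[0], H[-1])
```

The depression of the thickness field at the centre, H(0) < H at infinity, reflects the low-pressure core of the cyclone implied by the balance.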
We will give below the results of numerical simulations of developing instabilities for three typical configurations which are presented in Table 1: weak barotropic (BTW) and baroclinic (BCW), and strong baroclinic (BCS) vortices. Results of the linear stability analysis: the most unstable mode and its dependence on the radius of maximal wind By applying the standard linearisation procedure and considering small perturbations to the axisymmetric background flow, we determine eigenfrequencies and eigenmodes of the "dry" system linearised about the background vortex. The technicalities of such c 2017 Royal Meteorological Society Prepared using qjrms4.cls analysis are the same as in [START_REF] Lahaye | Centrifugal, barotropic and baroclinic instabilities of isolated ageostrophic anticyclones in the two-layer rotating shallow water model and their nonlinear saturation[END_REF] extended to the two-layer configuration as in Rostami and Zeitlin (2017), and we pass directly to the results, which we will not give in detail either, limiting ourselves by what is necessary for numerical simulations in the next section. The most unstable mode with azimuthal wavenumber l = 3 of the BCS vortex is presented in Fig. 3. The unstable mode of Figure 3 Pressure is clearly of mixed Rossby -inertia gravity wave type. Such unstable modes of hurricane-like vortices are well documented in literature, both in shallow-water models [START_REF] Zhong | An eigenfrequency analysis of mixed rossby-gravity waves on barotropic vortices[END_REF] and full 3D models [START_REF] Menelau | On the relative contribution of inertia-gravity wave radiation to asymmetric intsabilities in tropical cyclone-like vortices[END_REF]). It should be stressed that at Rossby numbers which are about 40 and small Burger number, as can be inferred from the right panel of Fig. 2, the vortex Rossby wave part of the unstable mode, which is known since the work [START_REF] Montgomery | A theory for vortex Rossby-waves and its application to spiral bands and intensity changes in hurricanes[END_REF] and is clearly seen in the upper panels of Fig. 3, is inevitably coupled to inertia-gravity wave field through Lighthill radiation mechanism, cf. [START_REF] Zeitlin | Decoupling of balanced and unbalanced motions and inertia -gravity wave emission: Small versus large rossby numbers[END_REF]. The most unstable modes of BCW and BTW vortices have wavenumber l = 4. With our scaling the strength of the vortex is inversely proportional to its non-dimensional RM W , and thus the structure of the most unstable mode depends on RM W . Yet, as follows from Figure 4, the mode l = 4 is dominant through the wide range of RM W . In general, higher values of RM W correspond to higher azimuthal wavenumbers and lower growth rates. Nonlinear evolution of the instability We now use the unstable modes identified by the linear stability analysis to initialise numerical simulation of nonlinear evolution of the instability. We superimpose the unstable modes with weak amplitude (several per cent with respect to the background values) onto the vortex and trace the evolution of the system, as follows from numerical simulations with finite-volume well-balanced scheme developed for moist-convective RSW model [START_REF] Bouchut | Fronts and nonlinear waves in a simplified shallow-water model of the atmosphere with moisture and convection[END_REF]). 
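The workflow just described, namely identifying the most unstable azimuthal mode of the "dry" vortex and then seeding the nonlinear run with it at small amplitude, can be sketched generically. In the Python snippet below the assembly of the discretised linearised operator is deliberately left as a user-supplied function: assemble_linear_operator is a placeholder of ours, not a routine of the authors' code, and only the eigenvalue bookkeeping and the superposition step are shown.

import numpy as np

def most_unstable_mode(assemble_linear_operator, wavenumbers):
    # For perturbations ~ exp(i*l*theta), assemble_linear_operator(l) is assumed to return
    # the matrix A of the linearised "dry" equations, d/dt X = A X, on a radial grid.
    best = None
    for l in wavenumbers:
        vals, vecs = np.linalg.eig(assemble_linear_operator(l))
        i = np.argmax(vals.real)                  # largest growth rate for this l
        if best is None or vals[i].real > best[1]:
            best = (l, vals[i].real, vecs[:, i])  # (wavenumber, growth rate, eigenvector)
    return best

def seed_initial_condition(background, mode_fields, amplitude=0.03):
    # Superimpose the unstable mode onto the balanced vortex with a few per cent amplitude,
    # as done before launching the nonlinear simulations.
    return {name: background[name] + amplitude * np.real(mode_fields[name])
            for name in background}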
Numerical simulations with each of the vortex configurations of Table 1 were performed both in "dry" (M), with diabatic effects switched off, and moist-convective (MCEV) environments. The values of parameters controlling condensation, evaporation, vaporisation, and precipitation in MCEV environment are given in Table 2. The values of parameters controlling condensation: τc, Q s , stratification: s = θ 2 /θ 1 , evaporation: τ E , vaporisation: τv, and precipitation: Wcr, τp. ∆t is the time-step of the code. τc Q s Q s τ E τv Wcr τp γ 5∆t 0.75 ≈ Q s 1.5 200∆t ≈1 day 10τc 0.01 3∆t 0.3 content. We present below some outputs of the simulations, illustrating different aspects of moist vs "dry" evolution, and difference in behaviour of baroclinic and barotropic vortices. Evolution of potential vorticity We start by evolution of the PV field of the weak cyclone, as it is understandably slower than the evolution of the strong one, and different stages can be clearly distinguished. The evolution of potential vorticity in both layers during nonlinear saturation of the instability of the BCW vortex in MCEV environment is presented in Fig. 5,6. The simulations show formation of a transient polygonal pattern inside the RM W at initial stages, with a symmetry of the most unstable linear mode. The patterns of this kind are frequently observed [START_REF] Muramatsu | The structure of polygonal eye of a typhoon[END_REF][START_REF] Lewis | Polygonal eye walls and rainbands in hurricanes[END_REF][START_REF] Muramatsu | The structure of polygonal eye of a typhoon[END_REF][START_REF] Kuo | A possible mechanism for the eye rotation of typhoon herb[END_REF]). The polygon is further transformed into an annulus of high PV. Such annuli of elevated vorticity (the so-called hollow PV towers (Hendricks and Schubert 2010) ) are found in both moist-convective and dry cases. It is worth mentioning that the growth of the primary unstable mode is accompanied by enhancement of outer gravity-wave field, as follows from the divergence field presented in Fig. 7. As follows from Figs. 5, 6 the polygon loses its shape at t ≈ 17. At this time the modes with azimuthal wavenumbers l = 1, 2 are being produced by nonlinear interactions, and start to grow and interact with the polygonal eye-wall, which leads to symmetry loss by the core. A secondary, dipolar instability of the core thus develops, and gives rise to formation of an elliptical central vortex, corresponding to azimuthal mode l = 1, and of a pair of satellite vortices indicating the presence of l = 2 mode. The interaction of initial l = 4 mode with emerging l = 1 and l = 2 modes is accompanied by inertia-gravity wave (IGW) emission, and enhancement of water vapour condensation that will be discussed below. It should be emphasised that interaction between l = 2 mode and elliptical eye, of the kind we observe in simulations, was described in TC literature, e.g. [START_REF] Kuo | A possible mechanism for the eye rotation of typhoon herb[END_REF], where reflectivity data from a Doppler radar were used to hypothesise that it was due to azimuthal propagation of l = 2 vortex Rossby waves around the eye-wall. Further nonlinear evolution consists in breakdown of the central ellipse with subsequent axisymmetrisation of the PV field, and its intensification at the center. This process characterises the evolution of both BTW (not shown) and BCW vortices, but is more efficient in the baroclinic case, as follows from Fig. 8. 
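Two diagnostics used below to quantify this evolution are the azimuthally averaged wind in the core and the palinstrophy defined in Eq. (30) of the next passage. A minimal Python/numpy sketch of both is given here for convenience; the centred-difference gradients, the uniform grid and the function names are our own assumptions, not a description of the authors' code.

import numpy as np

def palinstrophy(zeta, dx, dy):
    # P(t) = 0.5 * \int |grad(zeta)|^2 dxdy (Eq. (30) below), centred differences
    dzeta_dy, dzeta_dx = np.gradient(zeta, dy, dx)   # axis 0 = y, axis 1 = x
    return 0.5 * float(np.sum(dzeta_dx**2 + dzeta_dy**2)) * dx * dy

def core_mean_azimuthal_velocity(u, v, x, y, r_max):
    # Mean tangential wind inside r < r_max (e.g. 0.5*RMW) around the domain centre
    X, Y = np.meshgrid(x - x.mean(), y - y.mean())
    r = np.hypot(X, Y)
    v_theta = (-Y * u + X * v) / np.maximum(r, 1e-12)
    mask = r < r_max
    return float(v_theta[mask].mean())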
As seen in the Figure, the azimuthal velocity in the core region with r < 0.5RM W is subject to strong intensification. The exchanges of PV between the eye-wall and the eye, and intensification are known from the barotropic simulations [START_REF] Montgomery | A theory for vortex Rossby-waves and its application to spiral bands and intensity changes in hurricanes[END_REF][START_REF] Schubert | Polygonal eyewalls, asymmetric eye contraction, and potential vorticity mixing in hurricanes[END_REF][START_REF] Lahaye | Understanding instabilities of tropical cyclones and their evolution with a moist-convective rotating shallow-water model[END_REF]). As we see in Fig. 8, the intensification is enhanced by baroclinicity of the background vortex. This is confirmed by Fig. 9, and by Fig. 10, which illustrate the enhancement effect of both moist convection and baroclinicity upon palinstrophy, which is defined as P(t) = 1 2 ∇ζ.∇ζdxdy, (30) in each layer, and which diagnoses the overall intensity of vorticity gradients. It is worth emphasising that, because of higher vorticity and smaller RMW, the axisymmetric steady state is achieved in the lower layer more rapidly than in the upper one in the case of baroclinic vortices. In the case of intense vortex, nonlinear evolution of the instability follows similar scenario, but is considerably accelerated, as follows from Fig. 11. Spiral cloud bands Tropical cyclones exhibit specific cloud patterns. The new version of the model gives a possibility to follow clouds and precipitation, and it is interesting to test its capability to produce realistic cloud patterns. There are two types of cloud and rain bands associated with tropical cyclones, as reported in literature: the inner bands, that are situated close to the vortex core, within ≈ 2 RM W , and the outer spiral bands located farther from the centre and having larger horizontal scales [START_REF] Guinn | Hurricane spiral bands[END_REF][START_REF] Wang | How do outer spiral rainbands affect tropical cyclone structure and intensity[END_REF]. Fig. 12 shows formation of inner and outer cloud bands, the latter having characteristic spiral form, during nonlinear evolution of the instability. Spiral cloud bands are related to inertia-gravity "tail" of the developing unstable mode. The link of spiral bands to inertia-gravity waves in "dry" RSW model of hurricanes was discussed in literature [START_REF] Zhong | A theory for mixed rossby-gravity waves in tropical cyclones[END_REF]). Here we see it in "cloud-resolving" imcRSW. It is to be stressed that amount of clouds strongly depends on the initial water vapour content. If it is closer to the saturation value, the amount of clouds and precipitation obviously increases and eventually covers the whole vortex. Conclusions and discussion Thus, we have shown that the moist-convective rotating shallow water model "augmented" by adding precipitable water, and relaxational parametrisations of related processes of vaporisation, precipitation, together with entrainment, is capable to capture some salient features of the evolution of instabilities of hurricane-like vortices in moist-convective environment, and allows to analyse the importance of moist processes on the life-cycle of these instabilities. There exist extended literature on the dynamics of the hurricanes eyewall, with tentative explanations in terms of transient internal gravity waves, which form spiral bands, cf. 
[START_REF] Lewis | Polygonal eye walls and rainbands in hurricanes[END_REF]Hawkins (1982) Willoughby (1978), [START_REF] Kurihara | On the development of spiral bands in a tropical cyclone[END_REF], or alternative explanations [START_REF] Guinn | Hurricane spiral bands[END_REF] in terms of PV dynamics and vortex Rossby waves. Thus [START_REF] Schubert | Polygonal eyewalls, asymmetric eye contraction, and potential vorticity mixing in hurricanes[END_REF] obtained formation of polygonal eyewalls as a result of barotropic instability near the radius of maximum wind in a purely barotropic model, without gravity waves. A detailed analysis of instabilities of tropical cyclones was undertaken with a cloud-resolving model in [START_REF] Naylor | Evaluation of the impact of moist convectionon the developmentof asymmetric inner core instabilities in simulated tropical cyclones[END_REF], and showed that the results of [START_REF] Schubert | Polygonal eyewalls, asymmetric eye contraction, and potential vorticity mixing in hurricanes[END_REF] gave a useful first approximation for the eyewall instabilities. As was already mentioned in section 3.3, at high Rossby numbers the vortex Rossby wave motions are inevitably coupled to inertia-gravity waves, and our linear stability analysis confirms this fact. The mixed character of the wave perturbations of axisymmetric hurricane-like vortices was abundantly discussed in literature, e.g. [START_REF] Zhong | A theory for mixed rossby-gravity waves in tropical cyclones[END_REF]. A detailed analysis of instabilities of hurricane-like vortices in continuously stratified fluid was given recently by [START_REF] Menelau | On the relative contribution of inertia-gravity wave radiation to asymmetric intsabilities in tropical cyclone-like vortices[END_REF], where it was shown that the inertia-gravity part of the unstable modes intensifies with increasing Froude number. The vortex profiles used above in section 3.2 have moderate Froude numbers, and the corresponding unstable modes have weak inertia-gravity tails. They are, however, sufficient to generate spiral cloud patterns, as we showed. The development of the instability of the eyewall proper at early stages (up to ≈ 40f -1 ) is only weakly influenced by moist convection, in accordance with findings of [START_REF] Naylor | Evaluation of the impact of moist convectionon the developmentof asymmetric inner core instabilities in simulated tropical cyclones[END_REF]. This can be seen from comparison of the right and left panels of Fig. 9 and from Fig. 10. An advantage of our model, as compared to simple barotropic models, is its ability to capture both vorticity and inertia gravity waves Although we limited ourselves above by an application to tropical cyclones, the model can be used for analysis of various phenomena in mid-latitudes and tropics. The passage to the equatorial beta-plane is straightforward in the model, and it can be easily extended to the whole sphere. An important advantage of the model is that it allows for self-consistent inclusion of topography in the numerical scheme, giving a possibility to study a combination of moist and orographic effects. As was already mentioned, more realistic parametrisations of the boundary layer are available, and generalisations to three-layer versions are straightforward. c 2017 Royal Meteorological Society Prepared using qjrms4.cls Figure 1 . 1 Figure1. Notations for the simplified two-layer scheme with mass flux across material surfaces. 
From[START_REF] Lambaerts | Simplified two-layer models of precipitating atmosphere and their properties[END_REF], with permission of AIP. Figure 2 . 2 Figure2. Normalised radial structure of azimuthal tangential wind with a fixed slope close to the centre (left panel) and relative vorticity (right panel) in both layers corresponding to the BCS vortex in Table1. Figure 3 . 3 Figure 3. Upper row: Pressure and velocity fields in the x -y plane in the lower (left panel) and upper (right panel) layers corresponding to the most unstable mode with azimuthal wavenumber l = 3 of the BCS vortex of Table1. Lower row: left panel-corresponding divergence field of the most unstable mode, right panel -radial structure of three components of the most unstable mode: pressure anomaly η, and radial (v) and tangential (u) components of velocity ; dashed (solid) lines: imaginary (real) part, thick (thin) lines correspond to upper (lower) layer. Note that the domain in the lower left panel is ≈ ten times larger than that of the upper panels. Figure 4 . 4 Figure4. Dependence of the linear growth rates (in units of f -1 ) of the unstable modes with different azimuthal wavenumbers on the radius of maximum wind (RMW ). Figure 5 . 5 Figure5. Nonlinear evolution of the most unstable l = 4 mode superimposed onto the background BCW vortex in MCEV environment, as seen in potential vorticity field in the lower layer. Formation of meso-vortices (zones of enhanced PV in the vorticity ring) is clearly seen, giving way to axisymmetrisation and monotonisation of the PV profile. Time is measured in units of f -1 . Figure 6 .Figure 7 .Figure 8 . 678 Figure6. Same as in Fig.5, but for the upper layer. Figure 9 .Figure 10 .Figure 11 .Figure 12 . 9101112 Figure9. Effect of vertical shear on intensification of vorticity in the vortex core in environments without (M ) and with (MCEV ) moist convection and surface evaporation. The vorticity is normalised by its initial value. Table 1 . 1 Parameters of the background vortices. BCW(S): weak(strong) baroclinic, BTW: weak "barotropic", without vertical shear, l: the most unstable azimuthal mode conf ig. l 1 2 α 1 α 2 β 1 β 2 m 1 m 2 r 01 r 02 BCS 3 0.41 0.49 4.5 4.5 0.180 0.178 48 47.5 0.01 0.0101 BTW 4 0.4 0.40 2.25 2.25 0.25 0.25 14 14 0.1 0.1 BCW 4 0.4 0.36 2.25 2.25 0.25 0.237 14 12.6 0.1 0.115 observations were collected from Atlantic and eastern Pacific storms during 1977 -2001: It has the form which is consistent with Mallen et al. (2005) where flight-level c 2017 Royal Meteorological Society Prepared using qjrms4.cls Table 2 2 Prepared using qjrms4.cls . It must be stressed that amount of precipitable water in each layer is highly sensitive to the values of parameters, especially to the intensity of surface evaporation. Condensation and precipitation time scales are chosen to be short, just few time steps ∆t, while vaporisation and surface evaporation time-scales are much larger which is consistent with physical reality. Changing these parameters within the same order of magnitude does not lead to qualitative changes in the results. Wcr is an adjustable parameter that controls precipitation and γ controls entrainment of condensed water. The MCEV simulations were initialised with spatially uniform moisture c 2017 Royal Meteorological Society
41,363
[ "1030029", "772649" ]
[ "541698", "65198", "541698" ]
01755880
en
[ "phys" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01755880/file/DissipationDrivenTimeReversaloFWaves.pdf
Vincent Bacot Sander Wildeman Surabhi Kottigegollahalli Sreenivas Maxime Harazi Xiaoping Jia Arnaud Tourin Mathias Fink Emmanuel Fort email: emmanuel.fort@espci.fr Dissipation driven time reversal for waves Dissipation is usually associated with irreversibility. Here we present a counter-intuitive concept to perform wave time reversal using a dissipation impulse. A sudden and strong modification of the damping, localized in time, in the propagating medium generates a counter-propagating timereversed version of the initial wave. In the limit of a high dissipation shock, it amounts to a 'freezing' of the wave, where the initial wave field is retained while its time derivative is set to zero at the time of the impulse. The initial wave then splits into two waves with identical profiles, but with opposite time evolution. In contrast with other time reversal methods, the present technique produces an exact time reversal of the initial wave field, compatible with broad-band time reversal. Experiments performed with interacting magnets placed on a tunable air cushion give a proof of concept. Simulations show the ability to perform time reversal in 2D complex media. Dissipation in physical systems introduces a time-irreversibility by breaking the time symmetry of their dynamics [1] [2] [3] [4] [5]. Similarly damping in wave propagation deteriorates timereversal (TR) operations. However, for short times compared to the characteristic dissipation time, the dissipation-free wave-propagation model remains a good approximation of the wave dynamics [6]. Thus, increasing dissipation can generally be seen as effectively reducing the reversibility of the system. On the other hand, dissipation can be seen as the time reverse of gain. This is exemplified in the concept of coherent perfect absorbers (CPA), combining an interplay between interferences and absorption to produce the equivalent of a time-reversed laser [7] [8]. CPA addresses the TR of sources into sinks [9]. In this paper, we present a concept in which dissipation produces a time reversed wave. The generation process is similar to that employed in the Instantaneous Time Mirror (ITM) approach [10] in which a sudden change in the wave propagation speed in the entire medium produces a time reversed wave. Here, we consider instead a dynamical change of the dissipation coefficient. We will show that in this case also the production mechanism can be interpreted as a modification of the initial conditions characterizing the system at a given instant by virtue of Cauchy's theorem [START_REF] Hadamard | Lectures on Cauchy's Problem in Linear Partial Differential Equations[END_REF]. In the following we will refer to this new, dissipation based, TR concept as DTR, for Dissipation Time Reversal. DTR can, in principle, be implemented for any type of wave propagating in a (dissipative) medium. For example, for acoustic waves, through a modulation of the viscosity, and for EM waves, through a sudden change in conductivity in the medium. Because the source term involved is a first order time derivative, this new TR technique is able to create an exact TR wave, i.e. precisely proportional to the initial wave field. This results in a higher fidelity and enhanced broadband capabilities compared to other methods [10] [START_REF] Fink | [END_REF]. In the following, we demonstrate the validity of DTR concept performing a proof-of-concept 1D experiment using a chain of coupled magnets. 
In addition, we show the performance of DTR in a complex 2D highly scattering medium using computer simulations.

Theory of Dissipation driven Time Reversal (DTR)

Waves in a homogeneous non-dissipative medium are usually governed by the d'Alembert wave equation [START_REF] Crawford | Berkeley Physics Course[END_REF]. More generally, the wave fields can be described by an equation of the same structure, but in which the Laplacian operator is replaced by a more complex spatial operator. This is the case for instance when describing acoustic waves in a non-homogeneous medium or gravity-capillary waves at the surface of deep water. This type of equation can be written in a general manner in the spatial Fourier space for the wave vector \mathbf{k} [START_REF] Benjamin | [END_REF] [15]:

\frac{\partial^2 \phi}{\partial t^2}(\mathbf{k}, t) + \omega_k^2(\mathbf{k})\,\phi(\mathbf{k}, t) = 0, (1)

where \omega_k(\mathbf{k}) is the dispersion relation of the waves. The time reversal symmetry is a direct consequence of the second order of the time derivative: if \phi(\mathbf{k}, t) is a solution of the equation, \phi(\mathbf{k}, -t) obviously satisfies the same equation. The damping effect induced by dissipation is usually described by an additional first-order derivative term in this equation:

\frac{\partial^2 \phi}{\partial t^2}(\mathbf{k}, t) + \zeta(\mathbf{k}, t)\,\frac{\partial \phi}{\partial t}(\mathbf{k}, t) + \omega_k^2(\mathbf{k})\,\phi(\mathbf{k}, t) = 0, (2)

where \zeta(\mathbf{k}, t) is a time-dependent damping coefficient. This additional dissipation term breaks the time symmetry of the equation, which is precisely why dissipation is generally associated with irreversibility. For simplicity, we drop the k-dependence of the dissipation in the following notations. If \zeta(t) remains small compared to \omega_k, an approximate reversibility is retained for times smaller than 1/\zeta(t) [10]. In the following, we consider a medium where the damping coefficient is initially small or negligible, i.e. \zeta \approx 0. At a time t_0, the dissipation coefficient is set to a very high value over the entire medium, \zeta \gg \omega_k, and stays at this value until a later time t_1 = t_0 + \Delta t, where it is set back to its original value. \zeta(t) can thus be written as \zeta(t) = \zeta\,\Pi(t), where \Pi(t) is a unit rectangle function spanning from t_0 to t_1. An initial wave \phi_0, with Fourier transform \phi_0(\mathbf{k}, t), is originally present in the medium. During the dissipation impulse, the last term of equation (2) is negligible and one may write the approximate expressions for the wave field and its time derivative at times t after the onset of the damping impulse at time t_0 as:

\phi(t) = \phi_0(t_0) + \frac{1}{\zeta}\frac{\partial \phi_0}{\partial t}(t_0)\left(1 - e^{-\zeta(t - t_0)}\right), \qquad \frac{\partial \phi}{\partial t}(t) = \frac{\partial \phi_0}{\partial t}(t_0)\, e^{-\zeta(t - t_0)}. (3)

Taking \zeta towards infinity, we obtain that the field remains approximately constant, \phi(t) \approx \phi_0(t_0), during this dissipation impulse. The system behaves as an overdamped harmonic oscillator that returns very slowly to a steady equilibrium state without oscillating. In this regime, the characteristic oscillations corresponding to wave motion are stopped, and the time it takes for the system to relax increases with dissipation, so that in the high-dissipation limit we are considering, amplitude damping does not have time to occur. A more detailed calculation shows that in the long run the amplitude decreases as \exp\left(-\frac{\omega_k^2}{\zeta}(t - t_0)\right) (see supplemental material). For the strong dissipation limit to hold and for the wave amplitude to be retained, the duration \Delta t of the dissipation pulse should thus satisfy 1/\zeta < \Delta t < \zeta/\omega_k^2. If the damping is strong enough, the duration of the dissipation phase may be large compared to the period of the original wave. At time t_1
, when the dissipation ends, the wave field starts evolving again according to equation (2), with the initial conditions: 𝜙 𝑡 ! , !! !" 𝑡 ! = 𝜙 ! 𝑡 ! , 0 . (4) The DTR process can be interpreted as a change of the initial Cauchy conditions which characterizes the future evolution of the wave field from this initial time 𝑡 ! . As in the case of ITM [10], this state can be decomposed into two counter-propagative wave components using the superposition principle: 𝜙 𝑡 ! , !! !" 𝑡 ! = ! ! 𝜙 ! 𝑡 ! , !! ! !" 𝑡 ! + ! ! 𝜙 ! 𝑡 ! , - !! ! !" 𝑡 ! . ( 5 ) The first term is associated (up to a factor one half) to the exact state of the incident wave field before the DTR, it corresponds to the same wave shifted in time: 𝜙 ! 𝑡 = ! ! 𝜙 ! 𝑡 -𝑡 ! + 𝑡 ! . The second term is associated to a wave whose derivative has a minus sign. It corresponds to the time reversed wave: 𝜙 ! 𝑡 = ! ! 𝜙 ! 𝑡 ! + 𝑡 ! -𝑡 . Figure 1 shows the principle of DTR with a chirped pulse containing several frequencies. The pulse created at time 𝑡 = 0 propagates in a dispersive medium undergoing spreading (see figure 1a). At time 𝑡 !"# , a damping impulse is applied and the pulse is frozen (see figure 1b). The damping is then removed resulting in the creation of two counter-propagating pulses with half the amplitude of the initial wave field. The forward propagating pulse is identical to the initial propagating pulse as if no DTR was applied apart from the amplitude factor. The backward propagating pulse is the TR version of the initial pulse. It thus narrows as it propagates, reversing the dispersion, until at time 2𝑡 !"# it returns to the initial profile (with a factor of one half in the amplitude). 1D proof-of-concept experiment using a chain of magnets We have performed a 1D experiment using a chain of magnets as a proof-of-concept for DTR. Figure 2a shows the experimental set-up composed of a circular chain of 41 plastic disks. A small magnet oriented vertically with their North pole upward is glued on each 1 cm disk. The disks orientation being constrained, it induces a repulsive interaction between them. The disks are confined horizontally on a circle of 30 cm diameter by an underlying circle of fixed magnets with their North pole oriented upward. In addition, the friction of the disks on the table is drastically reduced by the use of an air cushion system. This allows waves to propagate in this coupled oscillator chain. A computer-controlled electromagnet is used to trigger the longitudinal waves in the chain. The change in dissipation is obtained by a sudden stop the air flow using the controlled valve which results in "freezing" the disk chain. Thus, in that case the damping coefficient 𝜁 can be considered infinite. For small amplitude oscillations, the chain can be modeled has a system of coupled harmonic oscillators with a rigidity constant κ which can be fitted from the dispersion curve measurements. When the air pressure is slightly increased, the disks are set in the random motion as shown in Figure 2b, which presents the displacements of the disks as a function of time in gray scale. It is possible to retrieve from this noise motion the dispersion curve of the chain by performing a space-time Fourier transform. The color scale is given in degree along the circle. One degree is approximately equivalent to 2.6 mm in length. Figure 2c shows the resulting experimental dispersion curve together with the fit of the harmonic model (dashed line). 
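To make the connection between Eqs. (2)-(5) and the magnet-chain experiment concrete, here is a minimal Python/numpy sketch of DTR in a ring of linearly coupled oscillators. It is not the code behind Fig. 3: the excitation, the disk mass and the time step are illustrative assumptions, and the air-cushion shut-off is idealised as an instantaneous zeroing of all velocities (the infinite-damping limit), which is exactly what the Cauchy data of equation (4) prescribe.

import numpy as np

def simulate_dtr_ring(n=41, kappa=0.13, mass=3e-3, dt=1e-3, t_freeze=1.0, t_end=3.0):
    # Ring of n disks coupled to nearest neighbours with effective stiffness kappa (N/m).
    x = np.zeros(n)                    # longitudinal displacements along the ring
    v = np.zeros(n)
    v[36] = 1.0e-2                     # impulsive excitation of disk #37 (the "source")
    history = []
    for step in range(int(t_end / dt)):
        t = step * dt
        if abs(t - t_freeze) < 0.5 * dt:
            v[:] = 0.0                 # dissipation impulse: kinetic energy removed,
                                       # displacement field kept (freezing of the chain)
        force = kappa * (np.roll(x, 1) - 2.0 * x + np.roll(x, -1))
        v += dt * force / mass         # semi-implicit (symplectic) Euler step
        x += dt * v
        history.append(x.copy())
    return np.array(history)           # rows = time, columns = disk index

# After t_freeze, two counter-propagating packets of roughly half amplitude appear; the
# backward one refocuses on disk #37 near t = 2*t_freeze, as in Fig. 3.
waves = simulate_dtr_ring()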
The good agreement of the fit confirms the validity of and harmonic interaction and permits to give a value for the rigidity constant of κ=0.13 Ν.m -1 . Figure 3a shows the propagation of an initially localized perturbation of the magnetic chain (the magnet #37 is acting as a source). A wave packet is launched in the magnetic chain. At time t 0 the chain is suddenly damped by turning off the air flow resulting in a freezing of the disks motion. At time t 1 the air flow is restored resulting in the creation of two counter propagating wave packets: i) one which resembles the initial wave packet as if no DTR had been applied apart from a decrease of the global amplitude by a factor of approximately 2; ii) a time reversed wave packet refocusing on the source. Figure 3b shows the disk displacements simulated using the harmonic interaction model with the fitted coupling constant. The initial magnet displacement is also taken from the experiment at time t init . This signal processing enables one to remove the forward propagating wave packet after the DTR. The resulting displacement pattern clearly shows the two counter propagating waves after the DTR impulse. Its similarity with the experimental data also shows the validity of the model. Simulations of DTR in 2D disordered medium We performed computer simulations for a 2D disordered system to show the robustness of TDR technique in complex media. The simulations are based on the mass-spring model of Harazi et al. [START_REF] Harazi | Multiple Scattering and Time Reversal of Ultrasound in Dry and Immersed Granular Media[END_REF] [17] introduced to simulate wave propagation in disordered stressed granular packings. The model consists of a two-dimensional percolated network of point particle with mass m, connected by linear springs of random stiffness as shown in the schematics in Figure 4a. The masses are able to move in the plane of the network and are randomly placed on a 70x70 square lattice with a filling factor of 91%. Each mass of the network is subjected to the Newton's equation: ! ! 𝒓 ! !! ! = 𝜔 !" ( 𝒓 !" -𝑎 ! ) 𝒓 !" 𝒓 !" !∈! ! , ( 6 ) where r i is the vector position of particle i, V i the set of neighboring particles connected to this particle, 𝜔 !" is the angular frequency and and 𝑎 ! the rest length of the spring, 𝒓 !" and 𝒓 !" are the vector between the two particles i and j and its norm respectively. The angular frequencies 𝜔 !" are uniformly distributed between 0.5 𝜔 !" and 1.5 𝜔 !" , where 𝜔 !! is the average angular frequency of the spring-mass systems. Before launching the wave, the network is submitted to a static stress by pulling the four walls of the domain (strain equal to 0.2) in order to ease the propagation of transverse waves. After this phase, the boundaries of the domain are fixed (zero displacement). The network is then excited by a horizontal displacement of one of the particle during a finite time. The profile of the excitation is given by 𝑢(𝑡) = W 𝑡 cos 𝜔 ! 𝑡 , where W 𝑡 is a temporal window restricting the oscillation to a single period and 𝜔 ! is the driving source pulsation chosen at 0.35 of the average angular frequency 𝜔 !" of the spring-mass systems. Figure 4b and4c show the evolution of the horizontal displacement with time of the source and the map of the displacements of the particles at various times respectively. At t=0, the displacement is confined to the source particle. Then, the displacement of the source decreases rapidly at a noise level of approximately one tenth of its initial value. 
The initial perturbation propagates and is strongly scattered in the inhomogeneous mass-spring network. At time 𝑡 = 95 𝑇 ! , where 𝑇 ! = 2𝜋/𝜔 ! is the excitation period, the perturbation is spread over the network and the particle displacements become randomly distributed. The network acts as a complex medium due to the random spring stiffnesses and the random vacancies in the squared network. The DTR freezing is applied at 𝑡 !"# = 300 𝑇 ! , all the particle velocities are set to zero, as if an infinite damping was applied instantaneously, keeping only the potential energy in the system. Right after this instant, the masses are released from their frozen positions with zero velocity (see third panel Fig. 4d), and evolve according to equation (1). After a complex motion of the particles, a coherent field appears refocusing back to the initial source around time 𝑡 = 2𝑡 !"# =600 𝑇 ! (see panel four Fig. 4d). The horizontal displacement of the source undergoes a very sharp increase reaching approximately 80% of its initial value (see Fig. 4b). Right after this time, the converging wave diverges again, yielding to a new complex displacement field (see the movie of the propagation in supplemental material). In addition to the spatial focusing on the initial source, a time refocusing is also observed showing a temporal shortening of the initial impulsion at time 𝑡 = 2𝑡 !"# (see Fig. 4c). The time width of the refocusing is approximately 6𝑇 ! (FWHM) i.e. twice the one of the source signal. Discussion The DTR process is associated with a decrease of the amplitude of the initial wave field. The kinetic energy associated to the time derivative of the field 𝜕𝜙 ! 𝜕𝑡 vanishes during the dissipation impulse leaving the potential energy, associated to the wave field 𝜙 ! unaffected. In the case of an initially propagating wave, the energy of the wave is equally partitioned between potential and kinetic energy. Thus, half of the initial energy is lost in the DTR process resulting in a quarter of the initial energy being TR while an other quarter is retained in the initial wave. It is interesting to note that, for standing waves, the effect of the DTR depends on its relative time phase relative to the impulse since wave energy alternates between kinetic and potential. The Cauchy analysis in terms of initial conditions to determine the wave field evolution enable one to make a link with Loschmidt's Gedankenexperiment for particle evolution. Loschmidt imagined a deamon capable of instantaneously reversing the velocity of the particles of a gas while keeping their position unaffected and thus time reversing the gas evolution [18] [19]. Although this scheme is impossible in the case of particles due to the extreme sensitivity to initial conditions, it is more amenable for waves because they can often be described with a linear operator and any error in initial conditions will not suffer from chaotic behavior. The wave analogue of this Loschmidt daemon in terms of Cauchy's initial condition 𝜙 ! , 𝜕𝜙 ! 𝜕𝑡 is to change the sign of the wave field time derivative 𝜙 ! , -𝜕𝜙 ! 𝜕𝑡 . Because of the superposition principle, the DTR is thus acting as a Loschmidt daemon by decoupling the wave field from its derivative. The DTR concept is generic and applies in the case of complex inhomogeneous materials as shown by the 2D simulations (see Fig. 3). This can be shown directly from the Cauchy theorem. After the freezing the wave field initial condition are reset with a time derivative of the wave the wave field equal to zero. 
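Before returning to the role of the superposition principle, the factor-of-four energy bookkeeping invoked earlier in this discussion can be spelled out; this short derivation is added here for convenience and uses nothing beyond equipartition and Eq. (5).

For a propagating wave the energy is equipartitioned, E_{\mathrm{kin}} = E_{\mathrm{pot}} = E_0/2. The dissipation impulse removes the kinetic part while keeping the field \phi_0 intact, so that just after the impulse
\[
E(t_1^+) = E_{\mathrm{pot}} = \tfrac{1}{2} E_0 .
\]
The state (\phi_0, 0) then splits, by (5), into the forward and time-reversed waves \phi_f(t) = \tfrac{1}{2}\phi_0(t - t_1 + t_0) and \phi_r(t) = \tfrac{1}{2}\phi_0(t_0 + t_1 - t). Since the energy of a propagating wave scales as the square of its amplitude,
\[
E_f = E_r = \left(\tfrac{1}{2}\right)^2 E_0 = \tfrac{1}{4} E_0 , \qquad E_f + E_r = \tfrac{1}{2} E_0 ,
\]
consistent with the statement that one quarter of the initial energy is time reversed while another quarter is carried by the forward wave.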
The superposition principle given in Eq. 5 holds. In contrast with the ITM approach based on wave velocity changes [10] and standard time reversal cavities [START_REF] Fink | [END_REF], the backward propagating wave is directly proportional to the TR of the original wave and not to the time reversal of its time derivative or antiderivative. From that perspective, DTR has thus no spectral limitations and can be applied for the TR of broadband wave packets removing one of the limitation of the existing TR techniques. The limitation in the TR spectral range comes from the ability to freeze the field sufficiently rapidly compared with the phase change in the wave packet, resulting in the maximum value for the time pulsation 𝜁 ≫ 𝜔 ! . Conclusion This paper presents a new way to perform an instantaneous time mirror using a dissipation modulation impulse. This concept is generic and could be applied to other type of waves. In optics, DTR could be induced by changing abruptly the conductivity of the medium as in graphene [20], in acoustics it could be obtained by changing electrorheological medium [21]. The excitation is initially a localized perturbation of the magnetic chain, the magnet #37 is acting as a source. At time t 0 the chain is suddenly damped by turning off the air flow resulting in a freezing of the disks motion. At time t 1 the air flow is restored resulting in the creation of two counter propagating wave packets; b) Simulations of the magnet displacements using the harmonic interaction model with the fitted coupling constant. The initial magnet displacement is also taken from the experiment at time t init . Supplemental Material For 𝑡 ∈ 𝑡 ! , 𝑡 ! : ! ! ! !! ! 𝒌, 𝑡 + 𝜁 𝒌 !! !" 𝒌, 𝑡 + 𝜔 ! ! 𝒌 𝜙 𝒌, 𝑡 = 0, (S1) We consider the regime of high dissipation, so that we assume ζ 𝒌 > 2𝜔 ! 𝒌 . (S1) is thus the equation of a damped harmonic oscillator in the overdamped regime, whose solutions are given by: 𝜙 𝑡 = 𝐴𝑒 ! ! 𝒌 ! !! !! ! ! ! 𝒌 ! ! 𝒌 (!!! ! ) + 𝐵𝑒 ! ! 𝒌 ! !! !! ! ! ! 𝒌 ! ! 𝒌 (!!! ! ) , ( S2 ) with 𝐴 and 𝐵 two constants. Given the initial conditions of continuity of the field and its time derivative, we obtain: 𝜙 𝒌, 𝑡 = !! ! ! ! 𝒌 ! ! 𝒌 !! ! ! 𝒌,! ! ! ! ! 𝒌 !! ! !" 𝒌,! ! ! !! ! ! ! 𝒌 ! ! 𝒌 𝑒 ! ! 𝒌 ! !! !! ! ! ! 𝒌 ! ! 𝒌 (!!! ! ) + !! ! ! ! 𝒌 ! ! 𝒌 !! ! ! 𝒌,! ! ! ! ! 𝒌 !! ! !" 𝒌,! ! ! !! ! ! ! 𝒌 ! ! 𝒌 𝑒 ! ! 𝒌 ! !! !! ! ! ! 𝒌 ! ! 𝒌 (!!! ! ) . ( S3 ) Developing at first order in ! ! 𝒌 ! 𝒌 → 0 in front of and (at order 2) inside the exponential terms: 𝜙 𝑡 = - ! ! 𝒌 !! ! !" 𝒌, 𝑡 ! 𝑒 ! ! 𝒌 ! !! ! ! ! ! ! 𝒌 ! ! 𝒌 !! ! ! ! 𝒌 ! ! 𝒌 !!! ! + 𝜙 ! 𝒌, 𝑡 ! + ! ! 𝒌 !! ! !" 𝒌, 𝑡 ! 𝑒 ! ! 𝒌 ! ! ! ! ! ! 𝒌 ! ! 𝒌 !! ! ! ! 𝒌 ! ! 𝒌 !!! ! , ( S4 Figure captions:Figure 1 : 1 Figure captions: Figure 1: Principle of DTR mirror: a) Propagation of a chirped pulse in a dispersive medium at 3 Figure 2 : 2 Figure 2: a) Schematics of the experimental set-up composed of a circular chain of 41 plastic Figure 3 : 3 Figure 3: a) Displacement amplitudes of the magnets as a function of time represented. The Figure 4 : 4 Figure 4: a) Schematic view of the 2D spring model. The model consists of a 2D percolated Figure 1 :Figure 2 :Figure 3 : 123 Figure 1: Figure 4 : 4 Figure 4: ) where we used the fact that! ! 𝒌 !! ! !" 𝒌, 𝑡 ! is of order one in 𝜔 ! 𝒌 /ζ 𝒌 .Taking the zero th order inside the exponential yields equation (4) of the main text. Equation (S4) also reveals that the wave amplitude decreases like 𝑒! ! ! ! 𝒌 !! 𝒌!!! ! in the long run. Acknowledgements: We are very grateful to Y. 
Couder for fruitful and stimulating discussions. We thank A. Fourgeaud for help in building the experimental set-up. S. K. S. acknowledges the French Embassy in India for a Charpak scholarship. The authors acknowledge the support of the AXA Research Fund and LABEX WIFI (Laboratory of Excellence ANR-10-LABX-24) within the French Program 'Investments for the Future' under reference ANR-10-IDEX-0001-02 PSL*.
21,523
[ "1030034", "759642", "828807", "939812", "738644" ]
[ "1004908", "1159", "1004908", "1159", "1004908", "1004908", "1004908", "1004908", "1004908", "1004908", "1159" ]
01756026
en
[ "math" ]
2024/03/05 22:32:10
2020
https://hal.science/hal-01756026/file/PKB_AKG_PL_20180328.pdf
Prasanta Kumar Barik Ankik Kumar Giri Philippe Laurençot Mass-conserving solutions to the Smoluchowski coagulation equation with singular kernel Keywords: Coagulation, Singular coagulation kernels, Existence, Mass-conserving solutions MSC (2010): Primary: 45J05, 45K05, Secondary: 34A34, 45G10 Cueto Camejo & Warnecke (2015). In particular, linear growth at infinity of the coagulation kernel is included and the initial condition may have an infinite second moment. Furthermore, all weak solutions (in a suitable sense) including the ones constructed herein are shown to be mass-conserving, a property which was proved in Norris (1999) under stronger assumptions. The existence proof relies on a weak compactness method in L 1 and a by-product of the analysis is that both conservative and non-conservative approximations to the SCE lead to weak solutions which are then mass-conserving. Introduction The kinetic process in which particles undergo changes in their physical properties is called a particulate process. The study of particulate processes is a well-known subject in various branches of engineering, astrophysics, physics, chemistry and in many other related areas. During the particulate process, particles merge to form larger particles or break up into smaller particles. Due to this process, particles change their size, shape and volume, to name but a few. There are various types of particulate processes such as coagulation, fragmentation, nucleation and growth for instance. In particular, this article mainly deals with the coagulation process which is governed by the Smoluchowski coagulation equation (SCE). In this process, two particles coalesce to form a larger particle at a particular instant. The SCE is a nonlinear integral equation which describes the dynamics of evolution of the concentration g(ζ, t) of particles of volume ζ > 0 at time t ≥ 0 [START_REF] Smoluchowski | Versuch einer mathematischen Theorie der Koagulationskinetik kolloider Lösungen[END_REF]. The evolution of g is given Here ∂g(ζ,t) ∂t represents the time partial derivative of the concentration of particles of volume ζ at time t. In addition, the non-negative quantity Ψ(ζ, η) denotes the interaction rate at which particles of volume ζ and particles of volume η coalesce to form larger particles. This rate is also known as the coagulation kernel or coagulation coefficient. The first and last terms B c (g) and D c (g) on the right-hand side to (1.1) represent the formation and disappearance of particles of volume ζ due to coagulation events, respectively. Let us define the total mass (volume) of the system at time t ≥ 0 as: M 1 (g)(t) := ∞ 0 ζg(ζ, t)dζ. (1.5) According to the conservation of matter, it is well known that the total mass (volume) of particles is neither created nor destroyed. Therefore, it is expected that the total mass (volume) of the system remains conserved throughout the time evolution prescribed by (1.1)-(1.2), that is, M 1 (g)(t) = M 1 (g in ) for all t ≥ 0. However, it is worth to mention that, for the multiplicative coagulation kernel Ψ(ζ, η) = ζη, the total mass conservation fails for the SCE at finite time t = 1, see [START_REF] Leyvraz | Singularities in the kinetics of coagulation processes[END_REF]. The physical interpretation is that the lost mass corresponds to "particles of infinite volume" created by a runaway growth in the system due to the very high rate of coalescence of very large particles. 
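This loss of mass can already be seen in a crude numerical experiment. The following Python sketch, which is only an illustration written for this text and not a scheme advocated here, integrates a discrete (sectional) version of (1.1) for the multiplicative kernel on a truncated volume grid; the truncation plays the role of the non-conservative approximation discussed later, and all numerical choices (grid size, time step, monodisperse initial condition) are ours.

import numpy as np

def smoluchowski_mass(N=100, dt=2e-3, t_end=1.5, kernel=lambda x, y: x * y):
    # Discrete coagulation on integer volumes 1..N with monodisperse initial data g_1(0)=1.
    # Pairs whose combined volume exceeds N still deplete the system but create nothing,
    # mimicking the flux of mass towards "particles of infinite volume".
    sizes = np.arange(1, N + 1, dtype=float)
    g = np.zeros(N); g[0] = 1.0
    K = kernel(sizes[:, None], sizes[None, :])
    times, masses = [], []
    for step in range(int(t_end / dt) + 1):
        times.append(step * dt)
        masses.append(float(np.sum(sizes * g)))          # first moment M_1
        loss = g * (K @ g)                               # disappearance term D_c
        gain = np.zeros(N)
        for i in range(1, N):                            # formation term B_c for volume i+1
            j = np.arange(i)                             # pairs of volumes (j+1, i-j)
            gain[i] = 0.5 * np.sum(K[j, i - 1 - j] * g[j] * g[i - 1 - j])
        g = np.maximum(g + dt * (gain - loss), 0.0)      # explicit Euler step
    return np.array(times), np.array(masses)

# For kernel(x, y) = x*y the total mass starts to drop appreciably around t = 1 (gelation);
# for a constant kernel, e.g. kernel=lambda x, y: 1.0 + 0.0 * x * y, it stays essentially
# flat on this time window, as mass conservation requires.
t, m = smoluchowski_mass()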
These particles, also referred to as "giant particles" [START_REF] Aldous | Deterministic and stochastic model for coalescence (aggregation and coagulation): a review of the mean-field theory for probabilists[END_REF] are interpreted in the physics literature as a different macroscopic phase, called a gel, and its occurrence is called the sol-gel transition or gelation transition. The earliest time T g ≥ 0 after which mass conservation no longer holds is called the gelling time or gelation time. Since the works by Ball & Carr [START_REF] Ball | The discrete coagulation-fragmentation equations: Existence, uniqueness and density conservation[END_REF] and Stewart [START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF], several articles have been devoted to the existence and uniqueness of solutions to the SCE for coagulation kernels which are bounded for small volumes and unbounded for large volumes, as well as to the mass conservation and gelation phenomenon, see [START_REF] Dubovskii | Existence, uniqueness and mass conservation for the coagulationfragmentation equation[END_REF][START_REF] Escobedo | Gelation in coagulation and fragmentation models[END_REF][START_REF] Escobedo | Gelation and mass conservation in coagulation-fragmentation models[END_REF][START_REF] Giri | Weak solutions to the continuous coagulation with multiple fragmentation[END_REF][START_REF] Ph | From the discrete to the continuous coagulation-fragmentation equations[END_REF][START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF][START_REF] Stewart | A uniqueness theorem for the coagulation-fragmentation equation[END_REF], see also the survey papers [START_REF] Aldous | Deterministic and stochastic model for coalescence (aggregation and coagulation): a review of the mean-field theory for probabilists[END_REF][START_REF] Ph | Weak compactness techniques and coagulation equations[END_REF][START_REF] Ph | On coalescence equations and related models[END_REF] and the references therein. However, to the best of our knowledge, there are fewer articles in which existence and uniqueness of solutions to the SCE with singular coagulation rates have been studied, see [START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF][START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF][START_REF] Escobedo | On self-similarity and stationary problem for fragmentation and coagulation models[END_REF][START_REF] Escobedo | Dust and self-similarity for the Smoluchowski coagulation equation[END_REF][START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF]. In [START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF], Norris investigates the existence and uniqueness of solutions to the SCE locally in time when the coagulation kernel satisfies Ψ(ζ, η) ≤ φ(ζ)φ(η), (ζ, η) ∈ (0, ∞) 2 , (1.6) for some sublinear function φ : (0, ∞) → [0, ∞), that is, φ enjoys the property φ(aζ) ≤ aφ(ζ) for all ζ ∈ (0, ∞) and a ≥ 1, and the initial condition g in belongs to L 1 ((0, ∞); φ(ζ) 2 dζ). Massconservation is also shown as soon as there is ε > 0 such that φ(ζ) ≥ εζ for all ζ ∈ (0, ∞). 
In [START_REF] Escobedo | Dust and self-similarity for the Smoluchowski coagulation equation[END_REF][START_REF] Escobedo | On self-similarity and stationary problem for fragmentation and coagulation models[END_REF], global existence, uniqueness, and mass-conservation are established for coagulation rates of the form Ψ(ζ, η) = ζ µ 1 η µ 2 + ζ µ 2 η µ 1 with -1 ≤ µ 1 ≤ µ 2 ≤ 1, µ 1 + µ 2 ∈ [0, 2] , and (µ 1 , µ 2 ) = (0, 1). Recently, global existence of weak solutions to the SCE for coagulation kernels satisfying ), and k * > 0, is obtained in [START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF] and further extended in [START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF] to the broader class of coagulation kernels Ψ(ζ, η) ≤ k * (1 + ζ + η) λ (ζη) -σ , (ζ, η) ∈ (0, ∞) 2 , with σ ∈ [0, 1/2], λ -σ ∈ [0, 1 Ψ(ζ, η) ≤ k * (1 + ζ) λ (1 + η) λ (ζη) -σ , (ζ, η) ∈ (0, ∞) 2 , (1.7) with σ ≥ 0, λ -σ ∈ [0, 1), and k * > 0. In [START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF], multiple fragmentation is also included and uniqueness is shown for the following restricted class of coagulation kernels Ψ 2 (ζ, η) ≤ k * (ζ -σ + ζ λ-σ )(η -σ + η λ-σ ), (ζ, η) ∈ (0, ∞) 2 , where σ ≥ 0 and λ -σ ∈ [0, 1/2]. The main aim of this article is to extend and complete the previous results in two directions. We actually consider coagulation kernels satisfying the growth condition (1.6) for the non-negative function φ β (ζ) := max ζ -β , ζ , ζ ∈ (0, ∞), and prove the existence of a global mass-conserving solution of the SCE (1.1)-(1.2) with initial conditions in L 1 ((0, ∞); (ζ -2β + ζ)dζ), thereby removing the finiteness of the second moment required to apply the existence result of [START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF] and relaxing the assumption λ < σ + 1 used in [START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF] for coagulation kernels satisfying (1.7). Besides this, we show that any weak solution in the sense of Definition 2.2 below is mass-conserving, a feature which was enjoyed by the solution constructed in [START_REF] Norris | Smoluchowski's coagulation equation: uniqueness, non-uniqueness and hydrodynamic limit for the stochastic coalescent[END_REF] but not investigated in [START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF][START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF]. An important consequence of this property is that it gives some flexibility in the choice of the method to construct a weak solution to the SCE (1.1)-(1.2) since it will be mass-conserving whatever the approach. Recall that there are two different approximations of the SCE (1.1) by truncation have been employed in recent years, the so-called conservative and non-conservative approximations, see (4.4) below. 
While it is expected and actually verified in several papers that the conservative approximation leads to a mass-conserving solution to the SCE, a similar conclusion is not awaited when using the nonconservative approximation which has rather been designed to study the gelation phenomenon, in particular from a numerical point of view [START_REF] Filbet | Numerical simulation of the Smoluchowski coagulation equation[END_REF][START_REF] Bourgade | Convergence of a finite volume scheme for coagulation-fragmentation equations[END_REF]. Still, it is by now known that, for the SCE with locally bounded coagulation kernels growing at most linearly at infinity, the non-conservative approximation also allows one to construct mass-conserving solutions [START_REF] Filbet | Mass-conserving solutions and non-conservative approximation to the Smoluchowski coagulation equation[END_REF][START_REF] Barik | A note on mass-conserving solutions to the coagulation-fragmentation equation by using non-conservative approximation[END_REF]. The last outcome of our analysis is that, in our case, the conservative and non-conservative approximations can be handled simultaneously and both lead to a weak solution to the SCE which might not be the same due to the lack of a general uniqueness result but is mass-conserving. We now outline the results of the paper: In the next section, we state precisely our hypotheses on coagulation kernel and on the initial data together with the definition of solutions and the main result. In Section 3, all weak solutions are shown to be mass-conserving. Finally, in the last section, the existence of a weak solution to the SCE (1.1)-(1.2) is obtained by using a weak L 1 compactness method applied to either the non-conservative or the conservative approximations of the SCE. Main result We assume that the coagulation kernel Ψ satisfies the following hypotheses. Hypotheses 2.1. (H1) Ψ is a non-negative measurable function on (0, ∞) × (0, ∞), (H2) There are β > 0 and k > 0 such that 0 ≤ Ψ(ζ, η) = Ψ(η, ζ) ≤ k(ζη) -β , (ζ, η) ∈ (0, 1) 2 , 0 ≤ Ψ(ζ, η) = Ψ(η, ζ) ≤ kηζ -β , (ζ, η) ∈ (0, 1) × (1, ∞), 0 ≤ Ψ(ζ, η) = Ψ(η, ζ) ≤ k(ζ + η), (ζ, η) ∈ (1, ∞) 2 . Observe that (H2) implies that Ψ(ζ, η) ≤ k max ζ -β , ζ max η -β , η , (ζ, η) ∈ (0, ∞) 2 . Let us now mention the following interesting singular coagulation kernels satisfying hypotheses 2.1. (a) Smoluchowski's coagulation kernel [START_REF] Smoluchowski | Versuch einer mathematischen Theorie der Koagulationskinetik kolloider Lösungen[END_REF] (with β = 1/3) Ψ(ζ, η) = ζ 1/3 + η 1/3 ζ -1/3 + η -1/3 , (ζ, η) ∈ (0, ∞) 2 . (b) Granulation kernel [16] Ψ(ζ, η) = (ζ + η) θ 1 (ζη) θ 2 , where θ 1 ≤ 1 and θ 2 ≥ 0. (c) Stochastic stirred froths [START_REF] Clark | Stably coalescent stochastic froths[END_REF] Ψ(ζ, η) = (ζη) -β , where β > 0. Before providing the statement of Theorem 2.3, we recall the following definition of weak solutions to the SCE (1.1)-(1.2). We set L 1 -2β,1 (0, ∞) := L 1 ((0, ∞); (ζ -2β + ζ)dζ). Definition 2.2. Let T ∈ (0, ∞] and g in ∈ L 1 -2β,1 (0, ∞), g in ≥ 0 a.e. in (0, ∞). A non- negative real valued function g = g(ζ, t) is a weak solution to equations (1.1)-(1.2) on [0, T ) if g ∈ C ([0, T ); L 1 (0, ∞)) L ∞ (0, T ; L 1 -2β,1 (0, ∞)) and satisfies ∞ 0 [g(ζ, t) -g in (ζ)]ω(ζ)dζ = 1 2 t 0 ∞ 0 ∞ 0 ω(ζ, η)Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds, (2.1 ) for every t ∈ (0, T ) and ω ∈ L ∞ (0, ∞), where ω(ζ, η) := ω(ζ + η) -ω(ζ) -ω(η), (ζ, η) ∈ (0, ∞) 2 . Now, we are in a position to state the main theorem of this paper. Theorem 2.3. 
Assume that the coagulation kernel satisfies hypotheses (H1)-(H2) and consider a non-negative initial condition g in ∈ L 1 -2β,1 (0, ∞). There exists at least one massconserving weak solution g to the SCE (1.1)-(1.2) on [0, ∞), that is, g is a weak solution to (1.1)-(1.2) in the sense of Definition 2.2 satisfying M 1 (g)(t) = M 1 (g in ) for all t ≥ 0, the total mass M 1 (g) being defined in (1.5). Weak solutions are mass-conserving In this section, we establish that any weak solution g to (1.1)-(1.2) on [0, T ), T ∈ (0, ∞], in the sense of Definition 2.2 is mass-conserving, that is, satisfies M 1 (g)(t) = M 1 (g in ), t ≥ 0. (3.1) To this end, we adapt an argument designed in [2, Section 3] to investigate the same issue for the discrete coagulation-fragmentation equations and show that the behaviour of g for small volumes required in Definition 2.2 allows us to control the possible singularity of Ψ. In order to prove Theorem 3.1, we need the following sequence of lemmas. Lemma 3.2. Assume that (H1)-(H2) hold. Let g be a weak solution to (1.1)-(1.2) on [0, T ). Then, for q ∈ (0, ∞) and t ∈ (0, T ), q 0 ζg(ζ, t)dζ - q 0 ζg in (ζ)dζ = - t 0 q 0 ∞ q-ζ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds. (3.2) Proof. Set ω(ζ) = ζχ (0,q) (ζ) for ζ ∈ (0, ∞) and note that ω(ζ, η) =                0, if ζ + η ∈ (0, q), -(ζ + η), if ζ + η ≥ q, (ζ, η) ∈ (0, q) 2 , -ζ, if (ζ, η) ∈ (0, q) × [q, ∞), -η, if (ζ, η) ∈ [q, ∞) × (0, q), 0, if (ζ, η) ∈ [q, ∞) 2 . Inserting the above values of ω into (2.1) and using the symmetry of Ψ, we have q 0 [g(ζ, t) -g in (ζ)]ζdζ = 1 2 t 0 ∞ 0 ∞ 0 ω(ζ, η)Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds = - 1 2 t 0 q 0 q q-ζ (ζ + η)Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds - 1 2 t 0 q 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds - 1 2 t 0 ∞ q q 0 ηΨ(ζ, η)g(ζ, s)g(η, s)dηdζds =- t 0 q 0 q q-ζ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds - t 0 q 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds, which completes the proof of Lemma 3.2. In order to complete the proof of Theorem 3.1, it is sufficient to show that the right-hand side of (3.2) goes to zero as q → ∞. The first step in that direction is the following result. Lemma 3.3. Assume that (H1)-(H2) hold. Let g be a solution to (1.1)-(1.2) on [0, T ) and consider t ∈ (0, T ). Then (i) ∞ q [g(ζ, t) -g in (ζ)]dζ = - 1 2 t 0 ∞ q ∞ q Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds + 1 2 t 0 q 0 q q-ζ Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds, (ii) lim q→∞ t 0 q q 0 q q-ζ Ψ(ζ, η)g(ζ, s)g(η, s)dηdζ - ∞ q ∞ q Ψ(ζ, η)g(ζ, s)g(η, s)dηdζ ds = 0. Proof. Set ω(ζ) = χ [q,∞) (ζ) for ζ ∈ (0, ∞) and the corresponding ω is ω(ζ, η) =                0, if ζ + η ∈ (0, q), 1, if ζ + η ∈ [q, ∞), (ζ, η) ∈ (0, q) 2 , 0, if (ζ, η) ∈ (0, q) × [q, ∞), 0, if (ζ, η) ∈ [q, ∞) × (0, q), -1, if (ζ, η) ∈ [q, ∞) 2 . Inserting the above values of ω into (2.1), we obtain Lemma 3.3 (i). Next, we readily infer from the integrability of ζ → ζg(ζ, t) and ζ → ζg in (ζ) and Lebesgue's dominated convergence theorem that lim q→∞ q ∞ q [g(ζ, t) -g in (ζ)]dζ ≤ lim q→∞ ∞ q ζ[g(ζ, t) + g in (ζ)]dζ = 0. Multiplying the identity stated in Lemma 3.3 (i) by q, we deduce from the previous statement that the left-hand side of the thus obtained identity converges to zero as q → ∞. Then so does its right-hand side, which proves Lemma 3.3 (ii). Then, for t ∈ (0, T ), (i) lim q→∞ t 0 q 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0, and (ii) lim q→∞ q t 0 ∞ q ∞ q Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0. Proof. Let q > 1, t ∈ (0, T ), and s ∈ (0, t). 
To prove the first part of Lemma 3.4, we split the integral as follows q 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζ = J 1 (q, s) + J 2 (q, s), with J 1 (q, s) := 1 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζ, J 2 (q, s) := q 1 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζ. On the one hand, it follows from (H2) and Young's inequality that J 1 (q, s) ≤ k 1 0 ∞ q ζ 1-β ηg(ζ, s)g(η, s)dηdζ ≤ k ∞ 0 ζ 1-β g(ζ, s)dζ ∞ q ηg(η, s)dη ≤ k g(s) L 1 -2β,1 (0,∞) ∞ q ηg(η, s)dη and the integrability properties of g from Definition 2.2 and Lebesgue's dominated convergence theorem entail that lim q→∞ t 0 J 1 (q, s)ds = 0. ( On the other hand, we infer from (H2) that J 2 (q, s) ≤ k q 1 ∞ q ζ(ζ + η)g(ζ, s)g(η, s)dηdζ ≤ 2k q 1 ∞ q ζηg(ζ, s)g(η, s)dηdζ ≤ 2kM 1 (g)(s) ∞ q ηg(η, s)dη, and we argue as above to conclude that lim q→∞ t 0 J 2 (q, s)ds = 0. Recalling (3.3), we have proved Lemma 3.4 (i). Similarly, by (H2), q ∞ q ∞ q Ψ(ζ, η)g(ζ, s)g(η, s)dηdζ ≤ k ∞ q ∞ q (qζ + qη)g(ζ, s)g(η, s)dηdζ ≤ 2k ∞ q ∞ q ζηg(ζ, s)g(η, s)dηdζ ≤ 2kM 1 (g)(s) ∞ q ηg(η, s)dη, and we use once more the previous argument to obtain Lemma 3.4 (ii). Now, we are in a position to prove Theorem 3.1. Proof of Theorem 3.1. Let t ∈ (0, T ). From Lemma 3.4 (i), we obtain lim q→∞ t 0 q 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0, (3.4) while Lemma 3.3 (ii) and Lemma 3.4 (ii) imply that lim q→∞ q t 0 q 0 q q-ζ Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds = 0. (3.5) Since t 0 q 0 ∞ q-ζ ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds ≤ q t 0 q 0 q q-ζ Ψ(ζ, η)g(ζ, s)g(η, s)dηdζds + t 0 q 0 ∞ q ζΨ(ζ, η)g(ζ, s)g(η, s)dηdζds, it readily follows from (3.4) and (3.5) that the right-hand side of (3.2) converges to zero as q → ∞. Consequently, M 1 (g)(t) = lim q→∞ q 0 ζg(ζ, s)dζ = lim q→∞ q 0 ζg in (ζ)dζ = M 1 (g in ). This completes the proof of Theorem 3.1. Existence of weak solutions This section is devoted to the construction of weak solutions to the SCE (1.1)-(1.2) with a nonnegative initial condition g in ∈ L 1 -2β,1 (0, ∞). It is achieved by a classical compactness technique, the appropriate functional setting being here the space L 1 (0, ∞) endowed with its weak topology first used in the seminal work [START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF] and subsequently further developed in [START_REF] Barik | A note on mass-conserving solutions to the coagulation-fragmentation equation by using non-conservative approximation[END_REF][START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF][START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF][START_REF] Escobedo | Gelation and mass conservation in coagulation-fragmentation models[END_REF][START_REF] Filbet | Mass-conserving solutions and non-conservative approximation to the Smoluchowski coagulation equation[END_REF][START_REF] Giri | Weak solutions to the continuous coagulation with multiple fragmentation[END_REF][START_REF] Ph | From the discrete to the continuous coagulation-fragmentation equations[END_REF]. 
Given a non-negative initial condition g in ∈ L 1 -2β,1 (0, ∞), the starting point of this approach is the choice of an approximation of the SCE (1.1)-(1.2), which we set here to be ∂g n (ζ, t) ∂t = B c (g n )(ζ, t) -D θ c,n (g n )(ζ, t), (ζ, t) ∈ (0, n) × (0, ∞), (4.1) with truncated initial condition g n (ζ, 0) = g in n (ζ) := g in (ζ)χ (0,n) (ζ), ζ ∈ (0, n), (4.2) where n ≥ 1 is a positive integer, θ ∈ {0, 1}, Ψ θ n (ζ, η) := Ψ(ζ, η)χ (1/n,n) (ζ)χ (1/n,n) (η) 1 -θ + θχ (0,n) (ζ + η) (4.3) for (ζ, η) ∈ (0, ∞) 2 and D θ c,n (g)(ζ) := n-θζ 0 Ψ θ n (ζ, η)g(ζ)g(η)dη, ζ ∈ (0, n), (4.4) the gain term B c (g)(ζ) being still defined by (1.3) for ζ ∈ (0, n). The introduction of the additional parameter θ ∈ {0, 1} allows us to handle simultaneously the so-called conservative approximation (θ = 1) and non-conservative approximation (θ = 0) and thereby prove that both approximations allow us to construct weak solutions to the SCE (1.1)-(1.2), a feature which is of interest when no general uniqueness result is available. Note that we also truncate the coagulation for small volumes to guarantee the boundedness of Ψ θ n which is a straightforward consequence of (H2) and (4.3). Thanks to this property, it follows from [START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF] (θ = 1) and [START_REF] Filbet | Mass-conserving solutions and non-conservative approximation to the Smoluchowski coagulation equation[END_REF] (θ = 0) that there is a unique non-negative solution g n ∈ C 1 ([0, ∞); L 1 (0, n)) to (4.1)-(4. 2) (we do not indicate the dependence upon θ for notational simplicity) which satisfies n 0 ζg n (ζ, t)dζ = n 0 ζg in n (ζ)dζ -(1 -θ) t 0 n 0 n n-ζ ζΨ θ n (ζ, η)g n (ζ, s)g n (η, s)dηdζds (4.5) for t ≥ 0. The second term in the right-hand side of (4.5) vanishes for θ = 1 and the total mass of g n remains constant throughout time evolution, which is the reason for this approximation to be called conservative. In contrast, when θ = 0, the total mass of g n decreases as a function of time. In both cases, it readily follows from (4.5) that n 0 ζg n (ζ, t)dζ ≤ n 0 ζg in n (ζ)dζ ≤ M 1 (g in ), t ≥ 0. (4.6) For further use, we next state the weak formulation of (4.1)-(4.2): for t > 0 and ω ∈ L ∞ (0, n), there holds n 0 ω(ζ)[g n (ζ, t) -g in n (ζ)]dζ = 1 2 t 0 n 1/n n 1/n H θ ω,n (ζ, η)Ψ θ n (ζ, η)g n (ζ, s)g n (η, s)dηdζds, (4.7) where H θ ω,n (ζ, η) := ω(ζ + η)χ (0,n) (ζ + η) -[ω(ζ) + ω(η)] 1 -θ + θχ (0,n) (ζ + η) for (ζ, η) ∈ (0, n) 2 . In order to prove Theorem 2.3, we shall show the convergence (with respect to an appropriate topology) of a subsequence of (g n ) n≥1 towards a weak solution to (1.1)-(1.2). For that purpose, we now derive several estimates and first recall that, since g in ∈ L 1 -2β,1 (0, ∞), a refined version of de la Vallée-Poussin theorem, see [START_REF] Châu-Hoàn | Etude de la classe des opérateurs m-accrétifs de L 1 (Ω) et accrétifs dans L ∞ (Ω)[END_REF] or [START_REF] Ph | Weak compactness techniques and coagulation equations[END_REF]Theorem 8], guarantees that there exist two non-negative and convex functions σ 1 and σ 2 in C 2 ([0, ∞)) such that σ ′ 1 and σ ′ 2 are concave, σ i (0) = σ ′ i (0) = 0, lim x→∞ σ i (x) x = ∞, i = 1, 2, (4.8) and I 1 := ∞ 0 σ 1 (ζ)g in (ζ)dζ < ∞, and I 2 := ∞ 0 σ 2 ζ -β g in (ζ) dζ < ∞. (4.9) Let us state the following properties of the above defined functions σ 1 and σ 2 which are required to prove Theorem 2.3. Lemma 4.1. 
For (x, y) ∈ (0, ∞) 2 , there holds (i) σ 2 (x) ≤ xσ ′ 2 (x) ≤ 2σ 2 (x), (ii) xσ ′ 2 (y) ≤ σ 2 (x) + σ 2 (y), and (iii) 0 ≤ σ 1 (x + y) -σ 1 (x) -σ 1 (y) ≤ 2 xσ 1 (y) + yσ 1 (x) x + y . Proof. A proof of the statements (i) and (iii) may be found in [START_REF] Ph | Weak compactness techniques and coagulation equations[END_REF]Proposition 14] while (ii) can easily be deduced from (i) and the convexity of σ 2 . We recall that throughout this section, the coagulation kernel Ψ is assumed to satisfy (H1)-(H2) and g in is a non-negative function in L 1 -2β,1 (0, ∞). Moment estimates We begin with a uniform bound in L 1 -2β,1 (0, ∞). Lemma 4.2. There exists a positive constant B > 0 depending only on g in such that, for t ≥ 0, n 0 ζ + ζ -2β g n (ζ, t)dζ ≤ B. Proof. Let δ ∈ (0, 1) and take ω(ζ) = (ζ + δ) -2β , ζ ∈ (0, n), in (4.7) . With this choice of ω, H θ ω,n (ζ, η) ≤ (ζ + η + δ) -2β -(ζ + δ) -2β -(η + δ) -2β χ (0,n) (ζ + η) ≤ 0 for all (ζ, η) ∈ (0, n) 2 , so that (4.7) entails that, for t ≥ 0, n 0 (ζ + δ) -2β g n (ζ, t)dζ ≤ n 0 (ζ + δ) -2β g in n (ζ)dζ ≤ ∞ 0 ζ -2β g in (ζ)dζ. We then let δ → 0 in the previous inequality and deduce from Fatou's lemma that n 0 ζ -2β g n (ζ, t)dζ ≤ ∞ 0 ζ -2β g in (ζ)dζ, t ≥ 0. Combining the previous estimate with (4.6) gives Lemma 4.2 with B := g in L 1 -2β,1 (0,∞) . We next turn to the control of the tail behavior of g n for large volumes, a step which is instrumental in the proof of the convergence of each integral on the right-hand side of (4.1) to their respective limits on the right-hand side of (1.1). Lemma 4.3. For T > 0, there is a positive constant Γ(T ) depending on k, σ 1 , g in , and T such that, (i) sup t∈[0,T ] n 0 σ 1 (ζ)g n (ζ, t)dζ ≤ Γ(T ), and (ii) (1 -θ) T 0 n 1 n 1 σ 1 (ζ)χ (0,n) (ζ + η)Ψ(ζ, η)g n (ζ, s)g n (η, s)dηdζds ≤ Γ(T ). Proof. Let T > 0 and t ∈ (0, T ). We set ω (ζ) = σ 1 (ζ), ζ ∈ (0, n), into (4.7) and obtain n 0 σ 1 (ζ)[g n (ζ, t) -g in n (ζ)]dζ = 1 2 t 0 n 1/n n 1/n σ1 (ζ, η)χ (0,n) (ζ + η)Ψ(ζ, η)g n (ζ, s)g n (η, s)dηdζds - 1 -θ 2 t 0 n 1/n n 1/n [σ 1 (ζ) + σ 1 (η)]χ [n,∞) (ζ + η)Ψ(ζ, η)g n (ζ, s)g n (η, s)dηdζds, recalling that σ1 (ζ, η) = σ 1 (ζ + η) -σ 1 (ζ) -σ 1 (η) , hence, using (H2) and Lemma 4.1, Owing to the concavity of σ ′ 1 and the property σ 1 (0) = 0, there holds n 0 σ 1 (ζ)[g n (ζ, t) -g in n (ζ)]dζ ≤ k 2 4 i=1 J i,n (t) -(1 -θ)R n (t), with J 1,n (t) := t 0 1 0 1 0 σ1 (ζ, η)(ζη) -β g n (ζ, s)g n (η, s)dηdζds, J 2,n (t) := t 0 1 0 n 1 σ1 (ζ, η)ζ -β ηg n (ζ, s)g n (η, s)dηdζds, σ1 (ζ, η) = ζ 0 η 0 σ ′′ 1 (x + y)dydx ≤ σ ′′ 1 (0)ζη , (ζ, η) ∈ (0, ∞) 2 . ( 4.10) By (4.10), Lemma 4.2, and Young's inequality, J 1,n (t) ≤ σ ′′ 1 (0) t 0 1 0 1 0 ζ 1-β η -β g n (ζ, s)g n (η, s)dηdζds ≤ σ ′′ 1 (0) t 0 1 0 ζ + ζ -2β g n (ζ, s)dζ 2 ds ≤ σ ′′ 1 (0)B 2 t. Next, Lemma 4.1 (iii), Lemma 4.2, and Young's inequality give J 2,n (t) = J 3,n (t) ≤ 2 t 0 1 0 n 1 ζσ 1 (η) + ησ 1 (ζ) ζ + η ζ -β ηg n (ζ, s)g n (η, s)dηdζds ≤ 2 t 0 1 0 n 1 ζ 1-β σ 1 (η) + σ 1 (1)ζ -β η g n (ζ, s)g n (η, s)dηdζds ≤ 2 t 0 1 0 ζ + ζ -2β g n (ζ, s)dζ n 1 σ 1 (η)g n (η, s)dη ds + σ 1 (1) t 0 1 0 ζ + ζ -2β g n (ζ, s)dζ n 1 ηg n (η, s)dη ds ≤ 2σ 1 (1)B 2 t + 2B t 0 n 0 σ 1 (η)g n (η, s)dηds, and J 4,n (t) ≤ 2 t 0 n 1 n 1 (ησ 1 (ζ) + ζσ 1 (η)) g n (ζ, s)g n (η, s)dηdζds ≤ 4B t 0 n 0 σ 1 (η)g n (η, s)dηds. 
Gathering the previous estimates, we end up with n 0 σ 1 (ζ)[g n (ζ, t) -g in n (ζ)]dζ ≤ k σ ′′ 1 (0) 2 + 2σ 1 (1) B 2 t + 4kB t 0 n 0 σ 1 (η)g n (η, s)dηds -(1 -θ)R n (t), and we infer from Gronwall's lemma and (4.9) that n 0 σ 1 (ζ)g n (ζ, t)dζ + (1 -θ)R n (t) ≤ e 4kBt n 0 σ 1 (ζ)g in n (ζ)dζ + σ ′′ 1 (0) 8 + σ 1 (1) 2 Be 4kBt ≤ I + σ ′′ 1 (0) + σ 1 (1) B e 4kBt . This completes the proof of Lemma 4.3. Uniform integrability Next, our aim being to apply Dunford-Pettis' theorem, we have to prevent concentration of the sequence (g n ) n≥1 on sets of arbitrary small measure. For that purpose, we need to show the following result. Lemma 4.4. For any T > 0 and λ > 0, there is a positive constant L 1 (λ, T ) depending only on k, σ 2 , g in , λ, and T such that sup t∈[0,T ] λ 0 σ 2 ζ -β g n (ζ, t) dζ ≤ L 1 (λ, T ). Proof. For (ζ, t) ∈ (0, n) × (0, ∞), we set u n (ζ, t) := ζ -β g n (ζ, t). Let λ ∈ (1, n) , T > 0, and t ∈ (0, T ). Using Leibniz's rule, Fubini's theorem, and (4.1), we obtain d dt λ 0 σ 2 (u n (ζ, t))dζ ≤ 1 2 λ 0 λ-η 0 σ 2 ′ (u n (ζ + η, t))(ζ + η) -β Ψ θ n (ζ, η)g n (ζ, t)g n (η, t)dζdη. (4.11) It also follows from (H2) that Ψ θ n (ζ, η) ≤ Ψ(ζ, η) ≤ 2kλ 1+2β (ζη) -β , (ζ, η) ∈ (0, λ) 2 . (4.12) We then infer from (4.11), (4.12), Lemma 4.1 (ii) and Lemma 4.2 that d dt λ 0 σ 2 (u n (ζ, t))dζ ≤kλ 1+2β λ 0 λ-η 0 σ ′ 2 (u n (ζ + η, t))(ζ + η) -β u n (ζ, t)u n (η, t)dζdη ≤kλ 1+2β λ 0 λ-η 0 η -β [σ 2 (u n (ζ + η, t)) + σ 2 (u n (ζ, t))] u n (η, t)dζdη ≤2kλ 1+2β λ 0 η -2β g n (η, t) λ-η 0 σ 2 (u n (ζ + η, t))dζdη ≤2kλ 1+2β B λ 0 σ 2 (u n (ζ, t))dζ. Then, using Gronwall's lemma, the monotonicity of σ 2 , and (4.9), we obtain λ 0 σ 2 (ζ -β g n (ζ, t))dζ ≤ L 1 (λ, T ), where L 1 (λ, T ) := I 2 e 2kλ 1+2β BT , and the proof is complete. Time equicontinuity The outcome of the previous sections settles the (weak) compactness issue with respect to the volume variable. We now turn to the time variable. Lemma 4.5. Let t 2 ≥ t 1 ≥ 0 and λ ∈ (1, n). There is a positive constant L 2 (λ) depending only on k, g in , and λ such that λ 0 ζ -β |g n (ζ, t 2 ) -g n (ζ, t 1 )|dζ ≤ L 2 (λ)(t 2 -t 1 ). Proof. Let t > 0. On the one hand, by Fubini's theorem, (4.12), and Lemma 4.2, λ 0 ζ -β B c (g n )(ζ, t)dζ ≤ 1 2 λ 0 λ-ζ 0 (ζ + η) -β Ψ(ζ, η)g n (ζ, t)g n (η, t)dηdζ ≤ kλ 1+2β λ 0 λ 0 ζ -β η -2β g n (ζ, t)g n (η, t)dηdζ ≤ kλ 1+3β λ 0 ζ -2β g n (ζ, t)dζ 2 ≤ kλ 1+3β B 2 . On the other hand, since Ψ θ n (ζ, η) ≤ Ψ(ζ, η) ≤ 2kλ β ηζ -β , 0 < ζ < λ < η < n, we infer from (4.12) and Lemma 4.2 that λ 0 ζ -β D θ c,n (g n )(ζ, t)dζ ≤ λ 0 n 0 ζ -β Ψ(ζ, η)g n (ζ, t)g n (η, t)dηdζ ≤ 2kλ 1+2β λ 0 λ 0 ζ -2β η -β g n (ζ, t)g n (η, t)dηdζ + 2kλ β λ 0 n λ ζ -β ηg n (ζ, t)g n (η, t)dηdζ ≤ 2kB 2 (1 + λ 1+β )λ β . Consequently, by (4.1), Convergence We are now in a position to complete the proof of the existence of a weak solution to the SCE (1.1)-(1.2). Proof of Theorem 2.3. For (ζ, t) ∈ (0, n) × (0, ∞), we set u n (ζ, t) := ζ -β g n (ζ, t). Let T > 0 and λ > 1. Owing to the superlinear growth (4.8) of σ 2 at infinity and Lemma 4.4, we infer from Dunford-Pettis' theorem that there is a weakly compact subset K λ,T of L 1 (0, λ) such that (u n (t)) n≥1 lies in K λ,T for all t ∈ [0, T ]. Moreover, by Lemma 4.5, (u n ) n≥1 is strongly equicontinuous in L 1 (0, λ) at all t ∈ (0, T ) and thus also weakly equicontinuous in L 1 (0, λ) at all t ∈ (0, T ). A variant of Arzelà-Ascoli's theorem [26, Theorem 1.3.2] then guarantees that (u n ) n≥1 is relatively compact in C w ([0, T ]; L 1 (0, λ)). 
This property being valid for all T > 0 and λ > 1, we use a diagonal process to obtain a subsequence of (g n ) n≥1 (not relabeled) and a non-negative function g such that

g n → g in C w ([0, T]; L 1 (0, λ)) for all T > 0 and λ > 1.

Owing to Lemma 4.3 and the superlinear growth (4.8) of σ 1 at infinity, a by-now classical argument allows us to improve the previous convergence to

g n → g in C w ([0, T]; L 1 ((0, ∞); (ζ^-β + ζ) dζ)). (4.13)

To complete the proof of Theorem 2.3, it remains to show that g is a weak solution to the SCE (1.1)-(1.2) on [0, ∞) in the sense of Definition 2.2. This step is carried out by the classical approach of [START_REF] Stewart | A global existence theorem for the general coagulation-fragmentation equation with unbounded kernels[END_REF] with some modifications as in [START_REF] Camejo | Regular solutions to the coagulation equations with singular kernels[END_REF][START_REF] Camejo | The singular kernel coagulation equation with multifragmentation[END_REF] and [START_REF] Ph | From the discrete to the continuous coagulation-fragmentation equations[END_REF] to handle the convergence of the integrals for small and large volumes, respectively. In particular, on the one hand, the behavior for large volumes is controlled by the estimates of Lemma 4.3 with the help of the superlinear growth (4.8) of σ 1 at infinity and the linear growth (H2) of Ψ. On the other hand, the behavior for small volumes is handled by (H2), Lemma 4.2, and (4.13). Finally, g being a weak solution to (1.1)-(1.2) on [0, ∞) in the sense of Definition 2.2, it is mass-conserving according to Theorem 3.1, which completes the proof of Theorem 2.3.

The Smoluchowski coagulation equation under study reads

∂g(ζ, t)/∂t = B_c(g)(ζ, t) - D_c(g)(ζ, t), (ζ, t) ∈ (0, ∞)², (1.1)

with initial condition

g(ζ, 0) = g^in(ζ) ≥ 0, ζ ∈ (0, ∞), (1.2)

where the operators B_c and D_c are expressed as

B_c(g)(ζ, t) := (1/2) ∫_0^ζ Ψ(ζ - η, η) g(ζ - η, t) g(η, t) dη (1.3)

and

D_c(g)(ζ, t) := ∫_0^∞ Ψ(ζ, η) g(ζ, t) g(η, t) dη. (1.4)

Theorem 3.1. Suppose that (H1)-(H2) hold. Let g be a weak solution to (1.1)-(1.2) on [0, T) for some T ∈ (0, ∞]. Then g satisfies the mass-conserving property (3.1) for all t ∈ (0, T).

Lemma 3.4. Assume that (H1)-(H2) hold. Let g be a weak solution to (1.1)-(1.2) on [0, T).

Acknowledgments

This work was supported by University Grant Commission (UGC), India, for providing Ph.D fellowship to PKB. AKG would like to thank Science and Engineering Research Board (SERB), Department of Science and Technology (DST), India for providing funding support through the project YSS/2015/001306.
33,166
[ "14588" ]
[ "531402", "531402", "1954" ]
00111419
en
[ "spi" ]
2024/03/05 22:32:10
2004
https://hal.science/hal-00111419/file/nguyentajan2004.pdf
On a new computational method for the simulation of periodic structures subjected to moving loads Application to vented brake discs Introduction The brake is a major security part of a car. In friction braking systems subjected to very severe conditions, various kinds of defects can appear : honeycomb cracking on the rubbing surface, thru-cracks through the disc thickness, fracture of the elbow of the bowl, wear... A numerical model able to predict these phenomena is an alternative and a complement to expensive bench-tests. The computational approach we develop consists in: -new numerical strategies suitable for problems involving components subjected to moving loads; -a relevant modelling of the behavior of the material; -a modelling of the different damage phenomena undergone by the disc, which takes account of the multiaxial and anisothermal characteristics of the loads. It is essential for the numerical determination of the thermomechanical state of the disc to take into account the main couplings between the different phenomena, the transient characteristic of the thermal history, the inelastic behavior of the material, the non-homogeneous thermomechanical gradients taking place in the disc and the rotation of the disc. The use of classical finite element methods leads to excessive computational times. More precisely, one particularity of the brakes discs is the fact that they are subjected to repeated thermomechanical rotating loads with amplitude varying with the rotations. To simulate a whole braking, dozens of rotations are necessary and dozens of incremental rotations are needed to compute a rotation so that the total computational time is too high. To circumvent this difficulty, algorithms adapted to problems of structures subjected to thermomechanical moving loads have been developed. The alternative approach consists in using the stationary methods, first proposed by Nguyen and Rahimian [START_REF] Nguyen | Mouvement permanent d'une fissure en milieu élastoplastique[END_REF] and later developed by Dang Van and Maitournam [START_REF] Maitournam | Formulation et résolution numérique des problèmes thermoviscoplastiques en régime permanent[END_REF][START_REF] Van | Steady-state flow in classical elastoplasticity: Application to repeated rolling and sliding Contact[END_REF], which permit to directly calculate the mechanical state of the structure after one rotation or directly the asymptotic state after repeated passes. These algorithms lead to important reductions of the computational time. They can be applied to structures which geometries are generated by the translation or the rotation of a two-dimensional section. Many industrial applications for instance railways [START_REF] Van | On some recent trends in modelling of contact fatigue and wear in rail[END_REF] and brake discs [NGU02A, NGU02B] have been treated with these methods. In a previous paper [START_REF] Nguyen-Tajan T | Une méthode de calcul de structures soumises à des chargements mobiles. Application au freinage automobile[END_REF], we have treated only solid disc. Here, we consider the case of vented discs (figure 1). These structures have a periodic geometry, so that stationary methods cannot be used in their previous formulations. The objective of the paper is to propose an extension of such methods to periodic structures subjected to moving loads and to quickly determine the mechanical state after a given number of rotations by calculating the solution cycle by cycle. 
First, we present the principle of the method and its formulation in the case of an elastoplastic material with linear kinematic hardening. Then, an example of simulation of a vented disc is given. The periodic stationary method Overview of the stationary method The stationary method was first proposed by Nguyen and Rahimian [START_REF] Nguyen | Mouvement permanent d'une fissure en milieu élastoplastique[END_REF]; Dang Van and Maitournam [MAI89,[START_REF] Van | Steady-state flow in classical elastoplasticity: Application to repeated rolling and sliding Contact[END_REF] developed it in the case of repeated moving loads. The considered structures are generated by the translation or the rotation of a given 2D section. They are subjected to repeated moving thermomechanical loads. Quasi-static evolution and infinitesimal deformations are assumed. The objective of the pass-by-pass stationary algorithm is to directly determine the thermomechanical response of the structure after each pass of the moving loads. The stationary method relies on the following hypothesis : -the amplitude and velocity of the loads remain constant during one load pass; -in a reference frame attached to the moving loads, the thermomechanical quantities are stationary (steady-state assumption). The idea is then to use this frame instead of the one related to the structure and so to use eulerian coordinates. The steady state assumption makes the problem become time independent. Internal variables are therefore directly calculated (without time incremental resolutions) by integration along the streamlines which are known as the hypothesis of small transformation holds: it is as if the constitutive law is non local [START_REF] Maitournam | Formulation et résolution numérique des problèmes thermoviscoplastiques en régime permanent[END_REF][START_REF] Van | Steady-state flow in classical elastoplasticity: Application to repeated rolling and sliding Contact[END_REF]. In fact, we consider a continuous medium subjected to a thermomechanical load moving with a velocity V(t) relatively to a frame R = (O, e X , e Y , e Z ). We adopt the frame R =(O , e x , e y , e z ) attached to the load moving. In this frame, the structure is therefore moving with the velocity -V(t). The material derivative of a tensorial quantity B related to the material is given by : Ḃ(x, t) = ∂B ∂t (x, t) + ∇ x B(x, t).v(x, t) with : v(x, t) = v r (x, t) -V(x, t) where x is the geometrical position of the material point in R , v the velocity of the material point relatively to R and v r its velocity relatively to R. Thanks to the hypothesis of infinitesinal transformation in the reference linked to the solid, the term v r (x, t) becomes negligible compared to V(x, t). Hence, the expression of the material derivative of B becomes : Ḃ(x, t) = ∂B ∂t (x, t) -∇ x B(x, t).V(x, t) [1] The assumption of steady state in a frame moving with the loads leads to a timeindependent problem for which all time partial derivatives vanish. So the expression of the material derivative of B becomes : Ḃ(x, t) = -∇ x B(x, t).V(x, t) [2] To numerically solve the problem, we use a frame moving with the loads and replace material derivatives in the governing equations (equilibrium, constitutive law and boundary conditions) by expressions as given above. The periodic stationary method We consider a structure with a periodic geometry subjected to a repeated moving load. The load moves with a constant velocity V e x . Its intensity is constant or varies periodically. 
Although we treat here a rotating load, we choose to present the method for translating load; the two cases are formally identical but the notations are less heavy in the case of translation. By periodic structure (figure 1), we design a structure which is generated by the translation or the rotation of an inhomogeneous material volume (heterogeneous solid made of different materials or containing voids). An elementary heterogeneous volume is called "cell". Figure 1. The vented disc: an example of periodic structure Due to this geometrical and material inhomogeneity, the thermomechanical state of each cell depends on the relative position of the load over the cell. The steady state assumption in the load reference no longer holds. On the other hand, the non stationary behavior is assumed to be periodic. The solution method adopted consists in two main features: -determination of the transient solution in a time period (time necessary for the loading to move along a cell); -use of the steady-state assumption at the cells scale in the reference frame related to the moving load. So the computations involve two stages: the trial transient elastic solution on a time period will be first sought and then integrations along the streamline associated to the cells are performed (as in the "classical" stationary method). These two points are used in the formulation of the problem to be numerically solved: on each time interval with a period T (T is the time necessary for the load to cover the distance X equivalent to the length of a cell along e x ), the response is variable; it is T-periodic in the load reference. In other words, in the load reference, for any physical quantity B and for any point x of the structure: B(x, t) = B(x -Xe x , t -T ) [3] This equation allows the transport of the physical quantities from cell to cell along the streamlines, as in the case of the stationary method. On the other hand, for the transient regime (lasting the time necessary for the load to cover the length of a cell), one has simply Ḃ(x, t) = ∂B ∂t (x, t) and a method similar to the Large time Increment Method [LAD96] is adopted. Formulation of the periodic stationary method in elastoplasticity From the equations (2) and (3), we are able to propose a solution scheme for the periodic stationary method and give the discretized equations of the problem. We just consider here a von Mises elastic-plastic material with a linear kinematic hardening (c) and a yield function (f ) of the following form: f (σ, cε p ) = (devσ -cε p ) : (devσ -cε p )-k = devσ -cε p -k = ξ -k [4] Let us recall that the steady state at the cell scale is assumed; the solution is then entirely determined by the knowledge of the response over a period (t ∈ [0, T ]). This time interval is discretized in m instants corresponding to the number of positions of the load on a cell. Practically, we have the following two-stage algorithm. A global stage consisting in calculating the elastic solution over the whole time interval [0, T ] with given internal variables is first carried out: in fact one performs discrete sequence of elastic calculations for all the positions of the load over a particular cell considered as reference. It is followed by a local stage for the determination of the internal variables by integration along the streamlines, and this is done for all the instants of the interval [0, T ]. This integration scheme using closest point projection, is detailed in the following. 
We denote by j the position of the load on the reference cell, j ∈ {1, ..., m}, ε p j the plastic deformation of the current point of the cell (n). (.) j denotes quantities at a point of the cell (n), for the j position of the load while (.) j (n -1) denotes quantities at the homologous point of the preceding cell (n -1), for the jth position of the load. Within, the periodic stationary method, the plastic deformation ε p is calculated as follows: If j = 1 we define ξ * j = devσ j-1cε p j-1 + 2μΔ(devε) j , one has: -if ξ * j > k, plastification occurs, so ε p j = ε p j-1 + 1 2μ+C 1 -k ξ * j ξ * j -if ξ * j ≤ k, no plastification occurs, so ε p j = ε p j-1 If j = 1 , we define ξ * 1 = devσ m-1 (n -1) -cε p m-1 (n -1) + 2μΔ(devε) 1 one has: -si ξ * 1 > k, plastification occurs, so ε p 1 = ε p m-1 (n -1) + 1 2μ+C 1 -k ξ * j ξ * j -If ξ * 1 ≤ k, no plastification occurs, so ε p 1 = ε p m-1 (n -1) Figure 2. Calculated positions of the load This algorithm has been implemented in the code Castem 2000 for a von Mises elastic-plastic material with a linear kinematic hardening. Application to a vented disc In this section, we present an application of the periodic stationary method to the vented disc (figure 1). Instead of making a realistic simulation of the braking in which thermal effects are proeminent, we choose to illustrate the different kinds of results that can obtained with this method in the case of purely mechanical problems. The dimensions of the disc are the following: external radius R e = 133 mm, internal radius of rubbing surfaces R i = 86.5mm, thickness of these surfaces e = 13 mm. The disc constitutive material is cast iron assumed to be, at room temperature, a von Mises elastic-plastic material with a linear kinematic hardening. Its characteristics are: Young modulus E = 110000MPa, Poisson coefficient ν = 0.3, yield limit in traction σ y = 90 MPa, hardening modulus h = 90000 MPa. The figure 3 shows the adopted mesh and the applied loading. The loading consists in two distributions of hertzian pressure (prescribed at the contact zones between the disc and the pads) with the maximum pressure equal to 500 MPa. This pressure is greater than the ones encountered during real brakings but here, as thermal effects are not taken into account, we choose a greater pressure just for illustration of the capabilities of the method which is interesting only in inelastic cases. Eight load positions are used for the simulation. They are represented with different colors on figure 3. Thanks to the periodicity of the solution in the cells "away" from the load, the mesh is truncated; only five cells are represented. Figure 3. Hertzian pressure distributions on the two surfaces of the vented disc : the eight considered load positions are represented The evolutions of equivalent plastic deformations during the first pass of the loading are shown on figure 4 for the eight positions of the load. The load is moving in the anti-clock-wise. One can notice that the solution is transient and depends on the relative position of the loading to the cell. On the first of figures 4, we observe a plastic deformation induced backwards by the moving loads. On figure 5, one can see the equivalent plastic deformations during the first pass of the loading. The calculation of five successive passes of the loading show that the mechanical stabilized state is reached quickly. The stabilized state is defined by the periodicity of the mechanical response at any material point. 
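To make the per-position update above concrete, the following sketch implements the closest point projection for a von Mises criterion with linear kinematic hardening, together with the transport of the converged state from cell to cell along a streamline. It is an illustration of the state-update logic only: the constants are rough values consistent with the cast-iron data quoted above (μ = E/2(1+ν), k = √(2/3)·σ_y, c of the order of the quoted hardening modulus), and the strain-increment history is a placeholder; in the actual method the strain increments at each load position come from the finite element solution rather than being prescribed.

```python
import numpy as np

# Closest point projection for a von Mises material with linear kinematic
# hardening, applied position by position as in the periodic stationary method.
# The constants are rough values derived from the cast-iron data quoted above;
# c and the loading history are placeholders for illustration.
mu, c, k = 42000.0, 60000.0, 73.5   # MPa

def dev(a):
    return a - np.trace(a) / 3.0 * np.eye(3)

def norm(a):
    return np.sqrt(np.tensordot(a, a))

def position_update(dev_sig, eps_p, d_dev_eps):
    """State update for one position j of the load, from the previous state."""
    xi_trial = dev_sig - c * eps_p + 2.0 * mu * d_dev_eps
    f_trial = norm(xi_trial) - k
    if f_trial > 0.0:                                    # plastic correction
        d_eps_p = f_trial / (2.0 * mu + c) * xi_trial / norm(xi_trial)
    else:                                                # purely elastic step
        d_eps_p = np.zeros((3, 3))
    return dev_sig + 2.0 * mu * (d_dev_eps - d_eps_p), eps_p + d_eps_p

def march_along_streamline(n_cells, d_dev_eps_per_position):
    """Position j = 1 of a cell restarts from the state left at the homologous
    point by the last position over the preceding cell; the other positions
    chain on the previous one, as in the update rules above."""
    dev_sig, eps_p = np.zeros((3, 3)), np.zeros((3, 3))
    eq_plastic_per_cell = []
    for _ in range(n_cells):
        for d_dev_eps in d_dev_eps_per_position:         # the m load positions over one cell
            dev_sig, eps_p = position_update(dev_sig, eps_p, d_dev_eps)
        eq_plastic_per_cell.append(np.sqrt(2.0 / 3.0) * norm(eps_p))
    return eq_plastic_per_cell

# Placeholder loading: a deviatoric shear increment applied then removed over 8 positions.
d = 1.0e-3 * dev(np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]))
print(march_along_streamline(5, [d, d, d, d, -d, -d, -d, -d]))
```

The per-cell output illustrates how the plastic state evolves from pass to pass until it stabilizes, which is the limit-state behaviour discussed in the next section.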
On figure 7, we plot the evolution of equivalent plastic deformations along a constant radius streamline: one can then see that the stabilized state is a plastic shakedown. One can also notice that backward, the plastic deformation becomes periodic, with a period equal to the length of a cell (0.08m). On figure 6, we show the equivalent stresses in the limit state obtained for the fifth pass of the loading. Conclusion In this paper, an extension of the stationary methods to periodic inelastic structures subjected to repeated moving loads is presented. It is based on the hypothesis of the periodicity of the response in a reference frame moving with the loads and the use of an approach similar to the Large Time Increment Method. It permits the directly determination of the mechanical state during a whole pass of the loading and therefore the determination of the limit state in the case of repeated moving loads. Even though only the capabilities of this method have be shown, it is clear that it reduces considerably the computational times and accordingly allows the simulation of complicated structures such as vented discs and the determination of asymptotic state of such structures subjected to repeated loads. Figure 4 .Figure 5 .Figure 6 . 456 Figure 4. Equivalent plastic deformations during the movement of the load along a cell Figure 7 . 7 Figure 7. Evolution of equivalent plastic deformations along a streamline for the five first passes of the loading: case of a plastic shakedown Mac Lan Nguyen-Tajan -Habibou Maitournam Luis Maestro PSA Peugeot Citroën, Direction de la Recherche, Route de Gisy F-78943 Vélizy-Villacoublay maclan.nguyen@mpsa.com LMS, UMR 7649, Ecole Polytechnique, F-91128 Palaiseau cedex habibou@lms.polytechnique.fr Stagiaire de l'ENSTA ABSTRACT. The purpose of the paper is to present a new numerical method suitable for the computation of periodic structures subjected to repeated moving loads. It directly derives from the stationary methods proposed for cylindrical and axisymmetrical structures. Its mains features are the use of a calculation reference related to the moving loads and the periodic property of the thermomechanical response. These methods are developped by PSA and the Ecole Polytechnique, in order to design vented brake discs. In this paper, a brief description of the algorithm is first given and examples of numerical simulations of a vented brake disc are treated. RÉSUMÉ. Cet article porte sur le développement d'une nouvelle méthode de résolution numérique adaptée au calcul de structures périodiques soumises à des chargements mobiles et répétés. Il s'inspire directement des méthodes stationnaires développées pour les structures cylindriques ou axisymétriques. Cette méthode repose sur les principes suivants : le repère de calcul est lié au chargement et non plus à la structure, et la réponse thermomécanique de la structure y est supposée périodique. Cette méthode a été développée par PSA Peugeot Citroën et l'Ecole Polytechnique dans le cadre du dimensionnement des disques de frein ventilés. Dans cet article, on donnera une brève description de l'algorithme puis la simulation mécanique d'un disque de frein ventilé illustrera la méthode. KEYWORDS: periodic steady state algorithm, cyclic moving load, brake disc, thermomechanics. MOTS-CLÉS : algorithme stationnaire périodique, chargement mobile cyclique, disque de frein, thermomécanique.
17,537
[ "3991" ]
[ "7736", "1167" ]
01756172
en
[ "phys" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756172/file/1803.10828.pdf
Raul Ramos Diego Scoca Rafael Borges Merlo Francisco Chagas Marques Luiz Fernando Alvarez Fernando Zagonel email: zagonel@ifi.unicamp.br Study of nitrogen ion doping of titanium dioxide films Keywords: nitrogen ion doping, titanium dioxide, Anatase, transparent conducting oxide, diffusion; electronic transport This study reports on the properties of nitrogen doped titanium dioxide (TiO 2 ) thin films considering the application as transparent conducting oxide (TCO). Sets of thin films were prepared by sputtering a titanium target under oxygen atmosphere on a quartz substrate at 400 or 500°C. Films were then doped at the same temperature by 150 eV nitrogen ions. The films were prepared in Anatase phase which was maintained after doping. Up to 30at% nitrogen concentration was obtained at the surface, as determined by in situ x-ray photoelectron spectroscopy (XPS). Such high nitrogen concentration at the surface lead to nitrogen diffusion into the bulk which reached about 25 nm. Hall measurements indicate that average carrier density reached over 10 19 cm -3 with mobility in the range of 0.1 to 1 cm 2 V -1 s -1 . Resistivity about 3.10 -1 cm could be obtained with 85% light transmission at 550 nm. These results indicate that low energy implantation is an effective technique for TiO 2 doping that allows an accurate control of the doping process independently from the TiO 2 preparation. Moreover, this doping route seems promising to attain high doping levels without significantly affecting the film structure. Such approach could be relevant for preparation of N:TiO 2 transparent conduction electrodes (TCE). Introduction The increasing demand of energy efficiency and cost-effectiveness for display and energy technologies pushes continuously towards the search of new materials to surpass industry standards. In flat panel displays, light emitting devices and some solar cells, efficiency is linked to the performance of the transparent conducting electrodes (TCEs) used which allows front electrical contacts and simultaneously letting visible light in or out of the device. While tindoped indium oxide (ITO) is an industry standard TCE, it presents a high cost linked to indium scarcity. Most alternatives available today, such as ZnO or SnO 2 , has interesting niche applications. New and better TCEs, both in terms of cost and efficiency, are of great interest for several wide or niche applications. Since titanium oxide (TiO 2 ) doped with niobium has been proposed as TCE by Hasegawa group see refs. [1,2], several studies discussed the optical and electrical properties of TiO 2 doped with Nb, Ta, W, and N [3,4,5,6]. Moreover, it has been shown that, by doping TiO 2 with nitrogen, it is possible to reduce its optical gap and favor catalytic activity with visible light, with considerable interest for water splitting, among other applications [7,8]. Since then, several studies explored the properties of nitrogen doped titanium oxides (mainly Anatase and Rutile) prepared in various ways with respect to its optical, catalytic and transport properties. Nitrogen doped TiO 2 has already been synthetized by reactive sputtering and by post treatments with ammonia or ion implantations, among others. Using electron cyclotron resonance plasma sputtering under O 2 and N 2 gases, H. 
Akazawa showed that it is possible to continuously control carrier concentration and obtained films with a resistivity of 0.2 Ω cm and a maximum transparency in the visible of about 80%, but the films were frequently amorphous, while large crystalline grains might favor better conductivity at similar transparency [4,6]. Using reactive d.c. magnetron sputtering in an Ar+O 2 +N 2 gas mixture, N. Martin et al. obtained about 25 Ω cm with about 30% transmission, depositing TiN x O y . Again, in this study, crystal structure was difficult to control and, besides Rutile and Anatase, even Ti 3 O 5 was observed [9]. In another work, J.-M. Chappé et al., also using d.c. reactive magnetron sputtering, prepared TiO x N y films with visible light transmittance ranging from very low to nearly 80%, with a resistivity ranging from 10⁻³ Ω cm to 50 Ω cm and with a complex crystal structure where Anatase was not the majority phase present [10]. Given the inherent difficulty in controlling independently composition and crystal structure/quality in reactive sputtering, splitting the process into two parts is an interesting alternative. In this approach, Anatase or Rutile samples can be prepared and doped a posteriori with nitrogen by, for instance, ion implantation or NH 3 gas [11,12,13]. Using this approach, H. Shen et al. showed that implantation with 200 eV nitrogen ions successfully doped Anatase nanoparticles and enhanced photocatalytic efficiency without changing the crystal structure [13]. Considering that ion doping of TiO 2 could be relevant for TCE preparation, in this work we deposited Anatase thin films at 400 and 500°C and then doped them by low energy nitrogen ion implantation at 150 eV from a simple laboratory ion gun. Such a process allows doping pure Anatase thin films with controllable nitrogen amounts and following their properties closely. By heating the sample during the ion implantation, we could allow the diffusion of nitrogen from the surface into the bulk, thus developing a dopant profile. The results indicate that low energy implantation is an effective technique for TiO 2 doping, one that allows the doping step to be controlled independently from the film preparation.

Experimental Methods

Sample preparation was performed in two sequential steps. First, a Ti target was sputtered using an ion beam (Ion Beam Deposition) to grow a thin film on an amorphous quartz substrate. Argon was used as the inert gas for bombarding the Ti target at an energy of 1.5 keV. During the deposition, a partial pressure of 2.5×10⁻² Pa of oxygen was maintained (the chamber base pressure was about 2×10⁻⁴ Pa). Such a partial pressure results in Anatase films with well-defined x-ray diffraction peaks [14]. During the deposition, the Argon partial pressure in the chamber was about 1×10⁻² Pa. During the second step, nitrogen ions were implanted at low energy, 150 eV, into the thin film surface. The ions were produced in a Kaufman cell fed with 5 sccm of nitrogen and 0.5 sccm of hydrogen, resulting in 2.1×10⁻² Pa and 2.1×10⁻³ Pa partial pressures in the chamber, respectively (Argon and oxygen were not used in this step). Such sample preparation was performed in a custom-built system that features two ion guns (one pointing to a sputter target and the other to the sample holder) in one vacuum chamber that is directly connected to another chamber for in situ X-ray Photoemission Spectroscopy analysis. More details of the deposition system and its capabilities can be found in references [15] and [16]. Hydrogen was used in analogy with ref. [17] (see also references therein) to remove oxygen from the surface to make it more reactive for incoming nitrogen.
Indeed, the formation enthalpy favors TiO 2 over TiN [18] and in principle residual oxygen gas and water vapor in the vacuum chamber could keep the surface partially oxidized preventing nitrogen intake. The ion gun points perpendicularly to the sample surface and is located about 30 cm from the sample. The samples were prepared at different substrate temperatures (for both steps): 400 and 500°C and with different implantation times: 0, 10, 30 and 60 minutes. Film thickness was evaluated by perfilometry and it ranges from 70 to 100 nm. Just after preparation, samples were in situ analyzed in UHV by X-ray Photoemission Spectroscopy (XPS) using Al K radiation. Spectra were fitted using Avantage software. Average inelastic mean free path for Anatase and kinetic energies from 900-1100 eV are estimated as about 2 nm [19]. X-ray diffraction (XRD) was performed using Cu Kand keeping incidence angle at 1°. In this geometry, the average penetration depth is estimated as 0.05 m for Anatase [START_REF] Noyan | Residual Stress: Measurement by Diffraction and Interpretation[END_REF]. Sheet resistance was measured using 4-probe technique. Mobility, resistivity and carrier concentration were determined by Hall measurements using the Van der Pauw method in an Ecopia-3000 device using a 0.55 T permanent magnet. For Hall measurements, indium was used to provide ohmic contacts. Optical transparency measurements were performed in an Agilent 8453 device which uses a CCD detector. Resistance versus temperature measurement was performed for the sample deposited at 500°C and implanted for 60 minutes (at the same temperature). The measurement was carried out in a CTI Cryodine closed-cycle helium refrigerator in the temperature range from 80 K to 300 K. The electrical data was acquired using a Keithley model 2602A SourceMeter and the indium contacts previously used for Hall measurement, in the van der Pauw geometry. A constant current of 10 A was applied between two contacts and the voltage was measured between the other two, in a parallel configuration. A complete thermal loop was carried out to confirm the reproducibility of the data. Sample's morphology was studied by Atomic Force Microscopy (AFM) and Transmission Electron Microscopy (TEM). TEM analysis was performed in a JEOL 2100F TEM equipped with a Field Emission Gun (FEG) operating at 200 kV with an energy resolution of about 1 eV. EELS was obtained using a Gatan GIF Tridiem installed in this TEM and Gatan Digital Micrograph routines were used for quantification. The data were acquired in Scanning Transmission Electron Microscopy (STEM) mode in the form of spectrum lines (the electron beam is focused on the sample and a spectrum is acquired for each position along a line forming a bidimensional dataset). Topographic images of the sample's surface were taken with an Innova Bruker Atomic Force Microscope (AFM) in non-contact mode. Results and Discussions Composition and Structural characterization The effectiveness of the ion implantation at 150 eV was demonstrated by the presence of large amounts of nitrogen at the surface, as observed by in situ XPS. Figure 1 shows XPS spectra for the sample prepared at 500°C for 60 minutes, with similar results for all other samples. Main features observed by XPS are expected for TiO 2 and for TiN. In situ XPS on TiO 2 samples grown at 400 and 500°C (not shown) are similar to those in ref [14] and [START_REF] Scoca | [END_REF], typical for Anatase film close to stoichiometry. 
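As a reference for how the van der Pauw data quoted in the results below reduce to resistivity, the short sketch below solves the van der Pauw equation for the sheet resistance from the two four-terminal resistances and multiplies by the film thickness. The input resistances are hypothetical values chosen for illustration, not measured data from this work.

```python
import numpy as np
from scipy.optimize import brentq

def sheet_resistance(R_a, R_b):
    """Solve the van der Pauw relation exp(-pi*R_a/R_s) + exp(-pi*R_b/R_s) = 1
    for the sheet resistance R_s (ohm per square)."""
    f = lambda R_s: np.exp(-np.pi * R_a / R_s) + np.exp(-np.pi * R_b / R_s) - 1.0
    lo = 0.999 * np.pi * min(R_a, R_b) / np.log(2.0)   # the root lies between these brackets
    hi = 1.001 * np.pi * max(R_a, R_b) / np.log(2.0)
    return brentq(f, lo, hi)

# Hypothetical four-terminal readings (ohm) and a 90 nm film thickness in cm.
R_a, R_b = 11.0e3, 13.0e3
t = 90e-7
R_s = sheet_resistance(R_a, R_b)
print(f"R_s ~ {R_s / 1e3:.1f} kohm/sq   rho ~ {R_s * t:.2f} ohm.cm")
```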
It must be noted that absolute binding energies are not accurately known due to some degree of uncontrolled spectral shift that is attributed to sample charging. From the indicated decomposition into several proposed chemical bounds, it is possible to observe that nitrogen concentration is similar to that of oxygen and that a TiO x N y alloy was created at the surface (TiN and TiO 2 components are observed in the Ti 2p spectrum). It is noteworthy in Figure 1(c) the presence of two XPS peaks for N 1s, one, smaller, at higher binding energy, and another, bigger, close to 396 eV (that is in turn composed of two peaks). Such smaller and bigger peaks are attributed to interstitial nitrogen and substitutional nitrogen, respectively [12,13,22,23]. In our case, interstitial N accounts to about 10% of the total amount of nitrogen observed on the surface, a much lower value when compared to ref. [8], which used NH 3 as nitrogen source, or ref. [13], which used 200 eV nitrogen ions without hydrogen. Therefore, depending on the incorporation route, different chemical locations are possible. This difference is relevant since depending on nitrogen site different diffusion mechanisms apply [12]. To evaluate if hydrogen was significantly affecting N 1s spectra, a sample was prepared without hydrogen gas during the implantation step. N1s peaks for samples prepared with and without hydrogen at 500°C and implanted for 60 minutes are shown in Figure 2. The spectra show that even without hydrogen the peak is still present and with similar (although smaller) ratio with respect to main N 1s peak. This shows that it does not depend on presence of hydrogen during the nitriding process, in contrast to literature suggestion [24]. However, as discussed below, total nitrogen concentration is smaller without hydrogen, indicating that hydrogen contributes to nitrogen incorporation at the surface, possibly by removing oxygen. [ To support the given interpretation of XPS results, SRIM simulations have been performed [25,[START_REF]The Stopping and Range of Ions in Matter[END_REF]. Simulations considered nitrogen ions (N+) on Anatase. Average penetration depth is about 0.9 nm while 90% of implanted nitrogen ion reaches a depth within 1.7 nm. These simulations indicate that XPS is probing exactly the implanted region and hence is suitable to investigate the nitrogen intake by the sample from the ion beam. Following the procedures detailed in references [START_REF] Scofield | [END_REF] and [28], we used XPS results to calculate the elemental concentrations of surface components. The results are shows in Figure 3 for samples prepared at 400 and at 500°C with implantation times ranging from 0 to 60 minutes. It is observed that in the first few minutes of implantation, a significant nitrogen concentration builds up at the surface and after the concentration increases yet to reach about 33at.%. This is explained by the high reactivity of the nitrogen ion beam and the low diffusion coefficient of nitrogen into the interior of the thin film. In such scenario, we consider that a high nitrogen concentration builds up during the first moments and is maintained by the ion beam creating a high nitrogen chemical potential at the surface. This nitrogen concentration will be the driving force for nitrogen diffusion into the thin film. The extent of the diffusion will depend mainly on the temperature but also on several details of the thin film microstructure, such as vacancies, grain boundaries, stress, and so on. 
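Returning briefly to the quantification behind Figure 3: the atomic concentrations follow the usual area-over-sensitivity-factor normalisation of the cited procedure. A minimal sketch is given below; the peak areas and relative sensitivity factors are placeholders for illustration, not the values actually used for Figure 3.

```python
# XPS quantification sketch: the atomic fraction of element i is
# (A_i/S_i) / sum_j (A_j/S_j), with A the background-subtracted peak area and
# S the relative sensitivity factor.  All numbers below are placeholders.
areas = {"Ti 2p": 15000.0, "O 1s": 20000.0, "N 1s": 6000.0}   # hypothetical peak areas
rsf   = {"Ti 2p": 2.00,    "O 1s": 0.78,    "N 1s": 0.48}     # hypothetical sensitivity factors

weights = {element: areas[element] / rsf[element] for element in areas}
total = sum(weights.values())
for element, w in weights.items():
    print(f"{element}: {100.0 * w / total:.1f} at.%")
```

We now return to the diffusion picture outlined in the previous paragraph.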
This process is in tight analogy to the plasma or ion beam nitriding of steels at low temperatures where nitrogen diffusion is also slow [29,30]. It is important to note that for both studied temperatures, with sufficient nitriding time, the surface builds a titanium oxynitride alloy with stoichiometry close to TiO 1 N 1 (note such result applies only at the outer 2 nm of the thin film surface). It is also interesting that the sample prepared without hydrogen had a nitrogen concentration of only 25at.% while the sample prepared with hydrogen in the same conditions (500°C -60 minutes) had 33at.% of nitrogen at the surface. Again this indicates that hydrogen may favor oxygen removal opening sites for nitrogen chemical adsorption and reaction, even if nitrogen arrives at 150eV at the surface and the sample is in high vacuum. X-ray diffractograms are shown in Figure 4 for samples prepared at 400 and 500° without nitrogen implantation and implanted for 10, 30 and 60 minutes, as before. It is observed that all samples display peaks associated with Anatase phase with considerable intensity indicating the crystalline nature of the thin films. Moreover, implantation with Nitrogen and Hydrogen does not disturb the crystal structure, similarly to what has been reported in the literature for 200eV nitrogen implantation into Anatase thin films [13]. The position of the Anatase (101) peak remains within (25.30±0.05)° for all diffractograms while reference Anatase ( 101) is expected at 25.33° according to ICSD 9852. It must be noted that samples are kept at deposition temperature during the implantation and are therefore annealed what could affect their crystalline structure. It is also noteworthy that peak ratios do not agree with expected values for Anatase powder and hence the films should have some texture [31]. Transmission Electron Microscopy was applied to determine the extent of nitrogen diffusion into the film from the surface. For that, we considered only the sample prepared at 500°C for 60 minutes, considering that other samples would have a shallower nitrogen penetration depth. Figure 5 shows a cross-section of the sample. Apart from the amorphous quartz substrate and the protective coating used for FIB lamella preparation, we can observe the N implanted TiO 2 thin film in two layers, on top a layer that apparently has been modified by the implantation/diffusion process and on the bottom the pristine Anatase film. HRTEM images indicate the presence of atomic planes and grains from bottom to top of the thin film, again confirming the Anatase film preserved its crystal structure even after nitrogen shallow implantation (shallower than 2nm from SRIM simulations) and subsequent diffusion. Electron energy loss spectroscopy was used to detect nitrogen and determine its profile in the sample cross-section. Figure 6(a) shows the profiles of nitrogen, oxygen and titanium from the surface to the interior of the thin film. The detection of nitrogen was difficult due to sample damage: apparently the electron beam removed nitrogen during beam exposure. For this reason (and also because of diffraction and thickness effects were not taken into account), the results in Figure 6(a) may underestimate the original concentration (due to damage even in reduced dose measurements) or other systematic error (due to the other mentioned effects). However, it can accurately be considered semi-quantitatively to measure the diffusion depth. 
In Figure 6(a), a complementary error function fitting is added to the nitrogen profile as a thin line. Despite the noise, it is clear that nitrogen is detected down to 25 nm or so (where estimated nitrogen concentrations decreases to 10% of its surface value). The presence of nitrogen is also clear in the fine-structure of Titanium and oxygen absorption edges, shown in Figure 6 (b). Again, a transition from one edge shape to the other is observed around 30 nm from the surface. Moreover, it is interesting to note that the obtained nitrogen profile didn't affect the crystal structure as observed by RH-TEM (Figure 5 (b)), that is, no amorphous layer was found despite the observed nitrogen concentration. Indeed, in some studies, the presence of nitrogen in reactive sputtering leads to amorphous N:TiO 2 films [4]. Atomic force microscopy was used to gather a broader idea of the growth and also a clearer picture of the surface before and after ion implantation. The average surface roughness is about 1 nm for all samples, indicating a smooth growth of TiO 2 Anatase thin film by reactive sputtering and also that implantation at 150eV does not induce surface roughening. Illustrative results, for samples growth at 400°C without implantation and implanted for 30 minutes, are shown in Figure 7. Without implantation, the surface shows small grains having about 40-60 nm in diameter, but the height difference from peak to valley is just about 3 nm. After implantation, crystal grains are partially revealed, as indicated by arrows in Fig. 7(b), and their diameter is in fact about 200 to 300 nm, a result more consistent with TEM observations. As we consider that the ion implantation at 150eV or the annealing time didn't change the crystal structure, grains should have diameters in the hundreds of nanometers from the beginning of the deposition, but ion polishing was necessary to reveal the actual grains to preferential sputtering of different crystal orientations [32]. From XRD, XPS, TEM and AFM results, it is possible to form the following picture of the implantation process: the ion beam is highly reactive and, with the help of hydrogen, creates a high surface chemical potential which is, together with the process temperature, the driving force for nitrogen diffusion into the thin TiO 2 film. Considering that a roughly constant nitrogen concentration builds up in the first minutes, a diffusion coefficient of nitrogen on anatase at 500°C can be calculated as approximately 210 -8 m 2 s -1 , considering the solution of Fick's second law for a constant surface concentration. R. G. Palgrave et al. fond similar values (2.5310 -8 m 2 s -1 ) at 675° for rutile and also, analyzing very accurate nitrogen concentration profiles, reported different diffusion coefficients, indicating more than one diffusion route [12]. Moreover, they show that interstitial nitrogen diffuses much faster than substitutional nitrogen. In this case, the quantitative concentration of nitrogen obtained by EELS should be compared to the interstitial nitrogen concentration, which, from our in situ XPS results, is about 3at.%, meaning a better agreement between EELS nitrogen concentration near the surface and XPS results. It must be noted that since only about half to one third of the film is actually doped, the average nitrogen concentration is probably closer to 1-2at.%. 
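The depth scale just discussed can be turned into an effective diffusion coefficient under the same constant-surface-concentration assumption: for c(x, t) = c_s erfc(x/(2√(Dt))), the depth x10 at which the concentration falls to 10% of its surface value after the treatment time t fixes D. The sketch below evaluates this for the 25-30 nm depth and the 60 minute treatment discussed above; it is intended as an order-of-magnitude consistency check only, under exactly the stated assumptions.

```python
from scipy.special import erfcinv

# Effective diffusion coefficient from the erfc solution of Fick's second law
# with constant surface concentration: c(x, t)/c_s = erfc(x / (2*sqrt(D*t))).
# x10 is the depth where the concentration drops to 10% of the surface value.
t = 60 * 60.0                                   # 60 minutes, in seconds
for x10_nm in (25.0, 30.0):
    x10 = x10_nm * 1e-9                         # depth in metres
    L = x10 / (2.0 * erfcinv(0.1))              # diffusion length sqrt(D*t)
    D = L ** 2 / t
    print(f"x10 = {x10_nm:.0f} nm  ->  D_eff ~ {D:.1e} m^2/s")
```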
Electrical characterization Very generally, the effect of nitrogen implantation and diffusion into the Anatase thin film can be monitored by 4 probe electrical resistivity measurements. Such results, converted into sheet resistance, are shown in Figure 8 (a). It is observed that sheet resistance drops by 7 orders of magnitude and reaches 54.7k/. These results are in close agreement with resistivity, , as measured by van der Pauw method using indium contacts, shown in Figure 8(b). The films resistivity could be as low as 310 -1 cm, for the sample prepared at 500°C and implanted for 60 minutes (average nitrogen concentration about 1-2%). This resistivity is about 10 fold lower than reported for TiO 1.88 N 0.12 (4at.% of nitrogen) prepared by plasma-assisted molecular beam epitaxy [33]. Moreover, the presented results are very similar to N:TiO 2 prepared by electron cyclotron resonance and by reactive sputtering in refs [4,6] (with slightly lower light transmittance, see below) but higher than TiO 2 doped with Nb, Ta or W, which may show resistivity much lower than 10 -2 cm [5,34,35]. In Figures 8 and9 the symbols cover the estimated uncertainty bars. Mobility and carrier concentration results are shown in Figure 9. Highest carrier concentration is observed for sample implanted at 500°C and for 60 minutes and reaches up to 610 19 cm -3 . Mobility values measured are always lower than 1 cm 2 v -1 s -1 (and just above 0.1 cm 2 v -1 s -1 ), which is much lower (at least by one or even two orders of magnitude) than usual TCOs like ITO or SnO 2 [36]. Such mobility is however similar to reported to Nb doped TiO 2 [34]. Note that resistivity and carrier concentration values are calculated considering the full film thickness and, as EELS Nitrogen profile showed (Figure 6(a)), nitrogen concentration is far from homogenous along the thin film. If one takes into account that nitrogen is present in about one third of the films (30 nm instead of 90 nm), then carrier concentration would be in such region and in average about 210 20 cm -3 (it should be higher close to the surface). Such corrected carrier concentration starts to be similar to values obtained in the literature as 310 20 cm -3 for Ta:TiO 2 and 10 21 cm -3 for Nb:TiO 2 [35,34]. Industry standard TCEs have again similar carrier concentration values, such as 1.510 20 cm -3 for FTO, or about 20 21 cm -3 for ITO and AZO [37,38,39]. Similarly, sheet resistance (or resistivity) in the doped region would be 3 fold smaller than in the thin film average. Moreover, nitrogen concentration gradient may explain low mobility since the region more relevant to electrical measurements has higher carrier concentration, which in turn may reduce mobility. The temperature dependence of the resistance is shown in Figure 10 (a) for the sample prepared at 500°C and implanted for 60 minutes. A very small hysteresis was observed. Failure to fit data in an Arrhenius plot (LnR vs. 1/T) denoted that the resistance is not governed by thermal activation, contrary to plasma-assisted molecular beam epitaxy N:TiO 2 samples that contained mostly substitutional nitrogen [33]. However, the resistance scales as LogR α T -1/2 , as shown in Figure 10 (b). This suggests that the conduction mechanism close to room temperature is variable range hopping (VRH). For this regime the resistivity should follow: ρ(T) = ρ 0 exp[T 0 /T] p , ( 1 ) where p = 1/4 for Mott (Mott-VRH) [40] and p = 1/2 for Efros and Shklovskii (ES-VRH) [41]. 
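Before examining which hopping exponent describes the data, it is useful to record the simple relations behind the numbers quoted earlier in this section: ρ = 1/(q n μ) links the average Hall quantities, R_s = ρ/t converts resistivity to sheet resistance, and rescaling the thickness to the roughly 30 nm doped depth gives the corrected carrier density quoted above. The sketch below uses rounded values taken from the text.

```python
# Zero-field transport relations behind the numbers quoted above; inputs are
# rounded values from the text (best film: 500 C, 60 minutes of implantation).
q = 1.602e-19                     # elementary charge (C)
rho = 0.3                         # resistivity, ohm.cm
n_avg = 6e19                      # average carrier density over the full thickness, cm^-3
t_film, t_doped = 90e-7, 30e-7    # film thickness and doped depth, cm

mu = 1.0 / (q * n_avg * rho)              # Hall mobility, cm^2 V^-1 s^-1
R_s = rho / t_film                        # sheet resistance, ohm per square
n_doped = n_avg * t_film / t_doped        # carrier density if only the doped layer conducts

print(f"mobility         ~ {mu:.2f} cm^2/Vs")
print(f"sheet resistance ~ {R_s / 1e3:.0f} kohm/sq")
print(f"n (doped layer)  ~ {n_doped:.1e} cm^-3")
```

With these orders of magnitude in hand, we return to the temperature dependence of the resistance.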
Both mechanisms were observed in ion implanted TiO 2 single crystals [42] and disordered TiO 2 thin films [43,44] in a wide temperature range. To determine which one of the mechanisms is dominant in our sample we used the method proposed by Zabrodskii and Zinoveva [45] to obtain the exponent p, where w(T) = -∂log(R)/∂log(T) and log(w) = log(pT0 p) -p log(T). By plotting log(w) versus log(T) we can find the value of the exponent p from the slope of the curve. As depicted in the insert of Figure 10(b), for T > 235 K the curve is fitted with p = 0.488 ± 0.007, very close to value expected for ES-VRH conduction mechanism. For lower temperatures, the data diverge, indicating a change in the conduction mechanism. Further study is necessary to understand this behavior but is outside the scope of this work. Optical Characterization The UV-Vis-NIR light transmission spectra for the studied samples are shown in Figure 11. The transmission spectra for amorphous quartz (used as substrate) and undoped anatase prepared at 400 and 500° are also shown. Interference fringes are observed for the thin films and the maximum transmission is in the range from 500 to 600 nm (green). It is observed that undoped anatase have a transmission maximum very similar to amorphous quartz and that by doping the thin films the transmission falls from about 90% to 85% (with respect to air). Such observed transmission is better than some literature results for doped TiO 2 with similar resistivity, as indicated above. [4,6,34]. The transmission curves were simulated (not shown) using the method described in ref [ 46 ] and the general shape is very well described considering only the thickness and refraction indexes of the film and substrate. Transmission spectra measured further into the IR up to 3000 nm (not shown) are still featureless with only one absorption region near 2720 nm due to the quartz substrate. Absorption spectra can be used to determine the optical band-gap. Taking in account that Anatase has an indirect band-gap and following the procedure indicated in [47], the optical band gap can be obtained by plotting the square-root of the absorption coefficient as function of the energy and extrapolating the absorption edge at high energies [48]. Figure 12 shows the results for two extreme cases: undoped Anatase thin film prepared at 500°C and nitrogen implanted for 60 minutes also prepared at 500°C. Optical band-gap in both cases is about 3.29 eV, in agreement with Anatase value [47,49]. This indicates that the doped region does not affect significantly the overall light absorption edge. Similar results were obtained for all other samples (not shown). Such result is in agreement to literature reports that indicate that gap narrowing is related to substitutional nitrogen, which in our case could be restricted to the surface. Interstitial nitrogen, on the other hand, does not reduce band-gap [12,50,51]. Shelf stability Finally, the shelf stability was evaluated by measuring the resistivity, mobility and carrier concentration on the interval of some days for the sample implanted for 30 minutes at 500°C (without any surface protection/coating). The results, shown in Figure 13, indicate that the films maintains its resistivity with only a 40% resistivity increase, despite the fact that TiO 2 is thermodynamically favorable with respect to TiN. It is observed that the resistivity increases slightly from (1.8±0.2) to (2.5±0.3) cm, see Figure 13 (a). 
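The exponent extraction described earlier in this section (w(T) = -∂log R/∂log T, with log w linear in log T and slope -p) is straightforward to reproduce. The sketch below applies it to synthetic ES-VRH data, where R0 and T0 are arbitrary illustrative values, and recovers p ≈ 1/2, mirroring the fit shown in the inset of Figure 10(b).

```python
import numpy as np

# Zabrodskii-Zinoveva reduction: for R(T) = R0*exp[(T0/T)^p],
# w(T) := -dlnR/dlnT = p*(T0/T)^p, so log(w) vs log(T) is a line of slope -p.
# R0 and T0 below are arbitrary illustrative values for synthetic ES-VRH data.
R0, T0, p_true = 50.0, 4000.0, 0.5
T = np.linspace(235.0, 300.0, 60)
R = R0 * np.exp((T0 / T) ** p_true)

lnT, lnR = np.log(T), np.log(R)
w = -np.gradient(lnR, lnT)                           # numerical -dlnR/dlnT
slope, _ = np.polyfit(np.log10(T), np.log10(w), 1)
print(f"fitted p = {-slope:.3f}  (input value {p_true})")
```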
This change is accompanied by a decrease of carrier density and an increase in mobility, as shown in Figure 13(b) and (c). Such changes could be due to surface oxidation that would displace nitrogen and reduce its concentration. However, oxygen diffusion would be too slow to displace nitrogen deeper in the thin film. The stability of this N:TiO2 film, even if not subjected to heat or UV light, is interesting with respect to the literature [52]. Further study is needed to compare the stability of N:TiO2 to that of other TCOs [53]. 
Conclusions 
In summary, a comprehensive study showed that, by implanting 150 eV nitrogen and hydrogen ions into Anatase films, it is possible to build up nitrogen surface concentrations of ~33%, which drive nitrogen diffusion into the volume of the film. For the sample prepared at 500°C and implanted for 60 minutes, the film resistivity could be as low as 3×10^-1 Ω·cm while transparency at 550 nm is about 85%. In this case, nitrogen diffusion could reach about 25 to 30 nm deep into the thin film. Note that 150 eV nitrogen ions are readily available with simple laboratory ion guns. The proposed two-step deposition and doping technique could, as planned, provide Anatase thin films with a nitrogen-doped zone near the surface. Moreover, carrier densities and conductivities similar to other established TCOs could be obtained. These results show the effectiveness of nitrogen diffusion into the Anatase film from the surface, driven by the obtained nitrogen surface concentration and the applied temperature. Clearly, it was not possible to dope the whole film with nitrogen or to create a homogeneously doped sample at 500°C and 60 minutes of implantation. However, by adjusting the nitriding (implantation) time properly, the desired thin film properties could be obtained. Indeed, the presented results indicate that, by doping for longer times until the whole thin film is doped, it could be possible to obtain resistivities lower than 10^-1 Ω·cm. This study supports the interpretations that interstitial nitrogen has a higher binding energy in XPS, that it diffuses faster in Anatase (with respect to nitrogen in substitutional sites) and that it does not affect the Anatase optical band-gap. Finally, low energy ion doping using simple ion guns can be applied to Anatase prepared by other means, even colloidally synthesized nanoparticles. The high reactivity of low energy nitrogen ions associated with hydrogen ions could, we speculate, also be effective for other kinds of Anatase samples. 
Figure 1: In situ X-ray photoemission spectra from the sample grown at 500°C and implanted (at the same temperature) for 60 minutes with 150 eV nitrogen ions. The spectra include decomposition into components for the different chemical bonds expected in the sample. 
Figure 2: X-ray photoemission spectra for samples prepared with and without hydrogen in the gas feed to the ion gun. The peak associated with nitrogen in interstitial sites is observed in both samples. 
Figure 3: Nitrogen, oxygen and titanium concentrations obtained by in situ XPS for samples prepared at 400 and 500°C for several implantation times. Lines are a guide to the eyes. Nitrogen concentration reaches about 33 at.%. 
Figure 4: Diffractograms from grazing incidence X-ray diffraction for TiO2 films prepared at 400 and 500°C and implanted for the time indicated on each curve. Anatase lines are indicated at the bottom according to ICSD 9852. 
Figure 5: TEM cross-section micrograph from the sample prepared at 500°C and implanted for 60 minutes. (a) A modified region at the surface is observed. (b) HR-TEM shows atomic planes from small grains from the bottom to the top of the thin film. The insert shows a SAED pattern obtained nearby on a region with larger grains. 
Figure 6: (a) Nitrogen, oxygen and titanium semi-quantitative profile determined by STEM-EELS on a cross-section lamella. A complementary error function fit was added to the nitrogen profile. (b) Averaged EEL spectra of the titanium 2p and oxygen 1s absorption edges indicating the difference from the upper (nitrogen-doped) layer to the bottom (pristine) TiO2. 
Figure 7: AFM images of the surface of samples implanted for 0 minutes (undoped) and 30 minutes, both prepared at 400°C, are shown in (a) and (b), respectively. Ion implantation partially reveals the grains by preferential sputtering. Arrows in (b) indicate grain boundaries. 
Figure 8: (a) Sheet resistance and (b) resistivity for all samples measured by 4-probe and van der Pauw methods, respectively. The results are in close agreement. 
Figure 9: Hall mobility and carrier concentration measured by the van der Pauw method. Carrier density is shown in open symbols while solid symbols show carrier mobility. 
Figure 10: (a) Resistance versus temperature for the film prepared at 500°C and implanted for 60 minutes. A small hysteresis was observed. (b) Logarithmic plot of resistance versus T^(-1/2) showing a linear dependency (blue line) at high temperature. The inset shows the double-log plot of the function w(T) versus T and the value of the exponent "p", as determined from the slope of the curve. 
Figure 11: Light transmission for samples prepared at 500 and 400°C. The amorphous quartz substrate and an undoped anatase thin film are also shown. 
Figure 12: Square root of the absorption coefficient for undoped and 60-minute N-doped Anatase films prepared at 500°C. In both cases, the gap is about 3.29 eV. 
Figure 13: Resistivity, carrier concentration and mobility during shelf storage of the sample nitrided for 30 minutes and prepared at 500°C. 
Acknowledgments 
Part of this work was supported by FAPESP, projects 2014/23399-9 and 2012/10127-5. TEM experiments were performed at the Brazilian Nanotechnology National Laboratory (LNNano/CNPEM).
33,697
[ "897650", "985982", "771755", "2070" ]
[ "246260", "246260", "246260", "246260", "246260", "246260" ]
01698582
en
[ "info" ]
2024/03/05 22:32:10
2017
https://ensta-bretagne.hal.science/hal-01698582/file/Relating%20Student%2C%20Teacher%20and%20External%20Assessments%20in%20a%20Bachelor%20Capstone%20Project.pdf
Keywords: process assessment, competencies model, capstone project The capstone is arguably the most important course in any engineering program because it provides a culminating experience and is often the only course intended to develop non-technical, but essential skills. In a software development, the capstone runs from requirements to qualification testing. Indeed, the project progress is sustained by software processes. This paper yields different settings where students, teachers and third-party assessors performed [self-] assessment and the paper analyses corresponding correlation coefficients. The paper presents also some aspects of the bachelor capstone. A research question aims to seek if an external process assessment can be replaced or completed with students' self-assessment. Our initial findings were presented at the International Workshop on Software Process Education Training and Professionalism (IWSPETP) 2015 in Gothenburg, Sweden and we aimed to improve the assessment using teacher and third-party assessments. Revised findings show that, if they are related to curriculum topics, students and teacher assessments are correlated but that external assessment is not suitable in an academic context. Introduction Project experience for graduates of computer science programs has the following characteristic in the ACM Computer Science Curricula [START_REF]Computer Science Curricula -Curriculum Guidelines for Undergraduate Degree Programs in Computer Science[END_REF]: "To ensure that graduates can successfully apply the knowledge they have gained, all graduates of computer science programs should have been involved in at least one substantial project. […] Such projects should challenge students by being integrative, requiring evaluation of potential solutions, and requiring work on a larger scale than typical course projects. Students should have opportunities to develop their interpersonal communication skills as part of their project experience." The capstone is arguably the most important course in any engineering program because it provides a culminating experience and is often the only course used to develop non-technical, but essential skills [START_REF]The glossary of education reform[END_REF]. Many programs run capstone projects in different settings [START_REF] Dascalu | Computer science capstone course senior projects: from project idea to prototype implementation[END_REF][START_REF] Umphress | Software process in the classroom: the Capstone project experience[END_REF][START_REF] Karunasekera | Preparing software engineering graduates for an industry career[END_REF][START_REF] Vasilevskaya | Assessing Large-Project Courses: Model, Activities, and Lessons Learned[END_REF][START_REF] Bloomfield | A service learning practicum capstone[END_REF][START_REF] Goold | Providing process for projects in capstone courses[END_REF]. The capstone project is intended to provide students with a learning by doing approach about software development, from requirements to qualification testing. Indeed, the project progress is sustained by software processes. Within the ISO/IEC 15504 series and the ISO/IEC 330xx family of standards, process assessment is used for process improvement and/or process capability determination. Process assessment helps students to be conscious about and improve what they are doing. Hence, a capstone teacher's activity is to assist students with appreciation and guidance, a task that relies on the assessment of students' practices and students' products. 
This paper yields different settings where students, teachers and third-party assessors performed [self-] assessment and analyses correlation coefficients. Incidentally, the paper presents some aspects of the bachelor capstone project at Brest University. Data collection started 3 years ago. Initial findings were presented in [START_REF] Ribaud | Process Assessment Issues in a Bachelor Capstone Project[END_REF]. The paper structure is: section 2 overviews process assessment, section 3 presents different settings we carried process assessments; we finish with a conclusion. 2 Process assessment Process Reference Models Most software engineering educators will agree that the main goal of the capstone project is to learn by doing a simplified cycle of software development through a somewhat realistic project. For instance, Dascalu et al. use a "streamlined" version of a traditional software development process [START_REF] Dascalu | Computer science capstone course senior projects: from project idea to prototype implementation[END_REF]. Umphress et al. state that using software processes in the classroom helps in three ways: 1 -processes describe the tasks that students must accomplish to build software; 2 -processes can give the instructor visibility into the project; 3 -processes can provide continuity and corporate memory across academic terms [START_REF] Umphress | Software process in the classroom: the Capstone project experience[END_REF]. Consequently, the exposition to some kind of process assessment is considered as a side-effect goal of the capstone project. It is a conventional assertion that assessment drives learning [START_REF] Dollard | Personality and psychotherapy; an analysis in terms of learning, thinking, and culture[END_REF]; hence process assessment drives processes learning. Conventionally, a process is seen as a set of activities or tasks, converting inputs into outputs [START_REF]Systems and software engineering --Software life cycle processes[END_REF]. This definition is not suited for process assessment. Rout states that "it is of more value to explore the purpose for which the process is employed. Implementing a process results in the achievement of a number of observable outcomes, which together demonstrate achievement of the process purpose [START_REF] Rout | The evolving picture of standardisation and certification for process assessment[END_REF]." This approach is used to specify processes in a Process Reference Model (PRM). We use a small subset of the ISO/IEC 15504- Ability model From an individual perspective, the ISO/IEC 15504 Exemplar Process Assessment Model (PAM) is seen as a competencies model related to the knowledge, skills and attitudes involved in a software project. A competencies model defines and organizes the elements of a curriculum (or a professional baseline) and their relationships. During the capstone project, all the students use the model and self-assess their progress. A hierarchical model is easy to manage and use. We kept the hierarchical decomposition issued from the ISO/IEC 15504 Exemplar PAM: process groupsprocessbase practices and products. A competency model is decomposed into competency areas (mapping to process groups); each area corresponding to one of the main division of the profession or of a curriculum. Each area organizes the competencies into families (mapping to processes). A family corresponds to main activities of the area. 
Each family is made of a set of knowledge and abilities (mapping to base practices), called competencies; each of these entities is represented by a designation and a description. The ability model and its associated tool eCompas have been presented in [START_REF] Ribaud | Towards an ability model for SE apprenticeship[END_REF]. Process assessment The technique of process assessment is essentially a measurement activity. Within ISO/IEC 15504, process assessment has been applied to a characteristic termed process capability, defined as "a characterization of the ability of a process to meet current or projected business goals" [START_REF]Information technology --Process assessment --Part 5: An exemplar software life cycle process assessment model[END_REF]. It is now replaced in the 330xx family of standards by the larger concept of process quality, defined as "ability of a process to satisfy stated and implied stakeholders needs when used in a specific context [START_REF]Information technology --Process assessment --Concepts and terminology[END_REF]. In ISO/IEC 33020:2015, process capability is defined on a six point ordinal scale that enables capability to be assessed from the bottom of the scale, Incomplete, through the top end of the scale, Innovating [START_REF]Information technology --Process assessment --Process measurement framework for assessment of process capability[END_REF]. We see Capability Level 1, Performed, as an achievement: through the performance of necessary actions and the presence of appropriate input and output work products, the process achieves its process purpose and outcomes. Hence, Capability Level 1 will be the goal and the assessment focus. If students are able to perform a process, it denotes a successful learning of software processes, and teachers' assessments rate this capability. Because we believe that learning is sustained by continuous, self-directed assessment, done by teachers or a third-party, the research question aims to state how students' self-assessment and teacher's assessment are correlated and if self-assessment of BPs and WPs is an alternative to external assessment about ISO/IEC 15504 Capability Level 1. Obviously, the main goal of assessment is students' ability to perform the selected processes set. 3 The Capstone Project Overview Schedule The curriculum is a 3-year Bachelor of Computer Science. The project is performed during two periods. The first period is dispatched all the semester along and homework is required. The second period (2 weeks) happens after the final exams and before students' internship. Students are familiar with the Author-Reader cycle: each deliverable can be reviewed as much as needed by the teacher that provides students with comments and suggestions. It is called Continuous Assessment in [START_REF] Karunasekera | Preparing software engineering graduates for an industry career[END_REF][START_REF] Vasilevskaya | Assessing Large-Project Courses: Model, Activities, and Lessons Learned[END_REF]. System Architecture The system is made of 2 sub-systems: PocketAgenda (PA) for address books and agenda management and interface with a central directory; WhoIsWho (WIW) for managing the directory and a social network. PocketAgenda is implemented with Java, JSF relying on an Oracle RDBMS. WhoIsWho is implemented in Java using a RDBMS. Both sub-systems communicate with a protocol to establish using UDP. The system is delivered in 3 batches. Batch 0 established and analyzed requirements. 
Batch 1 performed collaborative architectural design, separate client and server development, integration. Batch 2 is focused on information system development. Students consent Students were advised that they can freely participate to the experiment described in this paper. The class contains 29 students, all agreed to participate; 4 did not complete the project and do not take part to the study. Students have to regularly update the competencies model consisting in the ENG process group, the 6 processes above and their Base Practices and main Work Products and self-assess on an achievement scale: Not -Partially -Largely -Full. There will be also teacher and third-party assessments that will be anonymously joined to self-assessments by volunteer students. Batch 0 : writing and analyzing requirements Batch 0 is intended to capture, write and manage requirements through use cases. It is a non-technical task not familiar to students. In [START_REF] Bloomfield | A service learning practicum capstone[END_REF], requirements are discussed as one of the four challenges for capstone projects. Students use an iterative process of writing and reviewing by the teacher. Usually, 3 cycles are required to achieve the task. Table 1 presents the correlation coefficient r between student and teacher assessment for the ENG.4 Software requirements analysis. It relies on 3 BPs and 2 WPs. Table 2 presents also the average assessment for each assessed item. The overall correlation coefficient relates 25 * 6 = 150 self-assessment measures with the corresponding teacher assessment measures, its value r = 0.64 indicates a correlation. Thanks to the Author-Reader cycle, specification writing iterates several time during the semester and the final mark given to almost 17-8 Interface requirements and 17-11 Software requirement documents was Fully Achieved. Hence correlation between students and teacher assessments is complete. However, students mistake documents assessment for the BP1: Specify software requirements. Documents were improved through the author-reader cycle, but only reflective students improve their practices accordingly. Also, students did not understand the ENG.4. BP4: Ensure consistency and failed the self-assessment. Most students did not take any interest in traceability and self-assessed at a much higher level that the teacher did. A special set of values can bias a correlation coefficient; if we remove the BP4: Ensure consistency assessment, we get r = 0.89, indicating an effective correlation. However, a bias still exists because students are mostly self-assessing using the continuous feedback they got from the teacher during the Author-Reader cycle. Students reported that they wrote use cases from a statement of work for the first time and that they could not have succeeded without the Author-Reader cycle. Batch 1 : a client-server endeavor For the batch 1, students have to work closely in pairs, to produce architectural design and interface specification and to integrate the client and server sub-systems, each sub-system being designed, developed and tested by one student. Defining the highlevel architecture, producing the medium and low-level design are typical activities of the design phase [START_REF] Dascalu | Computer science capstone course senior projects: from project idea to prototype implementation[END_REF]. 4 pairs failed to work together and split, consequently lonesome students worked alone and have to develop both sub-systems. 
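As an aside on method: the correlation coefficients quoted in Table 1 above, and in the batch analyses that follow, can be reproduced from the paired ratings once the Not-Partially-Largely-Fully scale is coded numerically. The sketch below is illustrative only: the ratings are invented (not the study data), the 0-3 coding is our assumption (the paper does not state one), and the indicator names merely echo ENG.4.

# Minimal sketch: Pearson correlation between self- and teacher assessments
# on the N-P-L-F achievement scale. Ratings below are invented.
from statistics import mean, pstdev

SCALE = {"N": 0, "P": 1, "L": 2, "F": 3}

def pearson(xs, ys):
    """Pearson r of two equally long rating vectors (population formulas)."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:      # constant ratings leave r undefined,
        return float("nan")     # as happens with uniform external marks
    return cov / (sx * sy)

# invented assessments: {indicator: (self ratings, teacher ratings)}
ratings = {
    "BP1": (["L", "F", "L", "P", "F"], ["L", "L", "L", "P", "F"]),
    "BP4": (["L", "L", "F", "L", "L"], ["N", "P", "P", "N", "P"]),
}

def pooled(keys):
    xs, ys = [], []
    for k in keys:
        s, t = ratings[k]
        xs += [SCALE[v] for v in s]
        ys += [SCALE[v] for v in t]
    return xs, ys

print(pearson(*pooled(ratings)))                           # all indicators pooled
print(pearson(*pooled(k for k in ratings if k != "BP4")))  # drop one indicator

Dropping an indicator that one assessor rates almost uniformly (BP4 here) changes r substantially, which is the sensitivity reported above when r is recomputed without "Ensure consistency".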
We were aware of two biases: 1 -students interpret the teacher's feedback to selfassess accordingly; 2 -relationship issues might prevent teachers to assess students to their effective level. Hence, for ENG.3 System architectural design process and ENG.7 Software integration process, in addition to teachers' assessment, another teacher, experienced in ISO/IEC 15504 assessments, acted as a third-party assessor. Architectural design For the ENG.3 System architectural design, table 2 presents the correlation coefficient between student and teacher assessments and the correlation coefficient between student and third-party assessments. Assessment relies on 3 BPs and 2 WPs. Table 2 presents also the average assessment for each assessed item. The correlation coefficient between self-assessment and teacher assessment measures is r1 = 0.28 and the correlation coefficient between self-assessment and third-party assessment measures is r2 = 0.24. There is no real indication for a correlation. poor, except maybe for database design and interface design, but these technical topics are deeply addressed in the curriculum. An half of students perform a very superficial architectural work because they are eager to jump to the code. They believe that the work is fair enough but teachers do not. The BP4. Ensure consistency is a traceability matter that suffers the same problem described above. A similar concern to requirements arose: most students took Work Products (Design Documents) assessment as an indication of their achievement. Students reported that requirement analysis greatly helped to figure out the system behavior and facilitated the design phase and interface specification. However, students had never really learnt architectural design and interface between sub-systems, indeed it explains the low third-party assessment average for BPS and WPs. Integration ENG.7 Software integration is assessed with 6 main Base Practices and 2 Work Products. The correlation coefficient between self and teacher assessments is r1 = -0.03 and the correlation coefficient between self and third-party assessments is r2 = 0.31. However, several BPs or WPs were assessed by the third-party assessor with the same mark for all students (N or P): the standard deviation is zero and the correlation coefficient is biased and was not used. Table 3 presents the assessment average for the third types of assessment. 48 All BPs and WPs related to integration and test are weakly third-party assessed, indicating that students are not really aware of these topics, a common hole in a Bachelor curriculum. Some students were aware of the poor maturity of the integrated product, partly due to the lack of testing. Although the Junit framework has been taught during the first semester, some students did not see the point to use it while some others did not see how to use it for the project. As mentioned by [START_REF] Umphress | Software process in the classroom: the Capstone project experience[END_REF], we came to doubt the veracity of process data we collected. Students reported that they appreciated the high-level discipline that the capstone imposed, but they balked at the details. Batch 2 : information system development For the batch 2, students have to work loosely in pairs; each of the two has developed different components of the information system and has been assessed individually. Table 4 presents the correlation coefficient r between student and teacher assessment for the ENG.6 Software construction process. 
It relies on 4 Base Practices and 2 Work Products. Table 4 presents also the average assessment for each assessed item. The correlation coefficient is r = 0.10 and there is no indication for a correlation. However, BPs and WPs related to unit testing were assessed by the teacher with almost the same mark for all students (N or P), biasing the correlation coefficient. If we remove BPs and WPs related to unit testing (17-14 Test cases specification; 15-10 Test incidents report; BP1: Develop unit verification procedures), we get r = 0.49, indicating a possible correlation. Our bachelor students have little awareness of the importance of testing, including test specification and bugs reporting. This issue has been raised by professional tutors many times during the internships but no effective solution has been found until yet. Students reported that the ENG.6 Software construction process raised a certain anxiety because students had doubt about their ability to develop a stand-alone server interoperating with a JDeveloper application and two databases but most students succeeded. For some students, a poor Java literacy compromised the project progress. It is one problem reported by Goold: the lack of technical skills in some teams [START_REF] Karunasekera | Preparing software engineering graduates for an industry career[END_REF]. Conclusion The research question aims to see how students' self-assessment and external assessment [by a teacher or a third-party] are correlated. This is not true for topics not addressed in the curriculum or unknown by students. For well-known topics, assessments are correlated roughly for the half of the study population. It might indicate that in a professional setting, where employees are skilled for the required tasks, selfassessment might be a good replacement to external assessment. Using a third-party assessment instead of coaches' assessment was not convincing. Third-party assessment is too harsh and tends to assess almost all students with the same mark. Self-knowledge or teacher's understanding tempers this rough assessment towards a finer appreciation. The interest of a competencies model (process/BPs/WPs) is to supply a reference framework for doing the job. Software professionals may benefit from selfassessment using a competencies model in order to record abilities gained through different projects, to store annotations related to new skills, to establish snapshots in order to evaluate and recognize knowledge, skills and experience gained over long periods and in diverse contexts, including in non-formal and informal settings. Table 1 1 : ENG.4 assessment (self and teacher) Stud. avg Tch. avg r BP1: Specify software requirements 2.12 1.84 0.31 BP3: Develop criteria for software testing 1.76 1.76 1.00 BP4: Ensure consistency 1.92 0.88 0.29 17-8 Interface requirements 1.88 1.88 1.00 17-11 Software requirements 2.08 2.08 1.00 Table 2 : 2 ENG.3 (self, teacher and third-party) Stud. Tch. 3-party r Std- r Std- avg avg avg. Tch 3party Table 3 : 3 ENG.7 indicators Stud. Tch. 3-party avg avg avg. Table 4 : 4 ENG.6 assessment (self and teacher) Stud. avg Tch. avg r Acknowledgements We thank all the students of the 2016-2017 final year of Bachelor in Computer Science for their agreement to participate to this study, and especially Maxens Manach and Killian Monot who collected and anonymized the assessments. We thank Laurence Duval, a teacher that coached and assessed half of the students during batch 1.
21,124
[ "742", "12595" ]
[ "489718", "497668", "497676" ]
01756185
en
[ "info" ]
2024/03/05 22:32:10
2017
https://hal.univ-brest.fr/hal-01756185/file/PID5031269.pdf
Cassandra Balland Néné Satorou Cissé Louise Hergoualc'h, Gwendoline Kervot Audrey Lidec Alix Machard Lisa Ribaud-Le Cann Constance Rio Louise Hergoualc'h Maelle Sinilo Valérie Dantec Catherine Dezan Cyrielle Feron Claire François Chabha Hireche Arwa Khannoussi Vincent Ribaud Do Scratch a First Round with the Essence Kernel Keywords: Scratch, gender equality, elementary school de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Girls Who . . . I. INTRODUCTION "Girls who…" is a facility intended to disseminate Scratch in elementary schools. It takes part of a French national plan accompanying schools in science and technology (ASTEP -Accompagnement en Sciences et Technologie à l'École Primaire). "Girls who…" is also a girl collective that develops and maintains a practicum of sciences, called the factory. "Girls who…" have a double goal: to set an example of sciences by women and to support the practice of sciences for elementary school pupils. Scratch is an open-source environment for multi-media creation (https://scratch.mit.edu) young people 8 to 16 years old. The Scratch language is made of blocks, arranged in categories. Series of connected blocks form scripts. Categories Motion and Pen allow the displacement of sprites in a scene (a Cartesian coordinate system) and the layout of geometrical and artistic figures. Categories Data, Control and Operators provide typical primitives of imperative programming languages. Sensing category handles inputs/outputs and manages the usual peripherals: mouse, keyboard, webcam, and clock. Events category allows event-driven programming and a parallel way of thinking. Categories Looks and Sound provide the user with multi-media blocks. The Essence kernel [START_REF]Object Management Group, Essence -Kernel and Language for Software Engineering Methods, Version 1.1[END_REF] issued from SEMAT initiative [START_REF] Jacobson | The essence of software engineering: the SEMAT kernel[END_REF] helps to assess projects progress and health. Essence yields an actionable and extensible kernel of universal elements, called alphas. Each alpha has a specific set of states that codify points along a progress dimension addressed by the alpha [START_REF] Jacobson | A new software engineering[END_REF]. The first round of "girls who…" produced introductory courses for Scratch, some courses based on the history of Charlie and the chocolate factory [START_REF] Dahl | Charlie et la chocolaterie[END_REF]. Exemplary lessons have been tested in five elementary schools around Brest, France. The development and the delivery of the course were performed in five sprints using Trello boards. After sprints, we performed an Essence assessment, presented in this paper, which revealed several weaknesses of the project and helped to improve the facility. Essence was chosen because we share the SEMAT concerns: to refund software engineering based on a solid theory, proven principles and best practices. Section II describes the project according to alphas of Customer and Solution areas of concern: opportunities, stakeholders, requirements and software system. After having discussed time, space, actions and matter of "girls who…" section III presents the Endeavor area of concern: work, team and way-of-working. Finally, section IV concludes. II. 
ASSESSMENT OF A PROJECT PROGRESS AND HEALTH A unique feature of the Essence kernel is the way that it handles "things we always work with" through alphas. An alpha is an essential element of the software engineering endeavor -one that is relevant to an assessment of its progress and health. Alphas move through a set of states. Each state has a checklist that specifies the criteria needed to reach the state. A. Opportunity In France, the 2016 reform for elementary and secondary schools defined a common grounding of knowledge, skills and culture, structured in five areas [START_REF]Le socle commun de connaissances, de compétences et de culture[END_REF]. The common grounding acts as a reference for the school curriculum from 6 to 16 years. Learning Scratch programming and algorithmic thinking was introduced into the "languages to think and communicate" area. According to its authors [START_REF] Resnick | Scratch: programming for all[END_REF], Scratch is a creative design environment that appeals to people who hadn't imagined themselves as programmers. As a technological tool, it supports the learning of many skills of the common ground. Since 1996, within the framework of the ASTEP, scientific students accompany primary school teachers during science and technology lessons. This accompaniment associates the teacher, the scientific student and pupils around the scientific and technological practice, for a mutual enrichment and natural division of competences. In Brest University, each second year student of any Bachelor of Science degree attends a course about her/his "Preparation to professional life" with an internship. Students' objectives have two points, regarding her/his professional integration: 1 -the student must carry out a task or a set of specific tasks inside the internship organization or company; 2 -the student must be able to show her/his understanding of a professional environment. For students considering a teacher or an academic career, the ASTEP is a unique opportunity to experiment the teaching situation and area of concern. In Western countries, there is girl disaffection for STEM area (Science, Technology, Engineering, and Mathematics). However, new generations of students are easily investing in communication and information technologies and appreciate the ludic and creative aspects of Scratch. The "girls who…" facility allows elementary schools to be technologically accompanied in Scratch by second year Bachelor girls. Students are demonstrating an example of sciences by women to the elementary school pupils. Combination of a) Scratch introduction in the elementary school, b) existence of the French ASTEP program and c) commitment of second year students in the "Preparation to professional life" course establishes an identified opportunity. It corresponds to the first progress level of alpha Opportunity [1, Table 8.3 -Checklist for Opportunity]: the idea is clear and the stakeholders (detailed in the following section) are identified and interested. The second level for the alpha Opportunity is to have a solution needed. In our facility, "girls who…" participation is made of three distinct activities: design and realization of science or technology learning sessions (based on Scratch); deliver learning sessions in the elementary schools; participate to a scientific project with primary schools class under the responsibility of a PhD female student. The whole facility is called the factory, and constitutes the solution. 
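One way to picture how these checklists drive the assessment is to treat an alpha as an ordered list of states, each guarded by its checklist, the achieved state being the last consecutive state whose criteria are all met. The sketch below is illustrative only: the state names come from the discussion above, but the checklist items are simplified paraphrases, not text quoted from the OMG Essence specification.

# Illustrative model of an Essence-style alpha: ordered states, each with a
# checklist; the achieved state is the last consecutive state fully checked.
from dataclasses import dataclass

@dataclass
class State:
    name: str
    checklist: dict          # criterion -> bool (met / not met)

    def achieved(self) -> bool:
        return all(self.checklist.values())

@dataclass
class Alpha:
    name: str
    states: list             # ordered from first to last state

    def current_state(self):
        reached = None
        for state in self.states:
            if not state.achieved():
                break
            reached = state
        return reached

opportunity = Alpha("Opportunity", [
    State("Identified", {"idea clear": True, "stakeholders interested": True}),
    State("Solution Needed", {"need for a solution confirmed": True,
                              "stakeholder needs established": False,
                              "root causes identified": False}),
])

state = opportunity.current_state()
print(state.name if state else "no state reached")   # -> Identified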
All items of the checklist Solution Needed of alpha Opportunity [1, Table 8.3 -Checklist for Opportunity] are not achieved yet: although the need for a solution is confirmed and that a first solution is identified and suggested, the needs for the stakeholders are not entirely established and all problems and their root causes are not completely identified. B. Stakeholders As mentioned in the previous section, second year students have to perform a practical experience through an internship and gain the associated course credits. Parallel to their research work, PhD students have also to gain credits. With primary education pupils, three groups are part of an example chain towards scientific professions. Each group has a role in the factory: • Apprentices: elementary school pupils (girls and boys) who are learning Scratch and performing projects. • Workers: second year female students of any Bachelor of Science program who like to prepare Scratch exercises and examples, deliver learning sessions to apprentices and accompany teachers. • Tutors: PhD female students who animate science project for apprentices with workers' help and who might assist workers on their disciplinary learning. Several categories of Ministry of National Education employees are also stakeholders. Educational institution employees, called resource persons, provide the factory with various services. Elementary school teachers engage their class in Scratch learning and science projects. School district inspectors (inspecteurs de circonscription, in French) are teachers' supervisor; they agree students' participation in schools and represent the institutional guarantee. Inside the university, heads of bachelors of science degree and PhD advisors facilitate students' participation as "girls who…" Stakeholders groups are identified, key stakeholders are represented and responsibilities are defined; one reached the first alpha level: stakeholders are recognized [1, Table 8.2 -Checklist for Stakeholders]. The second level is to have stakeholders represented. In the "girls who…" facility, the stakeholders' representatives are authorized and responsibilities are agreed. As detailed in section III.C, the collaborative approach is largely agreed. Consequently, to achieve the second level, the remaining task is to ensure that way-of-working is supported and respected. C. Requirements "Use cases have been around for almost 30 years as a requirements approach and have been part of the inspiration for more recent techniques such as user stories [START_REF] Jacobson | Use-case 2.0[END_REF]." Some stories are presented below. A resource person creates, on a Trello board, a schedule list for a school, then a card for each learning sequence where s/he seizes sequence description, fills the expiry date and allocates the workers. A worker consults a Trello board related to daily tasks, decides to take again the realization of a sequence and consults the task card. She downloads the Moodle resources associated with the card, works on the instructions and the project, and updates the Moodle resources. A worker arrives in a school to animate a learning sequence. She connects herself to Moodle and set up resources for the sequence on each work station. She starts the presentation of the sequence. Once apprentices are working, she explains the work to be carried out using the presentation and supervises the apprentices' activities. An apprentice starts working. S/he chooses one activity on her/his current sequence. 
Then, s/he loads the related Scratch project and get on the task. All stakeholders are not familiar with requirements engineering. The choice of "user stories" for sketching the requirements makes it possible to communicate them and to effectively share them between the stakeholders. User stories are represented using cards, called story cards, which constitute the smallest grain of requirements. As noted by [START_REF] Sharp | The Role of Story Cards and the Wall in XP teams: a distributed cognition perspective[END_REF], story cards and tasks to be implemented together with the wall where they are displayed are two tangible artefacts of the distributed knowledge of the team. The need for a facility such as the factory has been agreed by stakeholders. Users (pupils) are identified. Funding and opportunity are clear. The first level for alpha Requirements is reached [1, Table 8.4 -Checklist for Requirements]: requirements are conceived. The second level aims to have requirements bounded. Workers and tutors are main stakeholders involved in factory development. All stakeholders agree on the factory purpose -Scratch learning -and the factory will be successful if apprentices learn effectively. The stakeholders have a shared understanding of the ASTEP program. Requirements are managed using a Trello board, with a list for each user stories made of pieces materialized by story cards. Assumptions about the system changed (see next section) but are now clearly stated. We are struggling with two items of the Requirements bounded checklist: is the prioritization scheme clear; are constraints identified and considered? In the short history of "girls who…" each new school joining the factory brought new circumstances and new needs, and workers adapted consequently requirements. In order to move to the third level -Requirements coherent -one needs to find a way to manage new constraints and prioritize requirements. D. System The initial idea behind the system was to build an online training of Scratch. The first source of inspiration was the web site code.org. It offers many well-organized activities from simple to complex in terms of teaching programming. Kalelioglu studied the effects of learning programming with code.org regarding reflexive thought, problem resolution, and gender issues. She stated: "Students developed a positive attitude towards programming, and female students showed that they were as successful as their male counterparts, and that programming could be part of their future plans [START_REF] Kalelioğlu | A new way of teaching programming skills to K-12 students: code.org[END_REF]." During preliminary discussions with elementary school teachers and inspectors, the need moved from a remote on-line course to classroom training sessions. The first round of "girls who…" established the proof of concept thanks to five elementary schools. We kept the code.org principle of learning sequences divided into increasingly complex activities. Strength of code.org is an integrated environment, providing the user with learning instructions, a workspace where you can do programming, on-line help and feedback with learning. This integration remains a requirement of the system, but in a future version. Moreover, some learning sequences use small robots called mbot. Robotics programming is performed in the mblock environment, an open-source Scratch extension. The architecture of the system is thus made up of: • Several Trello boards (https://trello.com). • Prezi presentations (https://prezi.com). 
• A Moodle server from Brest University, dedicated to online courses and accessible by external users (https://moodlespoc.univ-brest.fr). • School computers or tablets with Scratch and mblock. The architecture selection criteria have been agreed on, hardware platforms identified and technologies selected. System boundary is known and key technical risks agreed to. Significant decisions about the system have been made: opensource software and free contents. All criteria of the first level for alpha System are fulfilled [1, Table 8.5 -Checklist for System Software] and the architecture is selected. The second level is to have a demonstrable system. According to the checklist associated with this level, one needs to demonstrate architectural characteristics, critical hardware, critical interfaces and integration with other existing systems. One needs to exercise the system and measure its performance. Then the relevant stakeholders will agree that the demonstrated architecture is appropriate. All verifications and measurements will be realized in the next round where "girls who…" will run courses in many schools; the goal is to demonstrate that the architecture of the system is fit for purpose and supports testing. III. THE ENDEAVOUR AREA OF CONCERN The Endeavour area of concern contains everything to do with the team, and the way that they approach their work. Endeavour contains three alphas: Team, Work and Way-ofworking. Key aspects of the facility are presented: time, space, actions and matter then alphas' assessment is drafted. A. Time of girls who… Each year, "girls who…" will welcome new workers (second-year students) and new tutors (starting PhD students). "Girls who…" go in schools and have three kinds of contributions: Scratch discovery sessions (in May/June), Scratch learning sequences (from October to February), and science project accompaniments (from March to June). A Trello board equivalent to the product backlog, called the service backlog, contains a list by school. Each school visit is materialized by a card and its attributes: date, members, checklist, and resources. During her first period in the factory (1-4 weeks, see sections II.A and III.D), each worker produces contents that will be integrated in one of the factory courses. Workers work in pair and the time unit is the week: the product backlog (a Trello board) contains a list per week. The list keeps cards (tasks or sequences in schools) organized in their various stages of progress. B. Spaces of girls who… The factory managed by "girls who…" is a common property (like Wikipedia or an open-source software) under a double license, GNU GPL and CC BY-SA. Digital spaces of the project are numerous: Moodle courses and Prezi presentations, product and service backlogs. The entry point is a public Trello board, in French (https://trello.com/b/83omDwmt/les-filles-qui). The integration of new workers in the factory occurs during the first internship (1-4 weeks) in the facility. Thanks to a partnership with an engineer school, École Nationale Supérieure de Techniques Avancées -ENSTA Bretagne, new workers are accommodated in a dedicated room at ENSTA Bretagne. Thereafter, "girls who…" will not be always on the same space-time: girls telework, go in schools by pairs, exchange synchronously and asynchronously. However, this integration space-time where "girls who…" meet presentially is a key element of network dynamics. It makes it possible to acquire tools and methods usage which will be then used remotely. 
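In practice the two backlogs described above reduce to a simple board, list and card hierarchy. The sketch below is only a data-shape illustration with invented content; the field names are ours and are not Trello API attributes.

# Rough data sketch of the service backlog: one list per school,
# one card per visit with its attributes. Content is invented.
service_backlog = {
    "board": "girls who... / service backlog",
    "lists": [
        {"school": "School A",
         "cards": [
             {"date": "2017-05-16",
              "members": ["worker 1", "worker 2"],
              "checklist": ["set up Scratch stations", "run sequence 3"],
              "resources": ["Prezi presentation", "Moodle sequence"]},
         ]},
    ],
}

The product backlog has the same shape, with one list per week and one card per workday.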
A distributed common facility such as our factory has to schedule gathering space-time events throughout the year to keep girls trained, to strengthen network membership and to ensure continuity between presence and distance. C. Actions of girls who… "Girls who…" work is ruled by two key principles: setting an example of science and gender equality. "Girls who…" act and are models for other girls who act and so on. A worker achieves various tasks. She contributes to the facility organization and digital spaces management. She designs and realizes learning sequences for Scratch, Scratch junior, mbot programming and carry out assessments. She accompanies elementary school classes within the national program ASTEP (see section II.A), either delivering learning sequence or participating to science projects. She contributes to a research action endeavor related to the introduction of programming and algorithmic thinking in elementary schools. A tutor achieves targeted action. Upon teachers' request, she leads science projects for elementary school pupils. She might assist workers during their Bachelor studies. At the end of the first round, three courses has been developed and partly tested: a 6-sequence course of Scratch programming for 10-12 years, a 6-sequence course (sharing a few sequences with the previous course) of mbot robots programming for 10-12 years, a 3-sequence introductory course of Scratch junior for 6-8 years. Scratch courses use characters inspired of the book "Charlie and the chocolate factory" of Roald Dahl [START_REF] Dahl | Charlie et la chocolaterie[END_REF]. D. Matter of girls who… E. A first endeavor During the first five-weeks round, nine workers realized the programming courses described above and are co-authors of this paper. Whereas the initial idea was to prepare courses for next year, it has been quite clear that "girls who…" need to test courses in schools. Generally speaking, the design, the realization and courses delivery occurred in short cycles (one week) and built incrementally, the next week incorporating the current week feedback. A regular health evaluation of alphas Work and Team guided the project progress. The alpha Wayof-Working was barely assessed. 1) Work. A Trello board was used for definition, planning and assignment of tasks: a list per week and a card per workday with workers present this day. Card descriptions were informal, but for recurring tasks, helped to feed various to-do lists. The lists, as indicated in the section III.A, control tasks and schools' accompaniments and constitute the product backlog and the service backlog of the project. Required results, constraints, funding sources and priorities are clear; stakeholders are known; initiators are identified. All criteria of the first level of alpha Work [1, Table 8.7 -Checklist for Work] are met and work is initiated. The second level is to have the work prepared. Commitment of faculty is made; a credible plan and funding is in place and renewable each year. Resource availability (mainly workers) is understood; cost and effort of the work are roughly estimated. Work is broken down sufficiently in learning sequences for productive work to start; tasks are identified and prioritized; at least one of members (the Science Faculty) is ready. Exposure to the risks is minimal thus one will admit that it is understood. There are two points to fulfill to have the work prepared. Criteria of acceptance have to be established, for instance to constitute pupils cohorts and to perform measurements. 
Integration points have to be defined, in contents terms -how to enrich the poll of courses, and in schools terms -how to insert a new school in the facility. 2) Team. Seven workers of the first round came from Bachelor of Mathematics and Bachelor of Informatics. They know each other, some students committed immediately to the project and this was the key to decide the others. The project evolved from an online course to face-to-face courses; however the team mission and composition are now defined; constraints are known; required competences in programming and teaching are identified. Student themselves outlined their responsibilities and the need for a clear level of commitment. "Girls who…" initiators received a training intended to animate collaborative projects (http://animacoop.net). Training was carried out part-time over three months. Training facilitated the selection of a distributed and co-operative leadership model. Training helped to set rules of governance which is shared between workers, tutors, heads of Bachelor studies and network animators. The greatest difficulty arose from the two latter items of the checklist: definition of mechanisms to grow the team and determination of team size. An identified risk is that schools demand increases too fast; during the first round, we selected five pilot schools; for the 2017-2018 academic year, 18 elementary classes are already part of the program. Due to introduction of Scratch into the elementary school curriculum, it can lead to a demand exceeding supply. In French Faculties of Science, biology studies are largely invested by girls, and there is no need for exemplary models for Biology. However "girls who…" make biology can be interested by Scratch technology and by going in the schools as well. We have to evolve in a way that "girls who…" make biology can bring forces to the facility and find an interest for their scientific discipline. This is the goal of science project accompaniments, thus all criteria of the first level of alpha Team are met [1, Table 8.6 -Checklist for Team] and team is seeded. The second level is to have a formed team. Workers know their role and their place in the team; individual and team responsibilities are accepted and aligned with competencies. As volunteers, members accept work and it was underlined in the previous section that members must commit to the work and the team. Team communication mechanisms have been defined: Trello for backlog, planning and tasks and Moodle as a Learning Management System. "Girls who…" use Slack as instant messaging and Loomio for collective decision making. The unknown factor relates to the growth: will be enough workers available and did we fail to identify external collaborators? The points will be addressed at the next round. 3) Way-of-working The first level of alpha Way-of-Working is to have principles established [1, Table 8.8 -Checklist for Way-Of-Working]. The criteria are: the team actively supports the principles, the stakeholders agree with principles, the needs for tool are agreed, the approach is recommended, the operational context is understood and the constraints of practice and tool are understood. The first round of "girls who…" did not have established principles before its start, but workers defined their way-ofworking and made the following decisions. 
A pair of workers is assigned to a school; design and realization of learning sequences of training is assigned individually; Prezi presentations are used to present learning instructions and exercises interactively with pupils; Slack, Trello and Loomio usage rules are defined; tools and resources required to be available in schools are identified; a back-up solution is needed and implemented, for content availability as well as for software tools. Foundations are laid and at the next round, we will see if the new "girls who…" will follow these principles. We will check if the first level is reached and principles established. IV. CONCLUSION The first round of "girls who…" shaped a facility with a double goal: to set the example of "girls who…" do sciences and technology and to support the introduction of Scratch and algorithmic thinking in the elementary schools. "Girls who…" prepared Scratch lessons, delivered courses in five pilot schools and organized the system for next rounds. Kids loved when "girls who…" came in schools and learned Scratch and robots programming with enthusiasm. For 2017-2018 year, 18 elementary classes are already part of the program. The facility evolved considerably since we wrote initial specifications nine months ago. The facility was led with a double help: training intended to animate collaborative projects and use of the Essence kernel to assess the state of different areas of concern and to define next steps to complete. Essence provides an understanding of the progress and health of development efforts and how practices can be combined into an effective way of working. The use of Essence can help teams, among others, to detect systemic problems early and take appropriate action; evaluate the completeness of the set of practices selected, and understand the strengths and weaknesses of the way of working; keep an up-to-date record of the team's way of working, and share information and experiences with other teams [START_REF] Jacobson | Agile and SEMAT: perfect partners[END_REF]. Essence assessment gave us a clear vision of the project state and raised critical weaknesses of the facility. Indeed, we are using Essence to assess our research projects health and progress. Fig. 1 . 1 Fig. 1. Illustrations by Lisa, inspired by "Charlie and the chocolate factory" the novel by Roald Dahl and the eponym movie by Tim Burton. Acknowledgments Many persons facilitated the birth of "girls who…" including Pascale Huret-Cloastre, Hélène Klucik, Isabelle Queré, Corinne Tarits, Yann Ti-Coz.
27,122
[ "13930", "1017730", "981453", "748404", "742" ]
[ "300314", "388505", "497668", "388505", "390402", "489718" ]
01756186
en
[ "info" ]
2024/03/05 22:32:10
2017
https://hal.univ-brest.fr/hal-01756186/file/PID5031297.pdf
Vincent Leildé email: v.leilde@gmail.com Vincent Ribaud email: ribaud@univ-brest.fr Does process assessment drive process learning? The case of a Bachelor capstone project Keywords: process assessment, ability model, capstone project In order to see if process assessment drives processes learning, process assessments were performed in the capstone project of a Bachelor in Computer Science. Assessments use an ability model based on a small subset of ISO/IEC 15504 processes, its main Base Practices and Work Products. Students' point of view was also collected through an anonymous questionnaire. Self-assessment using a competency model helps students to recognize knowledge, skills and experience gained over time and in diverse contexts. The capstone project offered a starting point. Students' self-assessment and external assessment are correlated to some point but are not correlated for topics unaddressed in the curriculum or unknown by students. I. INTRODUCTION At Brest University, the Bachelor capstone lasts roughly 100 hours during the Bachelor last semester. Students learn by doing a simplified cycle of software development through a somewhat realistic project. A rainbow metaphor is used to position phases in the cycle, divided in four main phases: requirement (red), design (yellow), construction (blue) and integration (indigo). Three phases are used to transition between main phases: requirements analysis (orange), detailed design (green), validation (violet). A side-effect goal of the capstone project is to be exposed to some kind of process assessment. A process is "a set of interrelated or interacting activities which transforms inputs into outputs [START_REF]Information technology --Process assessment[END_REF]." Process assessment is applicable by or on behalf of an organization with the objective of understanding the state of its own processes for process improvement. Our hypothesis is that it is worth it for students, if they are conscious of the processes underlying their way-of-working. It is a common assertion that assessment drives learning. Our research question examines whether process assessment drives process learning. 25 students consented to use an ability model based on a small subset of the ISO/IEC 12207:2008 Process Model [START_REF]Information technology --Process assessment[END_REF] and self-assessed skills proficiency test using a 4-point Likert scale. Teachers assessed students using the same ability model and Likert scale, and for two processes, a 3 rd -party assessment was also performed. Volunteer collected the results including a satisfaction questionnaire, gathered students' and teachers' assessment and anonymized data. Results are used to present the achievement degree of students together with a possible correlation between self and external assessments. Some initial findings were presented in [START_REF] Ribaud | Process Assessment Issues in a Bachelor Capstone Project[END_REF]. The structure of the paper is: section 2 overviews process assessment; section 3 presents different assessments we carried out; section 4 presents the questionnaire results and we finish with a discussion and a conclusion. II. PROCESS ASSESSMENT A. ISO/IEC 15504 standard ISO/IEC 15504 standard uses a Process Assessment Model which is a two-dimensional model of process capability. In the process dimension, processes are defined and classified into process categories. In the capability dimension, a set of process attributes grouped into capability levels is defined. 
Process Attributes are features of a process that can be evaluated on a scale of achievement, providing a measure of the capability of the process. The process dimension uses a Process Reference Model (PRM) replicated from ISO/IEC 12207:2008. A PRM is a model comprising definitions of processes in a life cycle described in terms of process purpose and outcomes, together with an architecture describing the relationships between processes. The capability dimension defines process assessment that is essentially a measurement activity of process capability on the basis of evidences related to assessment indicators. There are two types of assessment indicators: process capability indicators, which apply to all capability levels and process performance indicators, which apply exclusively to capability level 1. The process performance indicators are Base Practices (BP) and Work Products (WP). "A base practice is an activity that, when consistently performed, contributes to achieving a specific process purpose", and "a work product is an artifact associated with the execution of a process" [START_REF]Information technology --Process assessment[END_REF]. B. Ability Model Computer From an individual human perspective, this subset can be seen as a competency model related to the knowledge, skills and attitudes involved in a software development project. A competency model (also called an ability model) defines and organizes the elements of a curriculum and their relationships. A hierarchical model is easier to manage and use. We kept the hierarchical decomposition issued from the 15504: process groups -process -base practices and products. A competency model is decomposed into competency areas (mapping to process groups); each area roughly corresponding to one of the main division of the profession or of a curriculum. Each area organizes the competencies into families (mapping to processes). A family roughly corresponds to main activities of the area. Each family is made of a set of knowledge and abilities (mapping to base practices), eventually called competencies; each of these entities being represented by a designation and a detailed description. The ability model and its associated tool eCompas has been presented in [START_REF] Ribaud | Towards an ability model for software engineering apprenticeship[END_REF]. C. Process Assessment ISO 15504 [START_REF]Information technology --Process assessment[END_REF] defines a measurement framework for the assessment of process capability defined on a six point ordinal scale which represents increasing capability of the implemented process, from not achieving the process purpose through to meeting current and projected business goals [START_REF]Information technology --Process assessment[END_REF]. Capability Level 0 denotes an incomplete process, either not performed at all, or for which there is little or no evidence of systematic achievement of the process purpose [START_REF]Information technology --Process assessment[END_REF]. Capability Level 1 denotes a performed process that achieves its process purpose through the performance of necessary actions and the presence of appropriate input and output work products which, collectively, ensure that the process purpose is achieved [START_REF]Information technology --Process assessment[END_REF]. We are not interested in higher levels than 1 in this study. 
Evidence of performance of the base practices, and the presence of work products with their expected work product characteristics, provide objective evidence of the achievement of the purpose of the process. Achievement is characterized on a defined rating scale: N Not Achieved, P Partially Achieved, L Largely Achieved, F Fully Achieved. If students are able to perform a process, it denotes a successful learning of software processes, and teachers' assessments rate this capability. The research question aims to state if self-assessment and external assessment help students to improve their software processes practice and how students, teacher and third-party assessment are correlated. III. THE CAPSTONE PROJECT A. Overview 1) Schedule The curriculum is a 3-year Bachelor of Computer Science. The project happens the third year before students' internship. The project is performed during a dedicated period of two weeks. Before the dedicated weeks, a dozen of two-hour labs are conducted all the semester along and some homework is required. According to students' estimates, they spent an average of 102 hours on the capstone project. Each phase is driven by main Base Practices of the related software process and ends up with the delivery of few related Work Products. Each deliverable can be reviewed as much as needed by the teacher that provides students with comments and suggestions. When the dedicated period starts, students are familiar with the Author-Reader cycle and have performed the requirements and architectural design processes. 2) Statement of work The software is made of 2 sub-systems: PocketAgenda that manages an address book and an agenda and interfaces with a central directory; WhoIsWho manages the directory and a social network. PocketAgenda is implemented with Java and JSF relying on an Oracle RDBMS. WhoIsWho is implemented in Java using a small RDBMS. Both sub-systems interact in a client-server mode and communicate with a protocol established using UDP. 3) Students' consent Students were advised that they can freely participate in the experiment described in this paper. The class contains 29 students, all agreed to participate; 4 did not complete the project and do not take part to the study. Students have to regularly update the competency model consisting of the ENG process group, the 6 processes above and their Base Practices and main Work Products and self-assess on the N-P-L-F scale. There are also teacher and 3 rd -party assessments that have to be attached to self-assessments by volunteer students. 4) Statistics Process assessment was continuous and communicated to students regularly; hence they were made aware of their progression very often and adjusted their effort. Table 1 presents teacher's assessment. BP and WP rating are aggregated using an all-or-none principle: if all BPs or WPs in a process are rated at least Not, Partially, Largely or Fully, BPs or WPs are rated Not, Partially, Largely or Fully. B. Writing and analyzing requirements Before the 2-weeks dedicated period starts, students have to capture, write and manage requirements through use cases. It is a non-technical task unfamiliar to students. Students report that eliciting and writing requirements were a difficult task and the Author-Reader cycle helped to produce complete and usable use cases and to acquire a writing style. In most cases, three cycles were required to achieve the task. 
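To make the all-or-none aggregation rule described in the Statistics subsection above concrete, here is a minimal sketch. The helper name and the example ratings are illustrative, not part of the study's tooling, and it assumes that "all BPs or WPs rated at least X" is equivalent to taking the weakest rating in the set.

```python
# Minimal sketch of the all-or-none aggregation behind Table 1.
# Assumption: "all BPs/WPs rated at least X" amounts to taking the weakest
# rating in the set; the helper name is illustrative.
RATING_ORDER = ["N", "P", "L", "F"]  # Not, Partially, Largely, Fully

def aggregate(ratings):
    """Aggregated N/P/L/F rating for the BPs (or WPs) of one process."""
    if not ratings:
        raise ValueError("no ratings to aggregate")
    return min(ratings, key=RATING_ORDER.index)

# Example: one student's Base Practices for a process
print(aggregate(["L", "F", "P"]))  # -> "P": the process is rated Partially
```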
According to students' estimates, they spent on average 20 hours (roughly a fifth of the total) to capture, write and manage use cases. This period refers to the red/orange colors corresponding to the ENG.1 and ENG.2 processes. A 4-hour lecture about use cases was delivered in January at the beginning of the semester; the iterative process of writing and being reviewed by the teacher then started. Table 2 also presents the average assessment for each assessed item. The correlation coefficient of each item relates 25 self-assessment measures to the corresponding teacher assessment measures; the overall correlation coefficient relates 25 * 5 = 125 pairs of measures, and its value r = 0.64 indicates a correlation. Thanks to the Author-Reader cycle, the final mark given to almost all Interface requirements and Software requirements documents was Largely Achieved or Fully Achieved. Students were made aware of their mark and reproduced it during the self-assessment; unsurprisingly, the correlation between student and teacher assessments of these documents is complete. However, students mistook document assessment for practice assessment. Documents were improved through the Author-Reader cycle, but only reflective students improved their practices accordingly. Other students self-assessed their practices at the same level as the corresponding work products, whereas the teacher did not; see for example BP1: Specify software requirements. Students also failed the self-assessment for BP4: Ensure consistency. Most students neglected traceability and self-assessed at a higher level than the teacher did. An abnormal set of values can bias a correlation coefficient; if we remove the BP4: Ensure consistency assessment, we get r = 0.89, indicating an effective correlation. However, a bias still exists because students assess themselves using the continuous feedback they received from teachers during the Author-Reader cycle.
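The overall coefficient r = 0.64 reported above is an ordinary Pearson correlation over the 25 × 5 = 125 (self, teacher) rating pairs, with N-P-L-F coded as 0-3; removing an item such as BP4 simply drops its 25 pairs before recomputing r. The sketch below illustrates the calculation; the rating data shown are placeholders, not the study's measurements.

```python
# Sketch of the correlation computation over rating pairs (N,P,L,F -> 0..3).
# The numbers below are placeholders, not the actual assessment data.
from statistics import mean
from math import sqrt

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# pairs[item] = list of (self, teacher) ratings, one pair per student
pairs = {
    "BP1": [(2, 2), (3, 2), (2, 1), (1, 1)],   # illustrative only
    "BP4": [(2, 1), (2, 0), (3, 1), (2, 1)],   # illustrative only
}
all_pairs = [p for vals in pairs.values() for p in vals]
r_overall = pearson([s for s, _ in all_pairs], [t for _, t in all_pairs])

kept = [p for item, vals in pairs.items() if item != "BP4" for p in vals]
r_without_bp4 = pearson([s for s, _ in kept], [t for _, t in kept])
print(round(r_overall, 2), round(r_without_bp4, 2))
```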
Also, we wished to focus on the pair work and the architectural design / integration relationship. Hence, we focus the first week assessment on ENG.3 System architectural design process (yellow) and ENG.7 Software integration process (indigo). One author worked several years in a software company and had some experience in 15504 assessments, and then he did not participate to the week and acted as a third-party assessor. The other author and another teacher coached and assessed students' BPs and WPs during the whole week. 1) Architectural design UML modeling and object-oriented design are taught in dedicated lectures (30 hours each). However, nearly all students had no idea how to perform architecture and interface design. Architectural design was taught by example: teachers performed a step-by-step design for one of the four use cases; students reproduced the scheme for the remaining use cases. For the ENG.3 System architectural design, Table 3 presents the correlation coefficient between student and teacher assessments and the correlation coefficient between student and third-party assessments. Assessment relies on 3 Base Practices and 2 Work Products. Table 3 presents also the average assessment for each assessed item. The overall correlation coefficient between self-assessment and teacher assessment measures is r1 = 0.28 and the overall correlation coefficient between self-assessment and third-party assessment measures is r2 = 0.24. There are no real differences and indeed no indication for a correlation. In Table 3, we see that item correlation is poor, except for database design and interface design, but these topics are deeply addressed in the curriculum. An half of students performed a superficial architectural work because they are eager to jump to the code. They believe that the work is fair enough and teachers as well, the external assessor does not. The BP4.Ensure consistency is a traceability matter suffering the same problem described above. Teachers also have a weak awareness of the topic; they over-assess students; and there is no correlation with students' and teachers' assessments. Students reported that requirement analysis with SADT greatly helped to figure out the system behavior and facilitated the design phase and interface specification. However, students had never really learnt architectural design and interface between sub-systems, indeed it explains the lower 3 rd -party assessment average for BPS and WPs. 2) Integration The integration topic is not addressed in the Bachelor curriculum. In the best case, students respected their interface specifications and few problems arose when they integrated client and server code. In some cases, they were unable to perform the integration and the integration merely failed. ENG.7 Software integration (indigo) is assessed with 6 Base Practices and 2 Work Products. The overall correlation coefficient between self and teacher assessments is r1 = -0.03 and the overall correlation coefficient between self and 3 rd -party assessments is r2 = 0.31. However, several BPs or WPs were assessed by the 3 rd -party assessor with the same mark for all students (N or P): the standard deviation is zero and the correlation coefficient is biased and is not used. Table 4 presents the different assessments average. All BPs and WPs related to integration and test are weakly third-party assessed, indicating that students are not really aware of these topics, a common hole in a Bachelor curriculum. 
Some students were aware of the poor maturity of the integrated product, partly due to the lack of testing. D. Construction : information system development The second week was devoted to perform a continuous and significant development endeavor. Students have to work loosely in pairs; each of them developed separate components of the information system and has been assessed individually. Unfortunately, a teacher quit the capstone project for strong personal reasons; consequently the 3 rd party assessor moved back to be a teacher and an internal assessor. 1) Construction JDeveloper is a Java IDE for the Oracle Application Development Framework (ADF). ADF is an end-to-end development framework, built on top of the Enterprise Java platform, and providing integrated solutions including data access, business services development, a controller layer, a JSF tag library implementation. Most of the application logic relies on the database schema, without the need to write code. During the semester, 16 labs hour were devoted to learning the framework, a few for mastering the IDE but enough for a start. Java, database, network and SQL programming are taught in dedicated lectures during the curriculum (60 hours each). Despite of this amount, a third of students self-judged as having a poor knowledge of SQL and Java. Students have almost no idea of test-driven development and a lack of a test strategy; hence units were poorly tested. Although the Junit framework has been taught during the first semester, no student used it. These points have to be improved in the future. Table 5 presents the correlation coefficient r between student and teacher assessment for the ENG.6 Software construction process (blue). It relies on 4 Base Practices and 2 Work Products. Table 5 presents also the average assessment for each assessed item. The overall correlation coefficient is r = 0.10 and there is no indication for a correlation. Our bachelor students have little awareness of the importance of testing, including test specification and bugs reporting. Consequently, BPs and WPs related to unit testing were assessed by teachers with almost the same mark for all students (N or P), biasing the correlation coefficient. If we remove BPs and WPs related to unit testing management (17-14 Test cases specification; 15-10 Test incidents report; BP1: Develop unit verification procedures), we get r = 0.49, indicating a plausible correlation. Students reported that the ENG.6 Software construction process raised a certain anxiety because students had doubt about their ability to develop a stand-alone server interoperating with a JDeveloper application and two databases but most students succeeded. For some students, a poor Java literacy compromised the project progress. 2) Qualification testing On average, students spent less than 10% of the total hours to perform integration and qualification tests of the software. These topics are unaddressed in the curriculum and because they mostly occur at the end of the project, no time is available to complete the learning. For the ENG.8 Software Testing process, 2 WPs were assessed: 08-21 Software test plan and 15-11 Defect report. Teachers' assessment was Not Achieved for each product; however students' assessment average is 1.56 for the Test Plan (one half Partially and one half Largely) and 1.32 for the Defect Report (two third Partially and one third Largely). Only 2 students self-assessed Not Achieved to both products. 
The discrepancy might come from the lack of lectures dedicated to testing and from the misunderstanding of the topic. Hence, both WPs are not considered here. We were interested to get an unformal feedback. An anonymous questionnaire let students express their opinions about the capstone project, which are presented in Table 7. Although one project objective is to relate to previous lectures and to mobilize knowledge and skills gained during the bachelor studies, it was not effective and rather seen as a new learning experience for the half of students. We were surprised with the poor use and interest for reviewing facilities. Students' comment. Students appreciated that each project phase has been explained from experience and through examples. Students were convinced of the usefulness of the different phases performed in a software project and that it might be applied to other type of projects. Shared documents could be an alternative to mail exchange and might trigger the use of reviewing facilities that some students misused. Students asked to be exposed to a whole picture of the project at the beginning, not piece by piece. Some students found the work load too heavy and time devoted to the project too short. V. DISCUSSION AND CONCLUSION The first research question aims to see if process assessment fosters process learning. The interest of a competency model (process/BPs/WPs) is to supply a reference framework for doing the job. Software professionals may benefit from self-assessment using a competency model in order to record abilities gained through different projects, to store annotations related to new skills, to establish snapshots in order to evaluate and recognize knowledge, skills and experience gained over long periods and in diverse contexts, including in non-formal and informal settings. Students committed to self-assessment but we don't know if they will integrate this reflective practice in their usage or if they did as a part of the capstone project. However, it is a starting point. The second research question examines how students' selfassessment and external assessment [by a teacher or a thirdparty] are correlated. This is not true for topics not addressed in the curriculum or unknown by students. For known topics, assessments are correlated roughly for the half of the study population. It might indicate that in a professional setting, where employees are skilled for the required tasks, selfassessment might be a good replacement to external assessment. A 3 rd -party assessment did not prove to be useful. Third-party assessment is too harsh and tends to assess almost all students with the same mark. Science bachelor (CS) students are generally focused either on technical topics or theoretical subjects. Little attention is paid to software engineering in a CS Bachelor curriculum. The PRM we use is a small subset of the ISO/IEC 15504:2006 Process Reference Model, mainly the Softwarerelated Processes of the ENG Process. Process Purpose, Process Objectives and Base Practices have been kept without any modification; Input and Outputs Work Products have been reduced to main products. To foster a practical understanding of SE, we use a colored cycle and we slightly rearrange the ENG Software-related Processes Process Group. 
The cycle is ENG.1 Requirements elicitation: red; ENG.2 System requirements analysis: orange; ENG.3 System architectural design: yellow; ENG.5 Software design: green; ENG.6 Software construction: blue; ENG.7 Software integration: indigo; ENG.8 Software testing: violet.

TABLE I. TEACHER'S ASSESSMENTS (AGGREGATED)
                               Base Practices       Work Products
Rating                         N   P   L   F        N   P   L   F
ENG.1/2 Requirement            0   5   19  1        0   5   17  3
ENG.3/5 Design                 0   8   17  0        0   5   17  3
ENG.6 Construction             0   4   21  0        17  7   1   0
ENG.7 Integration              0   5   12  8        1   1   17  6
ENG.8 Testing                  3   19  3   0        25  0   0   0

Table 2 presents the correlation coefficient r between student and teacher assessment for the ENG.1 Requirements elicitation and ENG.2 Requirements analysis processes. It relies on 3 Base Practices and 2 Work Products.

TABLE II. ENG.1/2 ASSESSMENT (SELF AND TEACHER)
                                               Stud.   Teach.  r
BP1: Specify software requirements             2.12    1.84    0.31
BP3: Develop criteria for software testing     1.76    1.76    1.00
BP4: Ensure consistency                        1.92    0.88    0.29
17-8 Interface requirements                    1.88    1.88    1.00
17-11 Software requirements                    2.08    2.08    1.00

TABLE III. ENG.3 (SELF, TEACHER AND THIRD-PARTY)
                                   Std. avg  Tch. avg  3rd avg  r Std-Tch  r Tch-3rd
BP1: Describe system architect.    2.24      2.02      1.68     -0.22      0.18
BP3: Define interfaces             1.96      2.16      1.56     0.48       0.36
BP4: Ensure consistency            2         1.72      0.88     0          0.44
04-01 Database design              2.48      2.2       1.88     0.49       0.35
04-04 High level design            2.12      1.84      1.64     0.37       -0.11

TABLE IV. ENG.7 AVERAGE (SELF, TEACHER AND THIRD-PARTY)
                                                   Stud. avg  Teach. avg  3rd-party avg
BP1: Develop software integration strategy         1.56       1.20        0.40
BP2: Develop tests for integrated software items   2.08       1.08        0.52
BP3: Integrate software item                       2.00       2.12        1.76
BP4: Test integrated software items                2.00       1.80        1.16
BP5: Ensure consistency                            1.76       1.20        0.72
BP6: Regression test integrated software items     1.64       0.52        0.2
08-10 Software integration test plan               1.44       0.88        0.00
11-01 Software product                             2.04       2.12        1.48

TABLE V. ENG.6 ASSESSMENT (SELF AND TEACHER)
                                            Stud.   Teach.  r
BP1: Develop unit verification procedures   1.84    0.40    0.05
BP2: Develop software units                 1.92    1.84    0.37
BP3: Ensure consistency                     1.92    0.92    0.25
BP4: Verify software units                  1.96    1.00    -0.2
17-14 Test cases specification              1.80    0.36    0.07
15-10 Test incidents report                 1.52    0.12    -0.45

TABLE VI. ENG.8 ASSESSMENT (SELF AND TEACHER)
                                                     Stud.   Teach.  r
BP1: Develop tests for integrated software product   1.96    1.00    0.53
BP2: Test integrated software product                1.84    1.08    0.27
BP3: Regression test integrated software             1.52    0.56    -0.03

Table 6 presents the correlation coefficient r between student and teacher assessment for the ENG.8 Software testing process (violet). It relies on 3 Base Practices. Table 6 presents also the average assessment for each assessed item. The overall correlation coefficient is r = 0.30 and there is little indication for a correlation. Students are not familiar with regression tests and BP3: Regression test integrated software has been assessed by the teacher Not Achieved for half of the students and Partially Achieved for the other half. If we remove BP3, we get r = 0.41, indicating a possible correlation.

IV. STUDENTS' VIEWPOINT

TABLE VII. STUDENTS' SELF-PERCEPTION ABOUT THE PRACTICUM
The Agenda project                                                               strg agr  agr  neutral  dsgr  strg dsgr
I had the time to learn and do the project.                                      8         6    3        3     2
I found the project complex.                                                     5         10   5        1     1
I committed to perform the project.                                              10        10   2        0     0
I found the project realistic.                                                   11        7    2        0     2
I understand relationships between specifications, design, building and tests.  10        6    5        1     0
I had to deepen my knowledge and skills to perform the project.                  10        7    2        1     2
My work for the project helped me to understand lectures.                        5         3    8        3     3
I used a lot the reviewing facilities.                                           2         8    7        2     3
I made progress thanks to the reviewing facilities.                              3         7    7        2     3
I improved my working methods thanks to the project.                             5         10   3        2     2

Acknowledgment
We thank all the students of the 2016-2017 final year of Bachelor in Computer Science to their agreement to participate to this study, and especially Maxens Manach and Killian Monot who collected and anonymized the assessments. We thank Laurence Duval, a teacher that coached and assessed half of the students during the first week.
28,589
[ "12595", "742" ]
[ "497668", "489718" ]
00175627
en
[ "phys", "info" ]
2024/03/05 22:32:10
2007
https://inria.hal.science/hal-00175627/file/BPitsc-final.pdf
Cyril Furtlehner Jean-Marc Lasgouttes Arnaud De La Fortelle A Belief Propagation Approach to Traffic Prediction using Probe Vehicles This paper deals with real-time prediction of traffic conditions in a setting where the only available information is floating car data (FCD) sent by probe vehicles. Starting from the Ising model of statistical physics, we use a discretized space-time traffic description, on which we define and study an inference method based on the Belief Propagation (BP) algorithm. The idea is to encode into a graph the a priori information derived from historical data (marginal probabilities of pairs of variables), and to use BP to estimate the actual state from the latest FCD. The behavior of the algorithm is illustrated by numerical studies on a simple simulated traffic network. The generalization to the superposition of many traffic patterns is discussed. I. INTRODUCTION With an estimated 1% GDP cost in the European Union (i.e. more than 100 billions euros), congestion is not only a time waste for drivers and an environmental challenge, but also an economic issue. Today, some urban and interurban areas have traffic management and advice systems that collect data from stationary sensors, analyze them, and post notices about road conditions ahead and recommended speed limits on display signs located at various points along specific routes. However, these systems are not available everywhere and they are virtually non-existent on rural areas. In this context, the EU-funded REACT project developed new traffic prediction models to be used to inform the public and possibly to regulate the traffic, on all roads. The REACT project combines a traditional traffic prediction approach on equipped motorways with an innovative approach on nonequipped roads. The idea is to obtain floating car data from a fleet of probe vehicles and reconstruct the traffic conditions from this partial information. Two types of approaches are usually distinguished for traffic prediction, namely data driven (application of statistical models to a large amount of data, for example regression analysis) and model based (simulation or mathematical models explaining the traffic patterns). Models (see e.g. [START_REF] Chowdhury | Statistical physics of vehicular traffic and some related systems[END_REF], [START_REF] Klar | Mathematical models for vehicular traffic[END_REF] for a review) may range from microscopic description encoding drivers behaviors, with many parameters to be calibrated, to macroscopic ones, based on fluid dynamics, mainly adapted to highway traffic and subject to controversy [START_REF] Daganzo | Requiem for second order fluid approximation of traffic flow[END_REF], [START_REF] Aw | Resurrection of "second order" models of traffic flow[END_REF]. Intermediate kinetic C. Furtlehner is with Project Team TAO at INRIA Futurs, Université de Paris-Sud 11 -Bâtiment 490, 91405 Orsay Cedex, France. (cyril.furtlehner@inria.fr) J.-M. Lasgouttes is with Project Team IMARA at INRIA Paris-Rocquencourt, Domaine de Voluceau -BP 105, 78153 Rocquencourt Cedex, France. (jean-marc.lasgouttes@inria.fr) A. de La Fortelle is with INRIA Paris-Rocquencourt and École des Mines de Paris, CAOR Research Centre, 60 boulevard Saint-Michel, 75272 Paris Cedex 06, France. 
(arnaud.de la fortelle@ensmp.fr) description including cellular automata [START_REF] Nagel | A cellular automaton model for freeway traffic[END_REF] are instrumental for powerful simulation and prediction systems in equipped road networks [START_REF] Chrobok | Traffic forecast using simulations of large scale networks[END_REF]. On the other hand, the statistical approach mainly focuses on time series analysis on single road links, with various machine learning techniques [START_REF] Guozhen | Traffic flow prediction based on generalized neural network[END_REF], [START_REF] Chun-Hsin | Travel-time prediction with support vector regression[END_REF], while global prediction systems on a network combine data analysis and model simulations [START_REF] Chrobok | Traffic forecast using simulations of large scale networks[END_REF], [START_REF] Kanoh | Short-term traffic prediction using fuzzy c-means and cellular automata in a wide-area road network[END_REF]. For more information about traffic prediction methods, we refer also the reader to [START_REF] Benz | Information supply for intelligent routing servicesthe INVENT traffic network equalizer approach[END_REF], [START_REF] Versteegt | PredicTime -state of the art and functional architecture[END_REF]. We propose here a hybrid approach, by taking full advantage of the statistical nature of the information, in combination with a stochastic modeling of traffic patterns and a powerful message-passing inference algorithm. The beliefpropagation algorithm, originally designed for bayesian inference on tree-like graphs [START_REF] Pearl | Probabilistic Reasoning in Intelligent Systems: Network of Plausible Inference[END_REF], is widely used in a variety of inference problem (e.g. computer vision, coding theory. . . ) but to our knowledge has not yet been applied in the context of traffic prediction. The purpose of this paper is to give the first principles of such an approach, able to exploit both space and time correlation on a traffic network. The main focus is on finding a good way to encode some coarse information (typically whether traffic on a segment is fluid or congested), and to decode it in the form of real-time traffic reconstruction and prediction. In order to reconstruct the traffic and make predictions, we propose to use the socalled Bethe approximation of an underlying disordered Ising model (see e.g. [START_REF] Mézard | Spin Glass Theory and Beyond[END_REF]), to encode the statistical fluctuations and stochastic evolution of the traffic and the belief propagation (BP) algorithm, to decode the information. Those concepts are familiar to the computer science and statistical physics communities since it was shown [START_REF] Yedidia | Constructing free-energy approximations and generalized belief propagation algorithms[END_REF] that the output of BP is in general the Bethe approximation. The paper is organized as follows: Section II describes the model and its relationship to the Ising model and the Bethe approximation. The inference problem and our strategy to tackle it using the belief propagation approach are stated in Section III. Section IV is devoted to a more practical description of the algorithm, and to numerical results illustrating the method. Finally, some new research directions are outlined in Section V. II. TRAFFIC DESCRIPTION AND STATISTICAL PHYSICS We consider a road network for which we want to both reconstruct the current traffic conditions and predict future evolutions. 
To this end, all we have is partial information in the form of floating car data arriving every minute or so. The difficulty is of course that the up-to-date data only covers part of the network and that the rest has to be inferred from this. In order to take into account both spatial and temporal relationships, the graph on which our model is defined is made of space-time vertices that encode both a location (road link) and a time (discretized on a few minutes scale). More precisely, the set of vertices is V = L ⊗ Z + , where L corresponds to the links of the network and Z + to the time discretization. To each point α = ( , t) ∈ V, we attach an information τ α ∈ {0, 1} indicating the state of the traffic (1 if congested, 0 otherwise). On such a model, the problems of prediction and reconstruction are equivalent, since they both amount to estimating the value of a subset of the nodes of the graph. The difference, which is obvious for the practitioner, lies mainly in the nature (space or time) of the correlations which are most exploited to perform the two tasks. Each vertex is correlated to its neighbors (in time and space) and the evaluation of this local correlation determines the model. In other words, we assume that the joint probability distribution of τ V def = {τ α , α ∈ V} ∈ {0, 1} V is of the form p({τ α , α ∈ V}) = α∈V φ α (τ α ) (α,β)∈E ψ αβ (τ α , τ β ) (1) where E ⊂ V 2 is the set of edges, and the local correlations are encoded in the functions ψ and φ. V together with E describe the space-time graph G and V(α) ⊂ V denotes the set of neighbors of vertex α. The model described by [START_REF] Chowdhury | Statistical physics of vehicular traffic and some related systems[END_REF] is actually equivalent to an Ising model on G, with arbitrary coupling between adjacent spins s α = 2τ α -1 ∈ {-1, 1}, the up or down orientation of each spin indicating the status of the corresponding link (Fig. 1). The homogeneous Ising model (uniform coupling constants) is a well-studied model of ferro (positive coupling) or anti-ferro (negative coupling) material in statistical physics. It displays a phase transition phenomenon with respect to the value of the coupling. At weak coupling, only one disordered state occurs, where spins are randomly distributed around a mean-zero value. Conversely, when the coupling is strong, there are two equally probable states that correspond to the onset of a macroscopic magnetization either in the up or down direction: each spin has a larger probability to be oriented in the privileged direction than in the opposite one. From the point of view of a traffic network, this means that such a model is able to describe three possible traffic regimes: fluid (most of the spins up), congested (most of the spins down) and dense (roughly half of the links are congested). For real situations, we expect other types of congestion patterns, and we seek to associate them to the possible states of an inhomogeneous Ising model with possibly negative coupling parameters, referred to as spin glasses in statistical physics [START_REF] Mézard | Spin Glass Theory and Beyond[END_REF]. The practical information of interest which one wishes to extract from ( 1) is in the form of local marginal distributions p α (τ α ), once a certain number of variables have been fixed by probe vehicles observations. They give the probability for a given node to be saturated at a given time and in turn can be the basis of a travel time estimation. 
From a computational viewpoint, the extraction cost of such an information from an Ising model on a multiply connected graph is known to scale exponentially with the size of the graph, so one has to resort to some approximate procedure. As we explain now such an approximation exists for dilute graphs (graphs with a tree-like local structure). On a simply connected graph, the knowledge of p α (τ α ) the one-vertex and p αβ (τ α , τ β ) the two-vertices marginal probabilities is sufficient [START_REF] Pearl | Probabilistic Reasoning in Intelligent Systems: Network of Plausible Inference[END_REF] to describe the measure [START_REF] Chowdhury | Statistical physics of vehicular traffic and some related systems[END_REF]. p(τ V ) = (α,β)∈E p αβ (τ α , τ β ) α∈V p α (τ α ) qα-1 = α∈V p α (τ α ) (α,β)∈E p αβ (τ α , τ β ) p α (τ α )p β (τ β ) , (2) where q α denotes the number of neighbors of α. Since our space time graph G is multi-connected, this relationship between local marginals and the full joint probability measure can only be an approximation, which in the context of statistical physics is referred to as the Bethe approximation. This approximation is provided by the minimum of the socalled Bethe free energy, which, based on the form (2), is an approximate form of the Kullback-Leibler distance, D(b p) def = τV b(τ V ) ln b(τ V ) p(τ V ) , between the reference measure p and an approximate one b. This rewrites in terms of a free energy as D(b p) = F(b) -F(p), where F(b) def = U(b) -S(b), (3) with the respective definitions of the energy U and of the entropy S U(b ) def = - (α,β)∈E τα,τ β b αβ (τ α , τ β ) log ψ αβ (τ α , τ β ) - α∈V τα b α (τ α ) log φ α (τ α ), S(b) def = - (α,β)∈E τα,τ β b αβ (τ α , τ β ) log b αβ (τ α , τ β ) + α∈V τα (q α -1)b α (τ α ) log b α (τ α ). The set of single vertex b α and and two-vertices b αβ marginal probabilities that minimize (3) form the Bethe approximation of the Ising model. For reasons that will become evident in Section III-B, these will also be called beliefs. It is known that the quality of the approximation may deteriorate in the presence of short loops. In our case, the fact that the nodes are replicated along the time axis alleviates this problem. In practice, what we retain from an inhomogeneous Ising description is the possibility to encode a certain number of traffic patterns in a statistical physics model. This property is also shared by its Bethe approximation (BA) and it is actually easier to encode the traffic patterns in this simplified model rather than the original one. Indeed, it will be shown in Section III-C that the computation of the BA from the marginal probabilities is immediate. The data collected from the probe vehicles is used in two different ways. The most evident one is that the data of the current day directly influences the prediction. In parallel, this data is collected over long periods (weeks or months) in order to estimate the model [START_REF] Chowdhury | Statistical physics of vehicular traffic and some related systems[END_REF]. Typical historical data that is accumulated is • pα (τ α ): the probability that vertex α is congested (τ α = 1) or not (τ α = 0); • pαβ (τ α , τ β ): the probability that a probe vehicle going from α to β ∈ V(α) finds α with state τ α and β with state τ β . The computation of pα and pαβ requires a proper congestion state indicator τ α that we assume to be the result of the practitioner's pretreatment of the FCD. 
The definition of this indicator is a problem of its own and is outside of the scope of this article. A relevant FCD variable is instantaneous speed. An empirical threshold may be attached to each link in order to discriminate (in a binary or in a continuous manner) between a fluid and a congested situation. Another approach is to convert the instantaneous speed in a probability distribution of the local car density, when an empirical fundamental diagram is known for a given link. Aggregation over a long period of these estimators yields then the desired historical data. The edges (α, β) of the space time graph G are constructed based on the presence of a measured mutual information between α and β, which is the case when pαβ (τ α , τ β ) = pα (τ α )p β (τ β ). III. THE RECONSTRUCTION AND PREDICTION ALGORITHM A. Statement of the inference problem We turn now to our present work concerning an inference problem, which we set in general terms as follows: a set of observables τ V = {τ α , α ∈ V}, which are stochastic variables are attached to the set V of vertices of a graph. For each edge (α, β) ∈ E of the graph, an accumulation of repetitive observations allows to build the empirical marginal probabilities {p αβ }. The question is then: given the values of a subset τ V * = {τ α , α ∈ V * }, what prediction can be made concerning V * , the complementary set of V * in V? There are two main issues: • how to encode the historical observations (inverse problem) in an Ising model, such that its marginal probabilities on the edges coincide with the pαβ ? • how to decode in the most efficient manner-typically in real time-this information, in terms of conditional probabilities P (τ α |τ V * )? The answer to the second question will somehow give a hint to the first one. B. The belief propagation algorithm BP is a message passing procedure [START_REF] Pearl | Probabilistic Reasoning in Intelligent Systems: Network of Plausible Inference[END_REF], which output is a set of estimated marginal probabilities (the beliefs b αβ and b α ) for the measure (1). The name "belief" reflects the artificial intelligence roots of the algorithm. The idea is to factor the marginal probability at a given site in a product of contributions coming from neighboring sites, which are the messages. The messages sent by a vertex α to β ∈ V(α) depends on the messages it received previously from other vertices: m α→β (τ β ) ← τα∈{0,1} n α→β (τ α )φ α (τ α )ψ αβ (τ α , τ β ), (4) where n α→β (τ α ) def = γ∈V(α)\{β} m γ→α (τ α ). (5) The messages are iteratvely propagated into the network with a parallel, sequential or random policy. If they converge to a fixed point, the beliefs b α are then reconstructed according to b α (τ α ) ∝ φ α (τ α ) β∈V(α) m β→α (τ α ), (6) and, similarly, the belief b αβ of the joint probability of (τ α , τ β ) is given by b αβ (τ α , τ β ) ∝ n α→β (τ α )n β→α (τ β ) × φ α (τ α )φ β (τ β )ψ αβ (τ α , τ β ). (7) In the formulas above and in the remainder of this paper, the symbol ∝ indicates that one must normalize the beliefs so that they sum to 1. A simple computation shows that equations ( 6) and ( 7) are compatible, since (4)-( 5) imply that τα∈{0,1} b αβ (τ α , τ β ) = b β (τ β ). In most studies, it is assumed that the messages are normalized so that τ β ∈{0,1} m α→β (τ β ) = 1. holds. The update rule (4) indeed indicates that there is an important risk to see the messages converge to 0 or diverge to infinity. 
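As a concrete illustration of the message updates (4)-(5), the normalization just discussed, and the belief read-out (6), the following sketch runs normalized BP on a small loopy graph of binary variables with a "random" update policy. The toy graph and the potentials φ and ψ are illustrative placeholders rather than the traffic model itself.

```python
# Minimal sketch of normalized belief propagation with binary variables,
# following updates (4)-(7). Graph and potentials are illustrative.
import random

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # small loopy "space-time" graph
neigh = {a: [] for a in nodes}
for u, v in edges:
    neigh[u].append(v)
    neigh[v].append(u)

phi = {a: [0.5, 0.5] for a in nodes}            # singleton potentials phi_alpha(tau)
PSI = [[1.2, 0.8], [0.8, 1.2]]                  # symmetric pairwise potential psi(tau_a, tau_b)

msg = {(a, b): [0.5, 0.5] for a in nodes for b in neigh[a]}

def n_excl(a, b, ta):
    """Product of messages received by a, excluding the one from b (eq. 5)."""
    out = 1.0
    for g in neigh[a]:
        if g != b:
            out *= msg[(g, a)][ta]
    return out

random.seed(0)
for _ in range(2000):                           # "random" update policy
    a = random.choice(nodes)
    for b in neigh[a]:
        new = [sum(n_excl(a, b, ta) * phi[a][ta] * PSI[ta][tb] for ta in (0, 1))
               for tb in (0, 1)]                # eq. (4)
        z = sum(new)
        msg[(a, b)] = [v / z for v in new]      # normalization

beliefs = {}
for a in nodes:                                 # eq. (6); b=None keeps all neighbors
    unnorm = [phi[a][t] * n_excl(a, None, t) for t in (0, 1)]
    z = sum(unnorm)
    beliefs[a] = [round(v / z, 3) for v in unnorm]
print(beliefs)
```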
It is however not immediate to check that the normalized version of the algorithm has the same fixed points as the original one (and therefore the Bethe approximation). This point has been analyzed in [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF] and the conclusion is that the fixed points of both version of the algorithm coincide, except possibly when the graph has a unique cycle. We can safely expect that it is not the case in practical situations. It has been realized a few years ago [START_REF] Yedidia | Generalized belief propagation[END_REF] that the fixed points of the BP algorithm coincide with stable points of the Bethe free energy (3), and that moreover stable fixed points correspond to local minima of (3) [START_REF] Heskes | Stable fixed points of loopy belief propagation are minima of the Bethe free energy[END_REF]. BP is therefore a simple and efficient way to compute the Bethe approximation of our inhomogeneous Ising model. We propose to use the BP algorithm for two purposes: estimation of the model parameters (the functions ψ αβ and φ α ) from historical data and reconstruction of traffic from current data. C. Setting the model with belief propagation The fixed points of the BP algorithm (and therefore the Bethe approximation) allow to approximate the joint marginal probability p αβ when the functions ψ αβ and φ α are known. Conversely, it can provide good candidates for ψ αβ and φ α from the historical values pαβ and pα . To set up our model, we are looking for a fixed point of the BP algorithm satisfying ( 4)-( 5) and such that b αβ (τ α , τ β ) = pαβ (τ α , τ β ) and therefore b α (τ α ) = pα (τ α ). It is easy to check that the following choice of φ and ψ, ψ αβ (τ α , τ β ) = pαβ (τ α , τ β ) pα (τ α )p β (τ β ) , (8) φ α (τ α ) = pα (τ α ), (9) leads (1) to coincide with [START_REF] Klar | Mathematical models for vehicular traffic[END_REF]. They correspond to a normalized BP fixed point for which all messages are equal to 1/2. It has been shown in [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF] that this form of φ and ψ is in some sense canonical: any other set of functions yielding the same beliefs is equal to ( 8)-( 9), up to a change of variable. This equivalence holds for the beliefs at the other fixed points of the algorithm and their stability properties. This crucial result means that it is not needed to learn the parameters of the Ising model, but that they can be readily recovered from the Bethe approximation. The message update scheme (4) of previous section can therefore be recast as m α→β (τ β ) ← τα∈{0,1} n α→β (τ α )p αβ (τ α |τ β ) (10) and the beliefs are now expressed as b α (τ α ) ∝ pα (τ α ) γ∈V(α) m γ→α (τ α ), (11) b αβ (τ α , τ β ) ∝ pαβ (τ α , τ β )n α→β (τ α )n β→α (τ β ). ( 12 ) There is no guarantee that the trivial constant fixed point is stable. However, the following theorem, proved in [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF], shows that this can be decided from the mere knowledge of the marginal which we want to model. Theorem 1: The fixed point {p} is stable if, and only if, the matrix defined, for any pair of oriented edges (α, β) ∈ E, (α , β ) ∈ E, by the elements J α β αβ = pαβ (1|1) -pαβ (1|0) 1 1 {α ∈V(α)\{β}, β =α} , has a spectral radius (largest eigenvalue in norm) smaller than 1. 
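Numerically, the criterion of Theorem 1 amounts to assembling a matrix indexed by oriented edges and computing its spectral radius. The sketch below, which assumes NumPy is available and uses placeholder conditional probabilities p̂αβ(1|1) and p̂αβ(1|0), shows one way this check could be implemented.

```python
# Sketch of the Theorem 1 stability check: build the matrix indexed by
# oriented edges and compare its spectral radius to 1. The conditional
# probabilities p_cond[(a, b)] = (p_ab(1|1), p_ab(1|0)) are placeholders.
import numpy as np

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
oriented = [(a, b) for (u, v) in edges for (a, b) in ((u, v), (v, u))]
neigh = {a: [] for a in nodes}
for u, v in edges:
    neigh[u].append(v)
    neigh[v].append(u)

p_cond = {e: (0.7, 0.3) for e in oriented}   # illustrative p(1|1), p(1|0)

idx = {e: i for i, e in enumerate(oriented)}
J = np.zeros((len(oriented), len(oriented)))
for (a, b) in oriented:
    for (ap, bp) in oriented:
        # non-zero only if (a', b') feeds (a, b): a' in V(a)\{b} and b' = a
        if bp == a and ap in neigh[a] and ap != b:
            J[idx[(a, b)], idx[(ap, bp)]] = p_cond[(a, b)][0] - p_cond[(a, b)][1]

rho = max(abs(np.linalg.eigvals(J)))
print("spectral radius:", round(float(rho), 3), "-> stable" if rho < 1 else "-> unstable")
```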
A sufficient condition for this stability is therefore pαβ(1|1) - pαβ(1|0) < 1/(qα - 1), for all α ∈ V, β ∈ V(α). In addition, on a dilute graph, the knowledge of the Jacobian coefficient distribution and the connectivity distribution of the graph is enough to determine the stability property by a mean field argument [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF]. D. Traffic reconstruction and prediction Let V * be the set of vertices that have been visited by probe vehicles. Reconstructing traffic from the data gathered by those vehicles is equivalent to evaluating the conditional probability p α (τ α |τ V * ) = p α,V * (τ α , τ V * ) / p V * (τ V * ), (13) where τ V * is a shorthand notation for the set {τ α } α∈V * . The BP algorithm applies to this case if a specific rule is defined for vertices α ∈ V * : since the value of τ α is known, there is no need to sum over possible values and (10) becomes m α→β (τ β ) ← n α→β (τ α ) p αβ (τ α |τ β ). (14) IV. PRACTICAL CONSIDERATIONS AND SIMULATION The algorithm outlined in Section III can be summarized by the flowchart of Fig. 2. It is supposed to be run in real time, over a graph which corresponds to a time window (typically a few hours) centered around the present time, with probe vehicle data added as it becomes available. In this perspective, the reconstruction and prediction operations are done simultaneously on an equal footing, the distinction being simply the time-stamp (past for reconstruction or future for prediction) of a given computed belief. The output of the previous run can be used as initial messages for a new run, in order to speed up convergence. Full re-initialization (typically a random set of initial messages) has to be performed within a time interval of the order of, but smaller than, the time-scale of typical traffic fluctuations.

Fig. 2. The traffic reconstruction algorithm:
    ∀α ∈ V, β ∈ V(α): m α→β ← 1 ; it ← 0
    repeat
        ∀α ∈ V, β ∈ V(α): m_old α→β ← m α→β ; it ← it + 1
        choose randomly α0 ∈ V
        if α0 ∈ V*: compute m α0→β for all β ∈ V(α0) using (14)
        else:       compute m α0→β for all β ∈ V(α0) using (10)
    until it = itmax or ||m_old - m|| < ε
    compute beliefs using (6) and (7)
    end
The parameters, besides those that have already been defined elsewhere, are the total number of iterations itmax, the maximal error ε > 0 and a norm ||·|| on the set of messages, whose choice is up to the implementer. Likewise, the update policy described here is "random", but parallel or sequential updates can also be used.

We have tested the algorithm on the artificial traffic network shown on the program's screenshot of Fig. 3. To this end, we used a simulated traffic system which has the advantage of yielding exact empirical data correlations. For real data, problems may arise because of noise in the historical information used to build the model; this additional difficulty will be treated in a separate work. The simulator implements a queueing network, where each queue represents a link of the traffic network (a single-way lane) and has a finite capacity. To each link, we attach a variable ρ ∈ [0, 1], the car density, which is represented by a color code on the user interface snapshot. As already stated in Section II, the physical traffic network is replicated to form a space-time graph, in which each vertex α = (ℓ, t) corresponds to a link ℓ at a given time t of the traffic graph. To any space-time vertex α, we associate a binary congestion variable τ α ∈ {0, 1}. The statistical physics description amounts to relating the probability of saturation P(τ α = 1) to the density ρ α . For the sake of simplicity, we consider a linear relation and build our historical p according to some time-averaging procedure. In practice, the car density would not be available from the FCD and a preprocessing of information would be necessary. In our oversimplified setting, the single-vertex beliefs directly yield an estimation of the car density. Nevertheless, more realistic data collection and modeling would be completely transparent w.r.t. the algorithm. To estimate the quality of the traffic restoration we use the following estimator: reconstruction rate def= (1/|V|) Σ α∈V 1{|b α - ρ α | < 0.2}, which computes the fraction of space-time nodes α for which the belief b α does not differ by more than an arbitrary threshold of 0.2 from ρ α . A typical prediction time series is shown in Fig. 4. The overall traffic level, characterized by some real number between 0 and 1, oscillates between dense and fluid conditions with a certain amount of noise superimposed. In this setting, we observe first that BP has three fixed points, among which the reference b = p (see Section III-C), which is in fact unstable because it is a superposition of distinct measures. The two additional fixed points actually represent the dense and fluid traffic conditions. These additional states appear spontaneously and some fine tuning is required to control saturation effects [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF]. This is reflected in the sudden drops of the reconstruction rate when the algorithm jumps from one state to the other during transient phases. A selection criterion based on free energy measurements may be used to choose the most relevant fixed point [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF]. Concerning the dependence of the reconstruction rate on the number of probe vehicles, Fig. 5 indicates that knowledge of less than 10% of the links (10 vehicles for 122 links) is sufficient in this setting to identify the traffic regime correctly most of the time. However, when the number of probes is increased, the reconstruction rate given by our algorithm saturates around 80%. In addition to the fact that time correlations are not incorporated in our model, the main reason for this saturation is that in our practical implementation [START_REF] Furtlehner | Belief propagation and Bethe approximation for traffic prediction[END_REF] correlations between probe vehicles are neglected when imposing (13).
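The reconstruction-rate estimator defined in this section is simple to compute once the beliefs and the simulated densities are available; a minimal sketch follows, with illustrative arrays and the 0.2 threshold used in the text.

```python
# Sketch of the reconstruction-rate estimator: fraction of space-time nodes
# whose belief is within 0.2 of the true density. Arrays are illustrative.
def reconstruction_rate(beliefs, densities, threshold=0.2):
    assert len(beliefs) == len(densities)
    hits = sum(1 for b, rho in zip(beliefs, densities) if abs(b - rho) < threshold)
    return hits / len(beliefs)

b   = [0.10, 0.35, 0.80, 0.55, 0.20]   # single-vertex beliefs b_alpha
rho = [0.05, 0.60, 0.75, 0.50, 0.45]   # simulated car densities rho_alpha
print(reconstruction_rate(b, rho))      # -> 0.6 on this toy data
```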
The algorithm has been implemented and illustrated using an artificial traffic model. While our main focus is currently on testing on more realistic simulations, several generalizations are considered for future work, using the extension of the results of Section III to a more general factor graph setting done in [START_REF] Furtlehner | Belief propagation inference with a prescribed fixed point[END_REF]: Firstly, the binary description corresponding to the underlying Ising model is arbitrary. Traffic patterns could be represented in terms of s different inference states. A Potts model with s-states variables would leave the belief propagation algorithm and its stability properties structurally unchanged. However, since the number of correlations to evaluate for each link is s 2 -1, this number of states should be subject to an optimization procedure. Secondly, our way of encoding traffic network information might need to be augmented to cope with real world situations. This would simply amount to use a factor-graph used to propagate this information. In particular it is likely that a great deal of information is contained in the correlations of local congestion with aggregate traffic indexes, corresponding to sub-regions of the traffic network. Taking these correlations into account would result in the introduction of specific variables and function nodes associated to these aggregate traffic indexes. These aggregate variables would naturally lead to a hierarchical representation of the factor graph, which is necessary for inferring the traffic on large scale network. Additionally, time dependent correlations which are needed for the description of traffic, which by essence is an out of equilibrium phenomenon, could be conveniently encoded in these traffic index variables. Ultimately, for the elaboration of a powerful prediction system, the structure of the information content of a trafficroad network has to be elucidated through a specific statistical analysis. The use of probe vehicles, based on modern communications devices, combined with a belief propagation approach, is in this respect a very promising approach. Fig. 1 . 1 Fig. 1. Traffic network (a) and Ising model (b) on a random graph. Up (resp. down) arrows correspond to fluid (resp. congested) traffic. Fig. 3 .Fig. 4 .Fig. 5 . 345 Fig. 3. Traffic network as produced by the simulator. The continuous color code represents the traffic index from 0 (green/light) to 1 (red/dark). There are 35 physical nodes and 122 physical links (road segments), simulated on 40 time steps, which yields a time-space graph G with 4880 nodes.
30,270
[ "838835", "3706", "12611" ]
[ "56056", "2423", "2423", "27997" ]
01006927
en
[ "spi" ]
2024/03/05 22:32:10
2012
https://hal.science/hal-01006927/file/giraud2012.pdf
Eliane Giraud email: eliane.giraud@ulg.ac.be Michel Suery Michel Coret M Suéry High temperature compression behavior of the solid phase resulting from drained compression of a semi-solid 6061 alloy High temperature compression behavior of the solid phase resulting from drained compression of a semi-solid 6061 alloy Introduction The rheological behavior of alloys in the solidification range is now studied more and more extensively [1][2][3][4][5] owing to the importance of processes during which solid and liquid phases are coexisting. This coexistence occurs obviously during conventional solidification of castings, ingots or billets but also during liquid phase sintering, welding and forming such as rheocasting or thixocasting [6][7][8][9][10]. The knowledge of this behavior is indeed important for modeling purposes to avoid numerous trial and error experiments. For example, during semi-continuous casting of Al billets, defects like hot tears or macrosegregations [11][12][13] can form depending on alloy composition and process parameters. Their prediction requires modeling the whole process by considering the behavior of the semi-solid alloy taking into account both the deformation of the solid and the flow of the liquid. Criteria for hot tear formation are then introduced based on critical solid deformation or cavitation pressure in the liquid [14,15]. To determine the rheological behavior of the solid, it is generally assumed that its composition is not far from that of the alloy. Experiments are then carried out at temperatures close to but below the solidus temperature of the alloy to determine the various parameters of the constitutive equation. In this temperature range, viscoplastic behavior is generally a good approximation so that the constitutive equation is mainly determined by the strain rate sensitivity parameter and by the activation energy [16][17][18]. This procedure therefore does not take into account the fact that the composition of the solid phase is not that of the alloy and also that it is changing with temperature and thus with the solid volume fraction present in the alloy. Indeed, during solidification or partial melting of a binary alloy, solid and liquid are coexisting with compositions given by the phase diagram and proportions by the lever rule in equilibrium conditions. The composition and the proportion of these two phases thus continuously change with temperature. To determine the constitutive equation of the solid phase, it is therefore necessary to test alloys with various compositions corresponding to those of the solid phase at various temperatures below the solidus. Extrapolation of the results at the solidus temperature for each composition allows determining the behavior of the solid phase in a semi-solid alloy at various temperatures. This procedure can be quite easily considered in the case of a binary alloy but it is hardly possible in a multi-constituent alloy. In this case indeed, the composition of the solid phase is not simply given by the phase diagram and even the solid can be constituted of several phases. Specific softwares are thus required which are able to give the composition and the proportion of the various phases as a function of temperature for various solidification or partial melting conditions. The next step of the procedure would be to prepare alloys with the composition of the solid phase at various temperatures and to test them. Another simpler procedure can be considered, i.e. 
drainage of the liquid from the semi-solid alloy and testing of the remaining solid. If all the liquid present at a given temperature can be drained out of a sample, the remaining solid has the exact composition of the solid phase at this temperature. Repeating this procedure at various temperatures would then lead to various specimens having the composition of the solid phase at these temperatures. One way to carry out this drainage of the liquid is to use drained oedometric compression, already performed on aluminum alloys [2,19]. It consists in compressing a semi-solid sample placed in a container with a filter on top of it by applying the compression using a hollow piston to drain the liquid through the filter. This procedure allows one to measure also the compressibility of the solid skeleton provided that the pressure required to drain the liquid is low compared to that required to compress the solid. In the present work, the feasibility of the procedure exposed above has been studied in the case of a 6061 alloy. Drained compressive tests have been carried out at various temperatures within the solidification range. These tests allow measuring the compressibility of the solid for various temperatures and obtaining specimens representative of the solid phase found in the semi-solid state. Then compressive tests at high temperature (below the solidus temperature) have been carried out on these drained specimens in order to determine the constitutive equation of the solid. Finally, a comparison with the behavior of the non-drained 6061 alloy within the same temperature range has been performed. Experimental procedure The alloy used for this investigation is a 6061 alloy provided by Almet (France) in the form of a rolled plate of 50 mm thickness in the heat treated T6 condition. In order to drain the liquid from a semi-solid sample, a drained compression apparatus has been designed. It consists in a container of 35 mm diameter in which the alloy is placed while being still solid (Fig. 1). During the test, the alloy is initially melted, and then partially solidified at a cooling rate of 20 K/min until a given solid fraction is reached. At this stage, the temperature is kept constant, and a downward vertical displacement is imposed to the hollow piston. Two displacement rates of the piston have been studied: 0.008 and 0.015 mm/s. A system of filtration allows the liquid to flow out of the sample, and, consequently, the solid fraction increases. In fact this system is constituted of two different filters as shown in Fig. 1: a rigid stainless steel filter with quite large holes (about 2 mm diameter) associated with a much thinner stainless steel filter with very small holes (about 200 m diameter). The first filter allows transmitting the load from the hollow piston whereas the second is required for the filtration of the liquid. This test can be seen as a means to impose solidification mechanically at constant temperature. Assuming that the solid fraction evolves only by liquid drainage, the following equation can be used to relate the imposed axial strain ε z to the solid fraction g s : g s = g s0 exp(-ε z ) (1) where g s0 is the initial solid fraction before any strain is imposed. Two values of this initial solid fraction (0.6 and 0.8) have been selected by using two temperatures (910 K and 897 K, respectively) at which oedometric compression has been carried out. These two temperatures have been determined by using the ProPhase software (from ALCAN CRV, France). 
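Eq. (1) fixes how much axial strain must be imposed before the theoretical solid fraction reaches 1, which is the stop criterion used for the drained tests. The short sketch below evaluates it for the two initial solid fractions studied; the initial specimen height is a placeholder, since it is not quoted in the text.

```python
import numpy as np

# Eq. (1): g_s = g_s0 * exp(-eps_z), with eps_z = ln(h/H0) the (negative) true axial strain.
def strain_to_reach(gs_target, gs0):
    """Axial true strain at which the theoretical solid fraction reaches gs_target."""
    return -np.log(gs_target / gs0)        # negative in compression

H0 = 30e-3  # hypothetical initial height of the semi-solid column (m); not given in the text
for gs0 in (0.6, 0.8):
    eps_end = strain_to_reach(1.0, gs0)    # strain at which g_s -> 1 (test stop criterion)
    h_end = H0 * np.exp(eps_end)           # corresponding piston position
    print(f"gs0 = {gs0}: |eps_z| = {-eps_end:.2f}, piston travel = {(H0 - h_end)*1e3:.0f} mm")
```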
The compression of the sample is pursued until g s is theoretically equal to 1, which means that all the liquid should have been drained from the semi-solid sample. The test was thus stopped when ε z satisfied this condition. After drained compression, the solid remaining in the container has been cut in two pieces through the axis of the cylinder to observe the microstructure of the alloy after drainage of the liquid. These two pieces have been thereafter machined in order to get compression specimens of 8 mm in diameter and 6 mm in height. These specimens have been used for compression tests carried out at high temperature by using the strain rate jump procedure. Four temperatures have been investigated: 723, 773, 803 and 823 K as well as five strain rates: 10 -4 , 2.5 × 10 -4 , 10 -3 , 2.5 × 10 -3 and 6 × 10 -3 s -1 . Similar compression tests have been also carried out for comparison on samples machined in the non-drained 6061 rolled plate. Experimental results Drained compression tests Influence of initial solid fraction Fig. 2 shows the variation of the applied stress as a function of the theoretical solid fraction (given by Eq. ( 1)) for the two tests carried out with different initial solid fractions at a displacement rate of the piston of 0.015 mm/s. The applied stress obviously increases with increasing strain i.e. with increasing solid fraction. It increases slowly initially when the initial solid fraction is small (0.60) and much more rapidly when the initial solid fraction is larger (0.80). It should be noted that the two curves are coming close to each other at large solid fractions but the curve corresponding to the larger initial solid fraction is always below that for the lower initial solid fraction. In addition, it is surprising to observe that the stress does not increase very sharply when the solid fraction is close to 1 although the solid is not compressible. Influence of displacement rate Fig. 3 shows the variation of the applied stress as a function of the solid fraction for the two tests carried out with different displacement rates of the piston and for an initial solid fraction of 0.8. The displacement rate has an influence on the stress required to drain the liquid from the specimen: a high strain rate leads to a higher measured stress at a given solid fraction. This strain rate sensitivity is correlated to the viscoplastic behavior of the solid network in this solid fraction range as already observed for tensile [20] and shear [21] experiments. 3.2. Compressive tests at high temperature (below the solidus temperature) Compressive tests at constant strain rate on drained samples Since most of the solidification defects form in the last stage of solidification (i.e. g s > 0.8), only samples machined in the remaining solid obtained after a drained experiment at a high initial solid fraction (i.e. g s0 = 0.8) have been tested in order to determine the high temperature behavior of the solid phase. Fig. 4 shows the stress-strain curves obtained at the various temperatures and for the different strain rates applied successively from 10 -4 s -1 to 6 × 10 -3 s -1 . For a given strain rate, stress increases until a plateau where it remains relatively constant, thus showing negligible strain hardening during deformation of the solid material. Moreover, stress obviously increases with increasing strain rate and decreasing temperature, which highlights the viscoplastic behavior of the solid material. 
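The strain-rate sensitivity visible in Fig. 3, and probed more directly by the strain rate jump tests of Figs. 4 and 5, is usually quantified as m = ∂ln σ / ∂ln ε̇ at constant strain and temperature. A minimal sketch of the jump analysis follows; the two stress plateaus are illustrative numbers, not values read off the figures.

```python
import math

def rate_sensitivity(sigma1, sigma2, rate1, rate2):
    """Strain-rate sensitivity m from the stress plateaus before/after a strain-rate jump."""
    return math.log(sigma2 / sigma1) / math.log(rate2 / rate1)

# Illustrative plateaus (MPa) for a jump from 1e-3 to 2.5e-3 s^-1 (assumed values).
m = rate_sensitivity(20.0, 21.5, 1e-3, 2.5e-3)
print(f"m = {m:.3f}")   # a value near 0.08, as reported later in the text, gives n = 1/m ~ 12
```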
Compressive tests at constant strain rate on non-drained 6061 samples The same type of compression experiments at various temperatures in the solid state as for drained samples has been performed on non-drained 6061 samples. Fig. 5 shows the stress-strain curves obtained at the various temperatures and for the different strain rates applied successively from 2.5 × 10 -4 s -1 to 6 × 10 -3 s -1 . Negligible strain hardening and viscoplastic behavior of the solid material are again found. Discussion The behavior of the semi-solid alloy during the drained experiments will be discussed first. Then the behavior observed during compression at high temperature of the solid phase will be examined in order to determine the appropriate rheological law. Finally, a comparison with the behavior at high temperature of the nondrained 6061 alloy will be performed. Behavior of the semi-solid alloy during drained compressive tests The stress required for the drained oedometric compression of the partially solidified specimen increases with increasing strain or solid fraction (Figs. 2 and3). This stress is due to both the densification of the solid skeleton and the liquid flow out of the sample which becomes more and more difficult as the remaining liquid volume fraction decreases. Indeed, a decrease of the liquid volume fraction allows increasing the number of contacts between dendrites and decreasing the interdendritic spaces thus decreasing the permeability of the solid skeleton. Adopting Darcy's law to describe liquid flow and assuming homogeneous deformation in the specimen, its contribution to the stress measured during the drained compression test can be evaluated. The maximal interstitial pressure P L , which corresponds to the pressure at the bottom of the container is then given by [22]: P L = V p • Á 2 • K • H 0 • h 2 + P 0 ( 2 ) where V p is the velocity of the piston, Á is the liquid viscosity, K is the solid skeleton permeability, H 0 and h are the initial and current positions of the piston from the bottom of the container and P 0 is the pressure due to the drained liquid located above the filters. The variation of the position of the piston during the test is given by: h = H 0 • g S0 /1g L where g S0 is the initial solid fraction and g L is the current liquid fraction. To determine the solid skeleton permeability, the Kozeny-Carman equation can be used [23,24]: K = g 3 L • 3 1 /5 where 1 is the primary dendrite arm spacing. 1 is assumed equal to that given by Flemings for pure aluminum in [START_REF] Flemings | Solidification Processing[END_REF] since, to measure this parameter during our experiments, it would have been necessary to cool with a very high rate the specimen when a solid fraction of 0.8 is reached, which is not possible. P 0 is given by: P 0 = (H 0h) • L • g with L the liquid density and g the gravity acceleration. The variation of interstitial pressure with liquid fraction, calculated using Eq. ( 2), is shown in Fig. 6: it increases slowly up to a solid fraction of 0.97 and then very sharply when the solid fraction exceeds 0.97 which corresponds to the coalescence solid fraction [20,21]. This sharp increase is due to the fact that, for solid fractions larger than that for coalescence (>0.97), the number of solid bridges increases drastically thus leading to a drop of the solid skeleton permeability and then to more and more difficult liquid flow. However, the interstitial pressure remains much lower (0.07 MPa) than the experimental measured stress (12 MPa as shown in Figs. 
2 and3) so that it can be neglected during the deformation of the semisolid alloy. The mushy material therefore behaves during drained compression like a solid without liquid. Metallographic observation of the part of the specimen which has been drained through the filter (Fig. 7) and of the part which remained in the container (Fig. 8) shows that liquid has been effectively drained from the specimen. The solidified drained liquid (Fig. 7) contains much more eutectic and intermetallics than the non-drained 6061 alloy [20]. Since this liquid was drained from the specimen when the solid fraction was equal to 0.8, it is possible to determine its composition by using the ProPhase software for the solidification conditions of the experiment (20 K/min) (Table 1). The liquid is theoretically enriched in Mg, Si, Fe and Cu which are the main secondary elements of the 6061 alloy. An analysis by X-ray diffraction of the drained liquid, by using the Cu K␣ radiation, allows detecting various phases which contain these elements such as: Mg 2 Si, CuAl 2 , Al 15 (FeMn) 3 Si 2 and Al 4 Cu 2 Mg 8 Si 7 in addition to the Al-rich matrix (Fig. 9). This result confirms that liquid has been drained from the specimen. The solid part which remains in the container after drainage contains some intermetallics homogeneously distributed everywhere in the specimen (Fig. 8). Their concentration is not high enough to allow detection by X-ray diffraction: as shown in Fig. 9, only the Al matrix is detected. Since these intermetallics are observed after cooling, it is necessary to wonder whether they were already present in the material before liquid drainage or they formed upon cooling after liquid drainage. Prophase calculation indicates that the first intermetallics form in the 6061 alloy when the volume fraction of the primary phase is 0.84. Therefore, the intermetallics present in the solid part are more likely to result from the solidification of the liquid which was not drained from the specimen. Thus, the drainage of the liquid has not been complete which seems to be in contradiction with the stress-solid fraction curves (Figs. 2 and3): these curves indicate that the drained compression tests were stopped when the solid fraction was theoretically very close to 1, so that almost no liquid should have remained in the specimen. The fact that some liquid remains can be explained by two factors: the filter has been deformed slightly at the end of the compression test so that the real axial strain applied to the specimen was smaller than expected, and/or some solid was extruded through the filter. These assumptions allow us to explain the behavior observed when the apparent solid fraction reaches 1. Indeed, since the solid is not compressible, the stress should have increased asymptotically to infinite, which is not observed experimentally. In addition to the viscoplastic behavior of the solid phase shown in Fig. 3, the influence of the initial solid fraction and of the accumulated strain on the mechanical behavior of the semi-solid material has been investigated. Fig. 2 also shows that stress increases more or less rapidly with increasing strain depending on the initial solid fraction. This is due to the initial morphology of the solid phase: when the initial solid fraction is high, the solid grains are more connected so that drained compression involves extensive deformation of the solid with less rearrangement thus leading to an important increase of stress with increasing strain. Fig. 
2 shows also that, at a given solid fraction, the measured stress is higher when the initial solid fraction is low. This result can be explained by the level of accumulated strain required to reach this solid fraction, which is larger when the initial solid fraction is low. A larger accumulated strain obviously leads to a more deformed microstructure and consequently to a larger number of solid-solid contacts. However, when the strain is such that the solid fraction approaches 1, the microstructure is largely deformed with almost no liquid remaining, but, as strain hardening is not observed in this temperature range, the stress no longer depends on the initial solid fraction. Behavior of the drained solid at high temperature Fig. 4 has shown that the solid phase exhibits a viscoplastic behavior with negligible strain hardening. The most common approximation used to describe the mechanical behavior of such material consists in using a classical creep law as follows [16,18]: ε = A • n • exp - Q RT ( 3 ) where ε is the strain rate, is the measured stress, n is the stress sensitivity parameter, Q is the activation energy, A is a material constant, T is the temperature and R is the ideal gas constant. The behavior of the solid material is thus described by two main parameters: n and Q. The slope of the curves showing the variation of stress as a function of strain rate in logarithmic scales (Fig. 10) allows determining the strain rate sensitivity parameter m which is close to 0.08 whatever the temperature. The stress sensitivity parameter n is then equal to 12 (n = 1/m). From the curves showing the variation of ln( ε/ n ) as a function of -1/RT (Fig. 11), it is possible to deduce the activation energy Q which is equal to 347 kJ/mol. The values obtained for n and Q are very high. Indeed, as a general rule, the stress sensitivity parameter for materials deformed at high temperature is between 3 and 5. An exponent of 3 is generally attributed to dislocation glide controlled by viscous drag whereas a value of 5 corresponds to climb controlled dislocation glide [START_REF] Kloc | [END_REF]. A value of 8 can be sometimes observed when the substructure does not evolve with stress. The activation energy is usually around 130-140 kJ/mol which is the activation energy for self-diffusion in 4) for specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8. Table 2 Parameters for the rheological law (4) and for specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8. aluminum [27,28]. Thus, the classical creep law (3) is not adapted to describe the behavior of the solid phase of the 6061 alloy since the values of the various parameters have no physical meaning. Another possibility consists in using a hyperbolic sine law. Indeed, some alloys can exhibit a stronger stress increase than that predicted by an exponential law when exceeding a given stress or strain rate level [18,[29][30][31][32]. This leads to the presence of two regimes: a power-law region and a power-law breakdown region, presumably governed by different physical processes not already completely known. For intermediate strain rates and stresses, the alloy exhibits a behavior governed by a power law, whereas, for high strain rates and stresses, a hyperbolic sine law intervenes. Therefore, due to the stress level reached during the experiments, it can be assumed that the experimental conditions lead to the typical behavior of the power-law breakdown regime. 
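The fitting procedure just described for Eq. (3), m from the log-log slope of stress versus strain rate, n = 1/m, then Q from the slope of ln(ε̇/σⁿ) versus -1/(RT), can be scripted directly. The data below are synthetic, generated only to exercise the procedure; they are not the measured values of Figs. 10 and 11.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Synthetic flow-stress data used only to illustrate the fitting of Eq. (3).
rates = np.array([1e-4, 2.5e-4, 1e-3, 2.5e-3, 6e-3])      # s^-1, as in the experiments
temps = np.array([723.0, 773.0, 803.0, 823.0])             # K, as in the experiments
A_true, n_true, Q_true = 1.2, 12.0, 347e3                  # illustrative constants (A in MPa^-n s^-1)
stress = (rates[None, :] / (A_true * np.exp(-Q_true / (R * temps[:, None])))) ** (1.0 / n_true)

# 1) strain-rate sensitivity m = d(log sigma)/d(log eps_dot), then n = 1/m
m_fit = np.array([np.polyfit(np.log(rates), np.log(s), 1)[0] for s in stress])
n_fit = 1.0 / m_fit.mean()

# 2) activation energy Q from the slope of ln(eps_dot / sigma^n) versus -1/(R*T)
y = np.log(rates[None, :] / stress ** n_fit).ravel()
x = np.repeat(-1.0 / (R * temps), len(rates))
Q_fit = np.polyfit(x, y, 1)[0]

print(f"m = {m_fit.mean():.3f}, n = {n_fit:.1f}, Q = {Q_fit/1e3:.0f} kJ/mol")
```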
In this case, the following equation can be used to relate the strain rate ε to the experimental stress [18,29,30]: ε = A • G • b k • T • exp -Q RT • [sinh(˛• )] n (4) where b is the burgers vector (2.86 × 10 -10 m), k is the Boltzman constant (1.38 × 10 -23 J/K), A and ˛are material constants and G is the shear modulus. It is assumed that the shear modulus depends on temperature according to the equation suggested in [33]: G = 3.022 × 10 4 -16 × T (in MPa). The standard procedure to determine the parameters values can be divided into two main steps: (i) determination of the activation energy Q and the stress sensitivity parameter n by applying Eq. (3) on experimental data within the power-law regime and (ii) determination of the material constants A and ˛by applying Eq. (4) on experimental data within the power-law breakdown regime and by using the values of Q and n calculated in (i). This procedure assumes that sufficient data for each regime are available, which is not the case in this study. Therefore, the values of Q and n have been fixed to classical values for hot deformation of aluminum alloys [34,35] i.e. 142 kJ/mol and 5, respectively. Parameter's and material's constants A and ˛have then been determined so that the variation of ε • k • T • exp(Q/RT )/G • b as a function of sinh(˛• ) in a logarithmic scale exhibits a slope equal to 5, as shown in Fig. 12. The values of the different parameters of Eq. ( 4) are summarized in Table 2. It can be noticed that the material constant ˛is in agreement with classical values of the literature for aluminum alloys: from 0.01 to 0.08 [29][30][31]36], which tends to show that a hyperbolic sine law is relatively appropriate to describe the behavior of the solid phase of the 6061 alloy. The same study as for the drained alloy has been performed on the 6061 alloy in order to determine the most appropriate rheological law describing its behavior at high temperature. Since Eq. ( 3) still leads to high values for the activation energy Q (about 287 kJ/mol) and the stress sensitivity parameter (about 11), an identical procedure has been followed to estimate the values of the parameters for Eq. ( 4). These values are summarized in Table 3. Although the behavior of the non-drained 6061 alloy is also governed by a hyperbolic sine law, it can be noticed, by comparing Tables 2 and3, that the values of the constitutive parameters are different depending on the type of tested material. Therefore, the deformation at high temperature of the non-drained 6061 alloy differs from the deformation of the drained alloy, which confirms the necessity to study the behavior of the material with the appropriate composition to determine the response of the solid phase in the semi-solid state. Conclusion In this study, an original technique to determine the behavior at high temperature of the solid phase of a multi-constituent alloy was presented. It consists in: (i) drainage of the liquid present at a given temperature in order to obtain a solid with the exact composition of the solid phase at this temperature and (ii) deformation of the solid at high temperature to determine the rheological law. The densification behavior of the 6061 alloy in the mushy state has been investigated thanks to drained compressive tests. These tests consist in a mechanical solidification of the semi-solid alloy at a constant temperature resulting from liquid drainage through filters which leads to a nearly complete densification of the solid phase. 
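For completeness, Eq. (4) above can be inverted to predict a flow stress once the constants are fixed. The sketch below uses the Table 3 values for the non-drained alloy (n = 5, Q = 142 kJ/mol, α = 0.055 MPa⁻¹, A = 6 × 10⁻¹⁵ m² s⁻¹), because the Table 2 values for the drained material are not legible in this extraction; the SI unit bookkeeping assumed for A, G and b is also an assumption of the sketch, not a statement of the paper.

```python
import numpy as np

# Flow stress from the hyperbolic-sine law, Eq. (4), inverted for sigma:
#   sigma = (1/alpha) * asinh( [eps_dot*k*T*exp(Q/RT) / (A*G*b)]**(1/n) )
k_B, R = 1.38e-23, 8.314
b      = 2.86e-10            # Burgers vector (m)
n, Q   = 5.0, 142e3          # values fixed in the text
alpha  = 0.055               # MPa^-1 (Table 3, non-drained alloy)
A      = 6e-15               # m^2 s^-1 (Table 3, non-drained alloy)

def shear_modulus(T):
    """G(T) = 3.022e4 - 16*T in MPa (relation quoted in the text), returned in Pa."""
    return (3.022e4 - 16.0 * T) * 1e6

def flow_stress_MPa(eps_dot, T):
    Z = eps_dot * k_B * T * np.exp(Q / (R * T)) / (A * shear_modulus(T) * b)
    return np.arcsinh(Z ** (1.0 / n)) / alpha

for T in (723.0, 773.0, 803.0, 823.0):
    print(f"T = {T:.0f} K: sigma(1e-3 /s) = {flow_stress_MPa(1e-3, T):.1f} MPa")
```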
The results have shown that the compression behavior of the semi-solid alloy depends on strain rate, on the initial morphology of the solid skeleton and on the accumulated strain. Indeed, at a given solid fraction achieved during compression, the pressure required to densify the solid increases with decreasing initial solid fraction and thus increasing accumulated strain. Since strain hardening is not present in this temperature range, this result is correlated to the liquid distribution in the specimen which depends on the accumulated strain. Specimens resulting from drained compression have been further tested in simple compression at high temperature to determine the constitutive equation of the solid phase present in a semi-solid 6061 alloy at a given temperature and to compare it with that of the 6061 alloy. Compression tests with strain rate jumps at various temperatures have shown that the behavior of the two materials is governed by a hyperbolic sine law. The values of the parameters of the constitutive equation are however different. This thus shows that the behavior of the solid phase in the 6061 alloy differs from that of the alloy. It is therefore necessary to study the behavior of the material with the appropriate composition if the behavior of the solid phase must be known within the solidification range.

Fig. 1. Schematic view of the oedometric compression apparatus (a) with picture of the two filters (b).
Fig. 2. Variation of the applied stress during drained compression as a function of the solid fraction present in the specimen. The two curves correspond to two different initial solid fractions.
Fig. 3. Variation of the applied stress during drained compression as a function of the solid fraction present in the specimen. The two curves correspond to two different displacement rates of the piston.
Fig. 4. Stress-strain curves at various temperatures and various strain rates applied by compression on specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8.
Fig. 5. Stress-strain curves at various temperatures and various strain rates applied by compression on non-drained 6061 specimens.
Fig. 6. Pressure necessary to drain the liquid through a porous solid medium, calculated using Eq. (2).
Fig. 7. Microstructure of the liquid part of the 6061 specimen which has been drained through the filters for an initial solid fraction equal to 0.8: overall view (a) and magnified view (b).
Fig. 8. Microstructure of the solid part of the 6061 specimen remaining in the container after a drained compression at an initial solid fraction of 0.8: in the vicinity of the filters (a) and in the bottom of the specimen (b).
Fig. 9. X-ray diffraction patterns on the liquid part of the 6061 specimen which has been drained through the filters (a) and on the solid part which remained in the container (b) for an initial solid fraction equal to 0.8.
Fig. 10. Stress-strain curves in a logarithmic scale at various temperatures for specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8.
Fig. 12. Determination of the material constants A and α from Eq. (4) for specimens resulting from drained compression of the 6061 alloy at an initial solid fraction of 0.8.

4.3 Comparison with the behavior at high temperature of the non-drained 6061 alloy

Table 1. Composition (in wt%) of the liquid in a 6061 alloy when the solid fraction is equal to 0.8, obtained by ProPhase calculation.
    Si    Mg    Cu    Fe    Mn    Cr
    2.5   2.9   1.1   1.2   0.2   0.06

Table 3. Parameters for the rheological law (4) and for non-drained 6061 specimens.
    n    Q (kJ/mol)    α (MPa⁻¹)    A (m² s⁻¹)
    5    142           0.055        6 × 10⁻¹⁵

Acknowledgements
One of the authors (EG) is grateful to CNRS (French National Center for Scientific Research) and AREVA for financial support through a scholarship. The authors thank Cédric Gasquères, ALCAN CRV (France), for providing the ProPhase calculations and Stéphane Coindeau, CMTC (France), for the analysis by X-ray diffraction.
28,933
[ "14299" ]
[ "32956", "32956", "31214" ]
00125398
en
[ "phys" ]
2024/03/05 22:32:10
2007
https://hal.science/hal-00125398v3/file/1_over_f_noise_alkyl_on_Si_-_Clement_et_al_-PRB.pdf
Nicolas Clement Stephane Pleutin Oliver Seitz Stephane Lenfant Dominique Vuillaume email: dominique.vuillaume@iemn.univ-lille1.fr Stephane Pleutin Stephane Lenfant Nicolas Clément Stéphane Pleutin Stéphane Lenfant /f Tunnel Current Noise through Si-bound Alkyl Monolayers Keywords: Td ; 81, 07, Nb à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. I. INTRODUCTION Molecular electronics is a challenging area of research in physics and chemistry. Electronic transport in molecular junctions and devices has been widely studied from a static (dc) point of view. 1,2 More recently electron -molecular vibration interactions were investigated by inelastic electron tunneling spectroscopy. 3 In terms of the dynamics of a system, fluctuations and noise are ubiquitous physical phenomena. Noise is often composed of 1/ f noise at low frequency and shot noise at high frequency. Although some theories about shot noise in molecular systems were proposed, 4 it is only recently that it was measured, in the case of a single D 2 molecule. 5 Low frequency 1/ f noise was studied in carbon nanotube transistors, 6 but, up to now, no study of the low frequency current noise in molecular junctions (e.g., electrode/short molecules/electrode) has been reported. Low frequency noise measurements in electronic devices usually can be interpreted in terms of defects and transport mechanisms. 7 While it is obvious that 1/ f noise will be present in molecular monolayers as in almost any system, only a detailed study can lead to new insights in the transport mechanisms, defect characterization and coupling of molecules with electrodes. We report here the observation and detailed study of the 1/ f γ power spectrum of current noise through organic molecular junctions. n-Si/C 18 H 37 /Al junctions were chosen for these experiments because of their very high quality, which allows reproducible and reliable measurements. 8 The noise current power spectra (S I ) are measured for different biases. Superimposed on the background noise, we observe noise bumps over a certain bias range and propose a model that includes trap-induced tunnel current, which satisfactorily describes the noise behaviour in our tunnel molecular junctions. II. CURRENT-VOLTAGE EXPERIMENTS Si-C linked alkyl monolayers were formed on Si(111) substrates (0.05-0.2 Ω.cm) by thermally induced hydrosilylation of alkenes with Si:H, as detailed elsewhere. 8,9 50 nm thick aluminium contact pads with different surface areas between 9x10 -4 cm 2 and 4x10 -2 cm 2 were deposited at 3 Å/s on top of the alkyl chains. The studied junction, Sin/C 18 H 37 /Al, is shown in Fig. 1-a (inset). Figure 1-a shows typical current densityvoltage (J-V) curves. We measured 13 devices with different pad areas. The maximum deviation of the current density between the devices is not more than half an order of magnitude. It is interesting to notice that although devices A and C have different contact pad areas (see figure caption), their J-V curves almost overlap. This confirms the high quality of the monolayer. 9 Figure 1-b shows a linear behaviour around zero bias and we deduce a surface-normalized conductance of about 2-3x10 -7 S.cm -2 . For most of the measured devices, the J-V curves diverge from that of device C at V > 0.4 V, with an increase of current that can reach an order of magnitude at 1 V (device B). 
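As a rough consistency check, the surface-normalized conductance of 2-3 × 10⁻⁷ S cm⁻² quoted above can be converted into expected zero-bias resistances and small-bias currents for the range of pad areas used; the numbers below are order-of-magnitude estimates only.

```python
g_area = 2.5e-7                      # S/cm^2, mid-range of the value quoted in the text
for area_cm2 in (9e-4, 4e-2):        # smallest and largest contact pads
    G = g_area * area_cm2            # zero-bias conductance (S)
    print(f"A = {area_cm2:g} cm^2: R ~ {1/G:.1e} Ohm, I(50 mV) ~ {G*0.05*1e12:.0f} pA")
```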
Taking into account the difference of work functions between n-Si and Al, considering the level of doping in the Si substrate (resistivity ~ 0.1 Ω.cm), there will be an accumulation layer in the Si at V > -0.1 V. [START_REF] Sze | Physics of Semiconductor Devices[END_REF] From capacitance-voltage (C-V) and conductance-frequency (G-f) measurements (not shown here), we confirmed this threshold value (± 0.1 V). As a consequence, for positive bias, we can neglect any large band bending in Si (no significant voltage drop in Si). The J-V characteristics are then calculated with the Tsu-Esaki formula [START_REF] Tsu | [END_REF] that can be recovered from the tunnelling Hamiltonian. [START_REF] Mahan | Many-Particle Physics[END_REF] Assuming the monolayer to be in between two reservoirs of free quasi-electrons and the system to be invariant with respect to translation in the transverse directions (parallel to the electrode plates) we get ⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎝ ⎛ + + = - - - ∞ + ∫ ) ( ) ( 0 3 2 1 1 ln ) ( 2 ) ( E eV E B e e E T dE emk V J µ β µ β π θ (1) where e is the electron charge, m the effective mass of the charge carriers within the barrier, k B the Boltzmann constant, ħ the reduced Planck constant, µ the Fermi level and β = 1/ k B Θ (Θ the temperature in K). T(E) is the transfer coefficient for quasi-electrons flowing through the tunnel barrier with longitudinal energy E. The total energy, E T , of quasi-electrons is decomposed into a longitudinal and a transverse component, E T =E+E t ; E t was integrated out in Eq. ( 1). The transfer coefficient is calculated for a given barrier height, Ф, and thickness, d, and shows two distinct parts: T(E)=T 1 (E)+T 2 (E). T 1 (E) is the main contribution to T(E) that describes transmission through a defect-free barrier. T 2 (E) contains perturbative corrections due to assisted tunnelling mechanisms induced by impurities located at or near the interfaces. The density of defects is assumed to be sufficiently low to consider the defects as independent from each other, each impurity at position i r interacting with the incoming electrons via a strongly localized potential at energy U i , ) ( i i r r U - δ . The value of U i is random. We write ∑ = = imp N i i U E T E T 1 2 2 ) , ( ) ( , with N imp being the number of impurities and T 2 (E,U i ) the part of the transmission coefficient due to the impurity i. The two contributions of T(E) are calculated following the method of Appelbaum and Brinkman. [START_REF] Appelbaum | [END_REF] Using Eq. ( 1), we obtain a good agreement with experiments. The theoretical J-V characteristic for device C and B are shown in Fig. 1-a. The best fits are obtained with Ф = 4.7 eV, m = 0.614 m e (m e is the electron mass), 10 10 traps/cm² uniformly distributed in energy for device C and additional 10 13 traps/cm 2 for device B distributed according to a Gaussian peaked at 3 eV. The transfer coefficients T 2 (E,U i ) show pronounced quasi-resonances at energies depending on U i that explain the important increase of current. The thickness is kept fixed, d = 2 nm (measured by ellipsometry 8 ). III. NOISE BEHAVIOR The difference observed in the J-V curves are well correlated with specific behaviours observed in the low frequency noise. Figure 2 shows the low frequency current noise power spectrum S I for different bias voltages from 0.02 V to 0.9 V. All curves are almost parallel and follow a perfect γ f / 1 law with γ = 1 at low voltages, increasing up to 1.2 at 1 V. 
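Returning to Eq. (1), the defect-free contribution can be evaluated numerically with the fitted barrier parameters quoted above (Φ = 4.7 eV, m = 0.614 mₑ, d = 2 nm). The sketch below uses an exact rectangular-barrier transmission for T₁ and ignores both the trap-assisted term T₂ and the bias-induced tilt of the barrier; the Fermi level µ is a placeholder, so only orders of magnitude are meaningful.

```python
import numpy as np

q, hbar, kB = 1.602e-19, 1.0546e-34, 1.381e-23
m_e        = 9.109e-31
Phi, m, d  = 4.7 * q, 0.614 * m_e, 2e-9     # fitted barrier of the text
Theta, mu  = 300.0, 0.1 * q                 # temperature (K); mu is an assumed Fermi level
beta       = 1.0 / (kB * Theta)

def T1(E):
    """Exact transmission of a rectangular barrier (zero-bias shape), valid for E < Phi."""
    k  = np.sqrt(2 * m * E) / hbar
    ka = np.sqrt(2 * m * (Phi - E)) / hbar
    return 1.0 / (1.0 + ((k**2 + ka**2) ** 2 / (4 * k**2 * ka**2)) * np.sinh(ka * d) ** 2)

def J(V):
    """Tsu-Esaki current density of Eq. (1), defect-free part only (A/m^2)."""
    E  = np.linspace(1e-3 * q, mu + 20 * kB * Theta, 4000)
    dE = E[1] - E[0]
    supply = np.logaddexp(0.0, beta * (mu - E)) - np.logaddexp(0.0, beta * (mu - E - q * V))
    pref = q * m * kB * Theta / (2 * np.pi ** 2 * hbar ** 3)
    return pref * np.sum(T1(E) * supply) * dE

for V in (0.1, 0.5, 1.0):
    print(f"V = {V:.1f} V : J ~ {J(V) * 1e-4:.2e} A/cm^2")
```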
We could not observe the shot noise because the high gains necessary for the amplification of the low currents induce a cut-off frequency of our current preamplifier lower than the frequency of the 1/ f -shot noise transition. At high currents, 1/ f γ noise was observed up to 10 kHz. The low frequency 1/ f current noise usually scales as I 2 , where I is the dc tunnel current, 14 as proposed for example by the standard phenomenological equation of Hooge 15 S I =α H I 2 /N c f where N c is the number of free carriers in the sample and α H is a dimensionless constant frequently found to be 2x10 -3 . This expression was used with relative success for homogeneous bulk metals 14,15 and more recently also for carbon nanotubes. 6 Similar relations were also derived for 1/ f noise in variable range hopping conduction. 16 In Fig. 3-a we present the normalized current noise power spectrum (S I /I 2 ) at 10 Hz (it is customary to compare noise spectra at 10 Hz) as a function of the bias V for devices B and C. Device C has a basic characteristic with the points following the dashed line asymptote. We use it as a reference for comparison with our other devices. We basically observed that S I /I² decreases with |V|. For most of our samples, in addition to the background normalized noise, we observe a local (Gaussian with V) increase of noise at V > 0.4 V. The amplitude of the local increase varies from device to device. This local increase of noise is correlated with the increase of current seen in the J-V curves. The J-V characteristics (Fig. 1-a) of device B diverge from those of device C at V > 0.4 V and this is consistent with the local increase of noise observed in Fig. 3. The observed excess noise bump is likely attributed to this Gaussian distribution of traps centred at 3 eV responsible for the current increase. Although the microscopic mechanisms associated with conductance fluctuations are not clearly identified, it is believed that the underlying mechanism involves the trapping of charge carriers in localized states. 17 . The nature and origin of these traps is however not known. We can hypothesis that the low density of traps uniformly distributed in energy may be due to Si-alkyl interface defects or traps in the monolayer, while the high density, peaked in energy, may be due to metal-induced gap states (MIGS) 18 or residual aluminum oxide at the metal-alkyl interface. The difference in the noise behaviours of samples B and C simply results from inhomogeneities of the metal deposition, i.e. of the chemical reactivity between the metal and the monolayer, or is due to the formation of a residual aluminum oxide due to the presence of residual oxygen in the evaporation chamber. More 1/f noise experiments on samples with various physical and chemical natures of the interfaces are in progress to figure out how the noise behaviour depends on specific conditions such as the sample geometry, the metal or monolayer quality, the method used for the metal deposition and so forth. IV. TUNNEL CURRENT NOISE MODEL To model the tunnel current noise in the monolayers, we assume that some of the impurities may trap charge carriers. Since we do not know the microscopic details of the trapping mechanisms and the exact nature of these defects, we use a qualitative description that associates to each of them an effective Two-Level Tunnelling Systems (TLTS) characterized by an asymmetric double well potential with the two minima separated in energy by 2 i ε . 
We denote as i ∆ the term allowing tunneling from one well to the other, and get, after diagonalization, two levels that are separated in energy by 2 2 i i i E ∆ + = ε . Since we are interested in low frequency noise, we focus on defects with very long trapping times i.e. defects for which i i ε << ∆ . The lower state (with energy - i E ) corresponds to an empty trap, the upper state (with energy + i E ) to a charged one. The relaxation rate from the upper to the lower state is determined by the coupling with the phonons and/or with the quasi-electrons giving θ τ B i i i k E E 2 coth 2 1 ∆ ∝ - and θ τ B i i i k E E 2 coth 2 1 ∆ ∝ -, respectively. In all cases, the time scale of the relaxation, τ, is very long compared to the duration of a scattering event. This allows us to consider the TLTS with a definite value at any instant of time. We then consider the following spectral density of noise for each TLTS 19 θ τ ϖ τ B i i I k E I I f S 2 Cosh 1 ) ( ) ( 2 2 2 2 - + - + - = (2) where f π ϖ 2 induced by the trapped quasi-electron that produces a shift in the applied bias, V δ . We write ⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎝ ⎛ + + ∂ ∂ + + ≅ - - - - ∞ + - + - ∫ ) ( ) ( 2 0 3 2 1 1 ln ) , ( 2 ) ( ) ( E V eV E i E i i B e e E U U E T dE emk V V I V I i δ µ β µ β π θ δ A ( 3 ) where A is the junction (metal electrode) area. The first term in the right hand side is due to the fluctuating applied bias, the second to the change in the impurity energy. Since T 2 (E) is already a perturbation, the second contribution is in general negligible but becomes important to explain the excess noise. We focus first on the background noise and therefore we keep only the first term of Eq. ( 3). We assume for simplicity that all the charged impurities give the same shift of bias Α = TJ C e V / δ , where TJ C is the capacitance of the tunnel junction per unit surface. Capacitance-voltage measurements (not shown) indicate that TJ C is constant for positive bias. By using usual approximations regarding the distribution in relaxation times, τ, and energies, E i , [START_REF] Kogan | Electronic noise and fluctuations in solids[END_REF] we get f C e V I N E S TJ imp I 1 1 2 2 2 * * ⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎝ ⎛ ∂ ∂ Α ∝ - . (4) We assume that the distribution function of ε i and ∆ i , P(ε i ,∆ i ) is uniform to get the 1/ f dependence. In this last expression, the derivative of the current is evaluated for the lower impurity state, * imp N is the impurity density per unit energy and surface area. We have The quasi resonances of T 2 (E) are at the origin of the local increase. The Gaussian distribution selects defects for which T 2 (E,U i ) shows quasi resonance in the appropriate range of energy. These traps may be associated to a non-uniform contribution to the distribution function P(ε i ,∆ i ) that would break the 1/ f dependence of S I above certain bias. This is what is observed in Fig. 2, with γ changing from 1 to 1.2. max * E E = , the maximum of i E , if θ B k E << V. CONCLUSION In summary, we have reported the study of low frequency (1/ f γ ) current noise in molecular junctions. We have correlated the small dispersion observed in dc J-V characteristics and the local increase of normalized noise at certain biases (mainly at V > 0.4 V). A theoretical model qualitatively explains this effect as due to the presence of an energy-localized distribution of traps. The model predicts that the power spectrum of the background current noise is proportional to 2 ) / ( V I ∂ ∂ as observed in our experiments. 
[START_REF] Alers | A similar behavior has sometimes been observed in SiO2 tunnel devices[END_REF] We also show that the power spectrum of the current noise should be normalized as S_I/I^1.7. The background noise is associated with a low density of traps uniformly distributed in energy that may be due to Si-alkyl interface defects or traps in the monolayer. The local increase of noise for bias V > 0.4 V is ascribed to a high density of traps, peaked in energy, probably induced by the metal deposition on the monolayer.

I_i^- (I_i^+) is the tunnel current for the empty (charged) impurity state. In this equation, we consider the average of (I^- - I^+) over the TLTSs having similar ε_i and ∆_i. The difference between the two levels of current has two different origins. The first one is the change in energy of the impurity level that directly affects … ; the second is the change in the charge density at the interfaces of the molecular junction … Fig. 3 decreases with V. The appropriate normalization factor to obtain a flat background …

Fig. 1: (a) Experimental J-V curves at room temperature for n-Si/C18H37/Al junctions. The contact areas are 0.36 mm^2 for device A and 1 mm^2 for devices B and C. The voltage V is …
Fig. 2: Low frequency (1/f^γ) power spectrum current noise for device C. Although we …
Fig. 3: (A) Normalized power spectrum current noise S_I/I^2 as a function of bias V for …
Fig. 4: S_I - (∂I/∂V) curve for device C on a log-log scale. The dashed line represents the result of Eq. (4) with a uniform defect distribution (dashed line), with adding a Gaussian energy-localized distribution of defects (thin solid line), and keeping the two terms of Eq. (3) with E* = 5eδV (bold solid line). An ad-hoc multiplicative factor has been applied to the theoretical results.

ACKNOWLEDGEMENTS We thank David Cahen for many valuable discussions. N.C. and S.P. acknowledge support from the "ACI nanosciences" program and IRCICA. We thank Hiroshi Inokawa, Frederic Martinez for helpful comments.
16,174
[ "742625", "738520", "746141" ]
[ "1296", "1296", "98491", "1296", "1296" ]
00175653
en
[ "phys" ]
2024/03/05 22:32:10
2007
https://hal.science/hal-00175653/file/article_fermion.pdf
Xavier Baillard Mathilde Hugbart Rodolphe Le Targat Philip G Westergaard Arnaud Lecallier Frédéric Chapelet Michel Abgrall Giovanni D Rovera Philippe Laurent Peter Rosenbusch Mathilde Fouché Sébastien Bize Giorgio Santarelli André Clairon Pierre Lemonde email: pierre.lemonde@obspm.fr Gesine Grosche Burghard Lipphardt Harald Schnatz An Optical Lattice Clock with Spin-polarized 87Sr Atoms come Introduction The possibility to build high accuracy optical clocks with 87 Sr atoms confined in an optical lattice is now well established. Since this idea was published [2], experiments rapidly proved the possibility to obtain narrower and narrower resonances with atoms in the Lamb-Dicke regime [3,4,5]. The narrowest observed resonances now have a width in the Hz range [6] and the corresponding potential fractional frequency instabilities are better than 10 -16 over 1 s of averaging time. On the other hand, systematic effects were also shown to be highly controllable. It was theoretically demonstrated that the residual effects of the atomic motion could be reduced down to the 10 -18 level for a lattice depth as small as 10 E r [7], with E r the recoil energy associated with the absorption or emission of a lattice photon. Higher order frequency shifts due to the trapping light were then also shown to be controllable at that level [5] 1 . Altogether, the accuracy of the frequency measurement of the 1 S 0 -3 P 0 clock transition of Sr has steadily improved by four orders of magnitude since its first direct measurement in 2003 [8]. Three independent measurements performed in Tokyo university [9], JILA [4] and SYRTE [10] were reported with a fractional uncertainty of 10 -14 giving excellent agreement. Recently, the JILA group improved their uncertainty down to 2.5 × 10 -15 [1], a value that is comparable to the frequency difference between the various primary standards throughout the world [START_REF] Wolf | Comparing high accuracy frequency standards via TAI[END_REF]. We report here a new and independent measurement of this clock transition with an accuracy of 2.6 × 10 -15 . The major modification as compared to our previous evaluation is an improved control of the Zeeman effect. By applying a bias field of typically 0.1 mT and pumping atoms into extreme Zeeman states (we alternate measurements with m F = +9/2 and m F = -9/2) we cancel the first order Zeeman effect while getting a real time measurement of the actual magnetic field seen by the atoms [9]. The measured frequency of the 1 S 0 → 3 P 0 clock transition of 87 Sr is 429 228 004 229 873.6 (1.1) Hz. This value differs from the one of Ref. [1] by 0.4 Hz only. 2 Experimental setup Atom manipulation The apparatus is derived from the one described in Ref. [10]. The clock is operated sequentially, with a typical cycle duration of 400 ms. We use a dipole trap formed by a vertical standing wave at 813.428 nm inside an enhancement Fabry-Pérot cavity. The depth of the wells of the resulting lattice is typically 100 µK. A beam of 87 Sr atoms from an oven at a temperature of about 450 • C is sent through a Zeeman slower and then loaded into a magneto-optical trap (MOT) based on the 1 S 0 → 1 P 1 transition at 461 nm. The MOT temperature is about 1 mK. The dipole trap laser is aligned in order to cross the center of the MOT. Two additional laser beams tuned to the 1 S 0 → 3 P 1 and 3 P 1 → 3 S 1 transitions, at 689 nm and 688 nm respectively, are superimposed on the trapping laser. 
The atoms that cross these beams are therefore drained into the metastable states 3 P 0 and 3 P 2 at the center of the trap, and those with a small enough kinetic energy remain confined in the potential wells forming the lattice. The MOT and drain lasers are then switched off and atoms are optically pumped back to the ground state, where they are further cooled using the narrow 1 S 0 → 3 P 1 transition. About 95 % of the atoms are cooled down to the ground state of the trap (see Fig. 1), corresponding to a temperature of 3 µK. They are optically pumped into either the ( 1 S 0 , m F = 9/2) or ( 1 S 0 , m F = -9/2) Zeeman sub-state. This is achieved by means of a bias magnetic field of about = 10 -4 T and a circularly polarized laser (σ + or σ -depending on the desired m F state) tuned to the 1 S 0 (F = 9/2) → 3 P 1 (F = 9/2) transition. This transition is power-broadened to a few hundreds of kHz. The magnetic field can then be switched to a different value (up to a fraction of a mT) for the clock transition interrogation. We use a π-polarized laser at 698 nm to probe the 1 S 0 → 3 P 0 transition with adjustable frequency to match the desired (m F = ±9/2 → m F = ±9/2) transition. Finally the populations of the two states 1 S 0 and 3 P 0 are measured by laser induced fluorescence using two blue pulses at 461 nm separated by a repumping pulse. Measurement scheme The spectroscopy of the clock transition is performed with an extended-cavity diode laser at 698 nm which is pre-stabilized with an interference filter [START_REF] Baillard | [END_REF]. The laser is stabilized to an ultrastable cavity of finesse F = 25000, and its frequency is constantly measured by means of a femtosecond fiber laser [13,14] referenced to the three atomic fountain clocks FO1, FO2 and FOM. The femtosecond fiber laser is described in paragraph 2.3. The fountain ensemble used in this measurement is described extensively in [15,16]. The three atomic fountains FO1, FO2 and FOM are used as primary frequency standards measuring the frequency of the same ultra-low noise reference derived from a cryogenic sapphire oscillator. Practically, the reference signal at 11.98 GHz is divided to generate a 100 MHz reference which is disseminated to FO1 and FOM. Being located in the neighboring lab, FO2 benefits from using the 11.98 GHz directly. Another 1 GHz reference is also generated from the 11.98 GHz signal and sent through a fiber link as a reference for the fiber femtosecond optical frequency comb. The 100 MHz signal is also compared to the 100 MHz output of a H-maser. A slow phase-locked loop (time constant of 1000 s) is implemented to ensure coherence between the reference signals and the H-maser to avoid long term frequency drift of the reference signals. During the 15 days of measurement reported here, the three fountains are operated continuously as primary frequency standards measuring the same reference oscillator. The overall frequency instability for this measurement is 3.5 × 10 -14 at 1 s for FO2, 4.2 × 10 -14 at 1 s for FO1 and 7.2 × 10 -14 at 1 s for FOM. The accuracy of these clocks are 4 × 10 -16 for FO1 and FO2 and 1.2 × 10 -15 for FOM. The fractional frequency differences between the fountain clocks are all consistent with zero within the combined 1-sigma error bar, which implies consistency to better than 10 -15 . The link between the cavity stabilized laser and the frequency comb is a fiber link of 50 m length with a phase noise cancellation system. 
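The roughly 95 % ground-state occupancy quoted above for atoms at about 3 µK can be rationalised with textbook lattice relations (recoil energy, longitudinal trap frequency ω_z = 2√(U₀E_r)/ħ for a sin² potential, and a Boltzmann occupancy of the vibrational levels); these relations, and the exact depth assumed, are assumptions of the sketch rather than statements made in the text.

```python
import numpy as np

hbar, kB, amu = 1.0546e-34, 1.381e-23, 1.6605e-27
m   = 86.9 * amu                     # 87Sr mass
k   = 2 * np.pi / 813.428e-9         # lattice wavevector
E_r = (hbar * k) ** 2 / (2 * m)      # recoil energy of a lattice photon
U0  = 100e-6 * kB                    # typical trap depth quoted in the text (~100 uK)
T   = 3e-6                           # atomic temperature quoted in the text (~3 uK)

omega_z = 2 * np.sqrt(U0 * E_r) / hbar            # longitudinal oscillation frequency
P0      = 1 - np.exp(-hbar * omega_z / (kB * T))  # Boltzmann ground-state fraction (1D)
print(f"E_r/kB = {E_r/kB*1e6:.2f} uK, nu_z ~ {omega_z/(2*np.pi)/1e3:.0f} kHz")
print(f"ground-state fraction ~ {P0*100:.0f}% (the text quotes about 95%)")
```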
The probe laser beam is sent to the atoms after passing through an acousto-optic modulator (AOM) driven at a computer controlled frequency. Each transition is measured for 32 cycles before switching to the transition involving opposite m F states. Two digital servo-loops to both atomic resonances therefore run in parallel with interlaced measurements. For each servo-loop we alternately probe both sides of the resonance peak. The difference between two successive transition probability measurements constitutes the error signal used to servo-control the AOM frequency to the atomic transition. In addition, we interlace sets of 64 cycles involving two different trapping depths. The whole sequence is repeated for up to one hour. This operating mode allows the independent evaluation of three clock parameters. The difference between the frequency measurements made for each Zeeman sub-state can be used to accurately determine the magnetic field. As we switch to the other resonance every 32 cycles, this gives a real-time calibration of the magnetic-field averaged over 64 cycles. The global average of the measurement is the value of the clock frequency and is independent on the first order Zeeman effect as the two probed transitions are symmetrically shifted. Finally, the two frequencies corresponding to two different dipole trap depths are used for a real-time monitoring of the possible residual light shift of the clock transition by the optical lattice. The frequency stability of the Sr lattice clock-FO2 fountain comparison is shown in Fig. 2. The Allan deviation is 6 × 10 -14 τ -1/2 so that the statistical uncertainty after one hour of averaging time is 10 -15 , corresponding approximately to 0.5 Hz. Fig. 2. Allan standard deviation of the frequency measurements for a magnetic field B = 0.87 G and a time of interrogation of 20 ms. The line is a fit to the data using a τ -1/2 law. The corresponding stability at 1 s is 6 × 10 -14 . The frequency comb For the absolute frequency measurement of the Sr transition we have used a fibre-based optical frequency comb which is based on a FC1500 optical frequency synthesizer supplied by Menlo Systems. The laser source for the FC1500 comb is a passively mode-locked femtosecond fibre laser which operates at a centre wavelength of approximately 1550 nm and has a repetition rate of 100 MHz. The repetition rate can be tuned over approximately 400 kHz, by means of an end mirror mounted on a translation stage controlled by a stepper motor and a piezoelectric transducer. The output power from the mode-locked laser is split and fed to three erbium-doped fiber amplifiers (EDFAs). These are used to generate three phase-coherent optical frequency combs whose spectral properties can be independently optimized to perform different functions. The output from the first EDFA is broadened using a nonlinear fibre to span the wavelength range from approximately 1000 nm to 2100 nm. This provides the octave-spanning spectrum required for detection of the carrier-envelope offset frequency f 0 using the self-referencing technique [17,18]. With proper adjustment of the polarisation controllers a beat signal with a signal to noise ratio SNR= 40 dB in a resolution bandwidth of 100 kHz is achieved. The electronics for the stabilization of the carrier offset frequency comprises a photo detector, a tracking oscillator, and a digital phase-locked loop (PLL) with adjustable gain and bandwidth. The offset frequency is stabilized by feedback to the pump laser diode current. 
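Since the optical frequency is carried almost entirely by a large integer multiple of f_rep, a two-line estimate shows both the comb mode number involved and the lever arm on f_rep; how the few-MHz remainder is shared between f₀ and the beat note (and their signs) depends on where the doubling stage sits and is left open here.

```python
nu_Sr = 429_228_004_229_873.6   # Hz, clock transition frequency reported in this paper
f_rep = 100e6                    # Hz, nominal repetition rate quoted in the text

N         = round(nu_Sr / f_rep)        # comb mode number nearest the Sr clock laser
remainder = nu_Sr - N * f_rep           # absorbed by f_0 and the beat note (signs setup-dependent)
print(f"N ~ {N}, remainder = {remainder/1e6:.3f} MHz")
print(f"1 mHz of error on f_rep -> {N*1e-3:.1f} Hz at the optical frequency")
```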
The critical frequency to be measured is the pulse repetition frequency f rep since the optical frequency is measured as a very high multiple of this pulse repetition frequency. To overcome noise limitations due to the locking electronics and to enhance the resolution of the counting system, we detect f rep at a high harmonic using a fast InGaAs photodiode with a bandwidth of 25 GHz. Locking of the repetition frequency is provided by an analogue phase-locked loop comparing the 90 th harmonic of f rep with a microwave reference and controlling the cavity length. The subsequent use of a harmonic tracking filter further enhances the short term resolution of our counting system. For this purpose, the 9 GHz beat signal is down-converted with a low noise 9 GHz signal synthesized from the microwave frequency reference (CSO / hydrogen maser referenced to a Cs-fountain clock). The difference frequency is again multiplied by 128, reducing frequency counter digiti-zation errors to below the level of the noise of the microwave reference. Thereby, the frequency at which f rep is effectively measured is 1.15 THz. A second EDFA generates high power radiation which is frequency-doubled using a PPLN crystal, generating a narrow-band frequency comb around 780 nm. This is subsequently broadened in a nonlinear fiber to generate a comb spanning the range 600-900 nm. Light of the spectroscopy laser at 698 nm is superimposed on the output of the frequency comb using a beam splitter. To assure proper mode matching of the beams and proper polarization adjustment one output of the beam splitter is launched into a single mode fiber and detected with a DC photodetector. The other output is dispersed by means of a diffraction grating. A subsequent pinhole placed in front of the photodetector then selects a narrow range of the spectrum at 698 nm and improves the signal to noise ratio of the observed heterodyne beat. For the heterodyne beat signal with the Sr clock laser a SNR of 30-35 dB in a bandwidth of 100 kHz was achieved. Again, a tracking oscillator is used for optimal filtering and conditioning of the heterodyne signal. All beat frequencies and relevant AOMfrequencies were counted using totalizing counters with no dead-time; the counters correspond to Π-estimators [19] for the calculation of the standard Allan variance. First order Zeeman effect In presence of a magnetic field, both clock levels, which have a total momentum F = 9/2, are split into 10 Zeeman sub-states. The linear shift ∆ Z of a sub-state due to a magnetic field B is ∆ Z = m F g F µ B B/h, (1) where m F is the Zeeman sub-state (here 9/2 or -9/2), g F the Landé factor of the considered state, µ B the Bohr magneton, and h the Planck constant. Using the differential g-factor between 3 P 0 and 1 S 0 reported in Ref. [20]: ∆g = 7.77(3) × 10 -5 , we can determine the magnetic field by measuring two symmetrical resonances. Fig. 3 shows the typical resonances observed with a magnetic field B = 87 µT for both m F = ±9/2 sub-states. The linewidth is of the order of 30 Hz, essentially limited by Fourier broadening hence facilitating the lock of the frequency to each resonance. This linewidth corresponds to an atomic quality factor Q = 1.4 × 10 13 . At this magnetic field, two successive π-transitions are separated by 96 Hz, which is high enough to entirely resolve the Zeeman sub-structure with that type of resonance and to limit possible line-pulling effects to below 10 -15 (see section 4.3). 
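With Eq. (1), the splitting between the two stretched π transitions is Δν = 9 Δg µ_B B / h, which is what turns the interlaced measurements into a real-time magnetometer. The sketch below uses the Δg of Ref. [20] quoted above; the 851 Hz "measured" splitting in the last line is only an illustrative input.

```python
mu_B, h  = 9.274e-24, 6.62607e-34
delta_g  = 7.77e-5        # differential g-factor of Ref. [20], as quoted above

def stretched_splitting(B):
    """Splitting between the mF = +9/2 and mF = -9/2 pi transitions (Hz), from Eq. (1)."""
    return 9 * delta_g * mu_B * B / h

def field_from_splitting(dnu):
    """Bias field (T) inferred from a measured splitting of the two stretched lines."""
    return dnu * h / (9 * delta_g * mu_B)

B = 87e-6
print(f"adjacent pi lines at 87 uT : {stretched_splitting(B)/9:.1f} Hz apart")
print(f"stretched-pair splitting   : {stretched_splitting(B):.0f} Hz")
print(f"B for an 851 Hz splitting  : {field_from_splitting(851.0)*1e6:.1f} uT")
```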
The magnetic field used for pumping and detecting the atoms is provided by two coils in Helmoltz configuration to produce a homogeneous field at the center of the trap. They are fed by a fast computer-controlled power supply to reach the desired value in a few ms. This setup requires to accurately characterize the stability of the magnetic field, as the residual magnetic field fluctuations are a possible issue for the clock accuracy and stability. The Zeeman effect can provide a precise measurement of this field and its calibration when we measure the clock transition for two symmetrical transitions. When probing the two transitions for the m F = ±9/2 sub-states, the difference ∆ν between the two frequencies can be related to the magnetic field using Eq. 1: ∆ν = 9∆gµ B B/h. To evaluate the stability of the magnetic field, we chose a particular set of parameters (B = 87 µT and a modulation depth of the numerical servo-loop of 10 Hz) and repeated a large number of times the corresponding time sequence as described in section 2.2. The measured magnetic field is averaged over 64 cycles. We then concatenated all the averaged data and calculated the Allan standard deviation to determine the long term stability of the magnetic field. The result is plotted on Fig. 4. The deviation is 10 -2 in fractional units at 32 s, and going down following a τ -1/2 law for longer times. The deviation for long times is below 10 -3 . This measurement is totally dominated by the frequency noise of the Sr clock and no fluctuations of the field itself are visible at the present level of resolution. For a magnetic field of 87 µT, this represents a control of the magnetic field at the sub-µT level over long timescales. Frequency accuracy 4.1 Second order Zeeman effect The clock has been operated with different values of the bias magnetic field up to 0.6 mT. As explained before, our method of interrogation makes the measurements independent on the first order Zeeman effect. On the other hand, both resonances are shifted by the same quadratic Zeeman shift which has to be evaluated. From the calibration of the magnetic field, we can evaluate the dependence of the transition frequency as a function of this field. The results are plotted on Fig. 5. The line plotted on the graph represents the expected quadratic dependence of -23.3 Hz/mT 2 [21] where we adjusted only the frequency offset to fit the data. The statistical uncertainty on this fit is 0.2 Hz, and there is no indication for a residual first order effect to within less than 1 Hz at 0.6 mT. At B = 87 µT, where most of the measurements were done, the correction due to the quadratic Zeeman effect is 0.1 Hz only. Conversely, an experimental value for the quadratic Zeeman effect coefficient can be derived from the data plotted in Fig. 5 with a 7% uncertainty. We find -24.9(1.7) Hz/mT 2 , which is in agreement with theory. Residual lattice light shift The clock frequency as a function of the trapping depth is plotted on Fig. 6. Measurements have been done with depths ranging from 50 to 500 E r , corresponding to an individual light shift of both clock levels levels up to 1.8 MHz. Over this range, the scatter of points is less than 2 Hz and the statistical uncertainty of each point lower than 1 Hz. The control of this effect has been evaluated by fitting the data with a line. The slope represents a shift of 0.5(5) Hz at 500 E r . The differential shift between both clock states is therefore controlled at a level of 3 × 10 -7 . 
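A quick check of the numbers quoted for the lattice light shift: the 0.5(5) Hz slope over 500 E_r, the 1.8 MHz single-level shift, and the projection to a 10 E_r operating depth discussed in the next paragraph. This is pure arithmetic on values taken from the text.

```python
nu_clock     = 429_228_004_229_873.6    # Hz
slope_unc_Hz = 0.5                       # Hz, uncertainty on the clock shift at 500 E_r
shift_1level = 1.8e6                     # Hz, light shift of each clock level at 500 E_r

fractional_control = slope_unc_Hz / shift_1level          # control of the differential shift
residual_10Er      = slope_unc_Hz * (10 / 500) / nu_clock # projected residual at 10 E_r
print(f"differential shift controlled to {fractional_control:.1e} of the single-level shift")
print(f"projected residual at 10 E_r: {residual_10Er:.0e} (fractional)")
```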
In ultimate operating conditions of the clock, a trapping depth of 10 E r is theoretically sufficient to cancel the motional effects down to below 10 -17 [7]. The light shift corresponding to this depth is 36 kHz for both level, or 8 × 10 -11 in fractional units. The kind of control demonstrated here would correspond, in these ultimate conditions, to a residual light shift below 2 × 10 -17 . Uncertainty budget Other systematic effects have been evaluated and included in the accuracy budget listed in Table 1. The line pulling by neighbouring transitions has been carefully evaluated. Two types of transitions should be considered: transverse motional sidebands and transitions between the various Zeeman states of the atoms. Transverse motional sidebands can be excited by the transverse k-content of the probe laser. For a lattice depth of 100 E r , the transverse oscillation frequency is about 150 Hz. Both diffraction and misalignement are below 1 mrad here so that the transverse dynamics is deeply in the Lamb-Dicke regime and the height of transverse sidebands are expected to be at most 5×10 -3 of the carrier (experimentally, they do not emerge from the noise of the measurements). The corresponding line pulling is therefore below 0.4 Hz. This is confirmed by the absence of pathological dependence of the clock frequency as a function of the lattice depth (Fig. 6). Unwanted Zeeman transitions result from the imperfection of the optical pumping process and of the polarization of the probe laser. In standard configuration the only visible stray resonance is the m F = ±7/2 -m F = ±7/2 transition, with a height that is about half of the one of the m F = ±9/2 -m F = ±9/2 resonance. It is difficult to set a realistic theoretical upper limit on this effect since we have no direct access to the level of coherence between the various m F states, nor on the degree of polarization of the probe laser. Experimentally however, several parameters can be varied to test the effect. The magnetic field dependence shown in Fig. 5 shows no deviation from the expected law to within the error bars. On the other hand, measurements performed with various depths of the servo-loop modulation also show no differences to within 0.5 Hz. Finally, we operated the clock with a probe laser polarization orthogonal to the bias field and using the m F = ±9/2 -m F = ±7/2 transitions as clock resonances. The clock frequency in this configuration is found 0.2(5) Hz away from the frequency in the standard configuration. These measurements also test for a possible line pulling from higher order stray resonances involving both a change of the transverse motion and of the internal Zeeman state which can Incidently, the combination of the measurements using σ and π transitions allows the derivation of the differential Landé factor between both clock states [20]. We find g( 3 P 0 ) -g( 1 S 0 ) = 7.90(7) × 10 -5 , a value that differs from the one reported in Ref. [20] by twice the combined 1-sigma uncertainty. The light shift due to the probe laser is essentially due to the off-resonant coupling of the 3 P 0 state with the 3 S 1 state. The typical light intensity used for the clock evaluation was of a few mW/cm 2 . By varying this intensity by a factor up to 2, no visible effect has been observed to within the uncertainty. A more precise evaluation was carried out using the bosonic isotope 88 Sr [22]. 
The measured light shift for an intensity of 6 W/cm 2 was in this case of -78 [START_REF] Wolf | Comparing high accuracy frequency standards via TAI[END_REF] Hz. The corresponding effect for our current setup, where the probe power is 3 orders of magnitude smaller, is about 0.1 Hz with an uncertainty in the 10 -2 Hz range. The Blackbody radiation shift is derived from temperature measurements of the vacuum chamber using two Pt resistors placed on opposite sides of the apparatus and using the accurate theoretical calculation reported in Ref. [23]. The Blackbody radiation shift in our operating conditions is 2.39(10) Hz. Finally a 1 Hz uncertainty is attributed to an effect that has not been clearly identified. After having varied all the parameters necessary for estimating the systematic effects, we decided to check the overall consistency by performing three series of measurements with fixed parameters. The two first ones were performed with a bias field of 87 µT and servo loop modulation depths of 7 and 10 Hz respectively. The third one with a larger field of 140 µT and a modulation depth of 7 Hz. The results of series 2 and 3 are shown in Fig. 7, where the error bars include the statistical uncertainty of each measurement only. The scatter of points of series 3 is clearly incompatible with the individual error bars (the reduced χ 2 of this distribution is 4.3). In addition, its average value is 1.5 Hz away. Having not clearly identified the reason for this behaviour (one possibility could be a problem in the injection locking of one of the slave lasers at 698 nm), we decided to keep this series of data. We also cannot ensure that the effect is not present (though at a smaller level) in the other measurements and decided to attribute an uncertainty of 1 Hz to this effect. Taking into account these systematic effects, the averaged clock frequency is determined to be ν clock = 429 228 004 229 873.6(1.1) Hz. The global uncertainty, 2.6 × 10 -15 in fractional units, corresponds to the quadratic sum of all the uncertainties of the systematic effects listed in Table 1. The statistical uncertainty is at the level of 0.1 Hz. Conclusion We have reported here a new measurement of the frequency of the 1 S 0 → 3 P 0 transition of 87 Sr with an uncertainty of 1.1 Hz or 2.6 × 10 -15 in fractional units. The result is in excellent agreement with the values reported by the JILA group with a similar uncertainty [1] and by the Tokyo group with a 4 Hz error bar [9]. Obtained in independent experiments with significant differences in their implementation, this multiple redundancy strengthens the obtained results and further confirms the possibility to build high accuracy clocks with cold atoms confined in an optical lattice. It also further assesses this transition as a possible candidate for a future redefinition of the second. SYRTE is Unité Associée au CNRS (UMR 8630) and a member of IFRAF. This work is supported by CNES and DGA. PTB acknowledges financial support from the German Science foundation through SFB 407. Fig. 1 . 1 Fig.1. Spectrum at high power of the carrier and the first two longitudinal sidebands of the trapped atoms. The ratio between both sidebands is related to the population of the ground state of the trap. 95 % of the atoms are in the lowest vibrational state of the lattice wells. Fig. 3 . 3 Fig. 3. Experimental resonances observed for the mF = 9/2 → mF = 9/2 (left) and mF = -9/2 → mF = -9/2 (right) transitions for a magnetic field B = 87 µT and an interrogation time of 20 ms. 
The lines are gaussian fits to the data. The asymmetry between both resonances results from the imperfection of the optical pumping.
Fig. 4. Allan standard deviation of the magnetic field. The line is a τ^{-1/2} fit to the data.
Fig. 5. (a) Clock frequency as a function of the applied magnetic field. The line represents a fit of the experimental data by a quadratic law with one adjustable parameter: the frequency offset. The linear term was set to 0 and the quadratic term to its theoretical value. (b) Clock frequency after correction for the second order Zeeman effect. The line is the average of the data.
Fig. 6. Clock frequency as a function of the dipole trap depth in terms of recoil energies. On the upper scale is the corresponding light shift of the clock levels. The line is a linear fit to the data. The value of the light shift due to the trap at 500 E_r is only 0.5(0.5) Hz.
Fig. 7. Two series of measurements performed with different clock parameters (see text). The series plotted on the right hand side of the figure clearly exhibits a scatter of points that is incompatible with the individual statistical error bars of the measurements. Its reduced χ² is 4.3. The other series behaves normally and is shown for reference.
Table 1. Uncertainty budget.
Effect | Correction (Hz) | Uncertainty (Hz) | Fractional uncertainty (×10⁻¹⁵)
Zeeman | 0.1 | 0.1 | 0.2
Probe laser Stark shift | 0.1 | < 0.1 | < 0.1
Lattice AC Stark shift (100 E_r) | 0 | 0.2 | 0.4
Lattice 2nd order Stark shift (100 E_r) | 0 | 0.1 | 0.2
Line pulling (transverse sidebands) | 0 | 0.5 | 1.1
Cold collisions | 0 | 0.1 | 0.2
Blackbody radiation shift | 2.39 | 0.1 | 0.1
See text | 0 | 1 | 2.3
Fountain accuracy | 0 | 0.2 | 0.4
Total | 2.59 | 1.1 | 2.6
The first order shift can be made to vanish in this type of clocks at the so-called "magic wavelength".
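To make the bookkeeping behind Table 1 explicit, the short sketch below (Python) sums the corrections, combines the individual uncertainties in quadrature and converts the result into fractional units using the measured transition frequency. Entries quoted as "< 0.1" are conservatively taken at 0.1 Hz, so the quadrature total comes out marginally above the 1.1 Hz quoted in the table; the difference only reflects the rounding of the individual entries.

```python
import math

nu_clock = 429_228_004_229_873.6          # measured transition frequency (Hz)

# (effect, correction in Hz, 1-sigma uncertainty in Hz), from Table 1;
# '< 0.1' entries are conservatively counted as 0.1 Hz.
budget = [
    ("Zeeman",                              0.10, 0.1),
    ("Probe laser Stark shift",             0.10, 0.1),
    ("Lattice AC Stark shift (100 E_r)",    0.00, 0.2),
    ("Lattice 2nd order Stark shift",       0.00, 0.1),
    ("Line pulling (transverse sidebands)", 0.00, 0.5),
    ("Cold collisions",                     0.00, 0.1),
    ("Blackbody radiation shift",           2.39, 0.1),
    ("See text",                            0.00, 1.0),
    ("Fountain accuracy",                   0.00, 0.2),
]

total_correction = sum(c for _, c, _ in budget)
total_uncertainty = math.sqrt(sum(u**2 for _, _, u in budget))

print(f"total correction : {total_correction:.2f} Hz")     # 2.59 Hz, as in Table 1
print(f"total uncertainty: {total_uncertainty:.2f} Hz "
      f"({total_uncertainty / nu_clock:.1e} fractional)")  # ~1.2 Hz, ~2.7e-15
```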
26,424
[ "180181", "791511", "1241408", "739323", "829254" ]
[ "1365", "1365", "1365", "1365", "1365", "1365", "1365", "1365", "1365", "1365", "1365", "1365", "1365", "1365", "85845", "85845", "85845" ]
01383023
en
[ "math" ]
2024/03/05 22:32:10
2020
https://hal.science/hal-01383023v3/file/KS_main_30Mar2018.pdf
Shingo Kamimoto David Sauzin Iterated convolutions and endless Riemann surfaces Keywords: may come Introduction In this article, we deal with the following version of Écalle's definition of resurgence: Definition 1.1. A convergent power series φ ∈ C{ζ} is said to be endlessly continuable if, for every real L > 0, there exists a finite subset F L of C such that the holomorphic germ at 0 defined by φ can be analytically continued along every Lipschitz path γ : [0, 1] → C of length smaller than L such that γ(0) = 0 and γ (0, 1] ⊂ C \F L . We denote by R ⊂ C{ζ} the space of endlessly continuable functions. Definition 1.2. A formal series φ = ϕ_0 + ϕ_1 z^{-1} + ϕ_2 z^{-2} + • • • ∈ C[[z^{-1}]] is said to be resurgent if its formal Borel transform φ(ζ) = Σ_{j≥1} ϕ_j ζ^{j-1}/(j-1)! is an endlessly continuable function. In other words, the space of resurgent series is R := B -1 (C δ ⊕ R) ⊂ C[[z -1 ]], where B : C[[z -1 ]] → C δ ⊕ C[[ζ]] is the formal Borel transform, defined by B φ := ϕ 0 δ + φ(ζ) in the notation of Definition 1.2. We will also treat the more general case of functions which are "endlessly continuable w.r.t. bounded direction variation": we will define a space R dv containing R and, correspondingly, a space R dv containing R, but for the sake of simplicity, in this introduction, we stick to the simpler situation of Definitions 1.1 and 1.2. Note that the radius of convergence of an element of R may be 0. As for the elements of R, we will usually identify a convergent power series and the holomorphic germ that it defines at the origin of C, as well as the holomorphic function which is thus defined near 0. Holomorphic germs with meromorphic or algebraic analytic continuation are examples of endlessly continuable functions, but the functions in R can have a multiple-valued analytic continuation with a rich set of singularities. The convolution product is defined as the Borel image of multiplication and denoted by the symbol * : for φ, ψ ∈ C [[ζ]], φ * ψ := B(B -1 φ • B -1 ψ), and δ is the convolution unit (obtained from (C [[ζ]], * ) by adjunction of unit). As is well known, for convergent power series, convolution admits the integral representation (1.1) (φ * ψ)(ζ) = ∫_0^ζ φ(ζ_1) ψ(ζ -ζ_1) dζ_1 for ζ close enough to 0. Our aim is to study the analytic continuation of the convolution product of an arbitrary number of endlessly continuable functions, to check its endless continuability, and also to provide bounds, so as to be able to deal with nonlinear operations on resurgent series. A typical example of nonlinear operation is the substitution of one or several series without constant term φ1 , . . . , φr into a power series F (w 1 , . . . , w r ), defined as (1.2) F (φ1 , . . . , φr) := Σ_{k∈N^r} c_k φ1^{k_1} • • • φr^{k_r} for F = Σ_{k∈N^r} c_k w_1^{k_1} • • • w_r^{k_r} . One of our main results is Theorem 1.3. Let r ≥ 1 be an integer. Then, for any convergent power series F (w 1 , . . . , w r ) ∈ C{w 1 , . . . , w r } and for any resurgent series φ1 , . . . , φr without constant term, F ( φ1 , . . . , φr ) ∈ R. The proof of this result requires suitable bounds for the analytic continuation of the Borel transform of each term in the right-hand side of (1.2). Along the way, we will study the Riemann surfaces generated by endlessly continuable functions. We will also prove similar results for the larger spaces R dv and R dv . Resurgence theory was developed in the early 1980s, with [START_REF] Écalle | Les fonctions résurgentes[END_REF] and [START_REF] Écalle | Les fonctions résurgentes[END_REF], and has many mathematical applications in the study of holomorphic dynamical systems, analytic differential equations, WKB analysis, etc. (see the references e.g. in [START_REF]Nonlinear analysis with resurgent functions[END_REF]).
More recently, there has been a burst of activity on the use of resurgence in Theoretical Physics, in the context of matrix models, string theory, quantum field theory and also quantum mechanics-see e.g. [START_REF] Aniceto | Nonperturbative ambiguities and the reality of resurgent transseries[END_REF], [START_REF] Aniceto | The resurgence of instantons in string theory[END_REF], [START_REF] Argyres | The semi-classical expansion and resurgence in gauge theories: new perturbative, instanton, bion, and renormalon effects[END_REF], [START_REF] Cherman | Decoding perturbation theory using resurgence: Stokes phenomena, new saddle points and Lefschetz thimbles[END_REF], [START_REF] Couso-Santamaría | Finite N from resurgent large N[END_REF], [START_REF] Dunne | Resurgence and trans-series in quantum field theory: the CP N -1 model[END_REF], [START_REF] Dunne | Uniform WKB, multi-instantons, and resurgent trans-series[END_REF], [START_REF] Garay | Resurgent deformation quantisation[END_REF], [START_REF] Mariño | Lectures on non-perturbative effects in large N gauge theories, matrix models and strings[END_REF]. In almost all these applications, it is an important fact that the space of resurgent series be stable under nonlinear operations: such stability properties are useful, and at the same time they account for the occurrence of resurgent series in concrete problems. These stability properties were stated in a very general framework in [START_REF] Écalle | Les fonctions résurgentes[END_REF], but without detailed proofs, and the part of [START_REF] Candelpergher | Approche de la résurgence[END_REF] which tackles this issue contains obscurities and at least one mistake. It is thus our aim in this article to provide a rigorous treatment of this question, at least in the slightly narrower context of endless continuability. The definitions of resurgence that we use for R and R dv are indeed more restrictive than Écalle's most general definition [START_REF] Écalle | Les fonctions résurgentes[END_REF]. In fact, our definition of R dv is almost identical to the one used by Pham et al. in [START_REF] Candelpergher | Approche de la résurgence[END_REF], and our definition of R is essentially equivalent to the definition used in [START_REF] Deleabaere | Endless continuability and convolution product[END_REF], but the latter preprint has flaws which induced us to develop the results of the present paper. These versions of the definition of resurgence are sufficient for a large class of applications, which virtually contains all the aforementioned ones-see for instance [START_REF] Kamimoto | Resurgence of formal series solutions of nonlinear differential and difference equations[END_REF] for the details concerning the case of nonlinear systems of differential or difference equations. The advantage of the definitions based on endless continuability is that they allow for a description of the location of the singularities in the Borel plane by means of discrete filtered sets or discrete doubly filtered sets (defined in Sections 2.1 and 2.5); the notion of discrete (doubly) filtered set, adapted from [START_REF] Candelpergher | Approche de la résurgence[END_REF] and [START_REF] Deleabaere | Endless continuability and convolution product[END_REF], is flexible enough to allow for a control of the singularity structure of convolution products. 
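Since the convolution product is the Borel image of multiplication, both the integral representation (1.1) and the heuristic that singular points add under convolution can be checked on elementary examples. The numerical sketch below (Python with scipy) is only an illustration of these standard facts, not part of the paper's argument: it verifies (1.1) on monomials, where the Borel transform of z^{-k} is ζ^{k-1}/(k-1)!, and then convolves the Euler-type germ 1/(1+ζ), whose only singular point is -1, with itself, producing 2 log(1+ζ)/(2+ζ), which is singular at -1 and at the sum -2.

```python
import numpy as np
from scipy.integrate import quad

zeta = 0.8                                   # arbitrary test point near 0

# Borel transforms: B(z^-1) = 1, B(z^-2) = zeta, B(z^-3) = zeta^2/2.
# Check that the convolution (1.1) realizes the product z^-1 * z^-2 = z^-3.
conv, _ = quad(lambda t: 1.0 * (zeta - t), 0.0, zeta)
print(conv, zeta**2 / 2)                     # both equal ~0.32

# Euler-type germ: phi(zeta) = 1/(1+zeta), singular only at -1.
# Its self-convolution equals 2*log(1+zeta)/(2+zeta): singular at -1 AND -2,
# so the singular points of the factors have added up.
phi = lambda x: 1.0 / (1.0 + x)
selfconv, _ = quad(lambda t: phi(t) * phi(zeta - t), 0.0, zeta)
print(selfconv, 2 * np.log(1 + zeta) / (2 + zeta))   # both equal ~0.42
```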
A more restrictive definition is used in [START_REF]Nonlinear analysis with resurgent functions[END_REF] and [START_REF] Mitschi | Divergent Series, Summability and Resurgence[END_REF] (see also [START_REF] Écalle | Les fonctions résurgentes[END_REF]): Definition 1.4. Let Σ be a closed discrete subset of C. A convergent power series φ is said to be Σ-continuable if it can be analytically continued along any path which starts in its disc of convergence and stays in C \Σ. The space of Σ-continuable functions is denoted by RΣ . This is clearly a particular case of Definition 1.1: any Σ-continuable function is endlessly continuable (take F L = { ω ∈ Σ | |ω| ≤ L }). It is proved in [MS16] that, if Σ ′ and Σ ′′ are closed discrete subsets of C, and if also Σ := {ω ′ + ω ′′ | ω ′ ∈ Σ ′ , ω ′′ ∈ Σ ′′ } is closed and discrete, then φ ∈ RΣ ′ , ψ ∈ RΣ ′′ ⇒ φ * ψ ∈ RΣ . This is because in formula (1.1), heuristically, singular points tend to add to create new singularities; so, the analytic continuation of φ * ψ along a path which does not stay close to the origin is possible provided the path avoids Σ. In particular, if a closed discrete set Σ is closed under addition, then RΣ is closed under convolution; moreover, in this case, bounds for the analytic continuation of iterated convolutions φ1 * • • • * φn are given in [START_REF]Nonlinear analysis with resurgent functions[END_REF], where an analogue of Theorem 1.3 is proved for Σ-continuable functions. The notion of Σ-continuability is sufficient to cover interesting applications, e.g. differential equations of the saddle-node singularity type or difference equations like Abel's equation for one-dimensional tangent-to-identity diffeomorphisms, in which cases one may take for Σ a one-dimensional lattice of C. However, reflecting for a moment on the origin of resurgence in differential equations, one sees that one cannot handle situations beyond a certain level of complexity without replacing Σ-continuability by a more general notion like endless continuability. (z) + b 1 (z)ϕ + b 2 (z)ϕ 2 + • • • with b(z, w) = b m (z)w m ∈ z -1 C{z -1 , w} given, we may expect a formal solution whose Borel transform φ has singularities at ζ = -nλ, n ∈ Z >0 (because, as an effect of the nonlinearity, the singular points tend to add), i.e. φ will be Σ-continuable with Σ = {-λ, -2λ, . . .} (see [START_REF]Mould expansions for the saddle-node and resurgence monomials[END_REF] for a rigorous proof of this), but in the multidimensional case, for a system of r coupled equations with left-hand sides of the form dϕ j dz λ j ϕ j with λ 1 , . . . , λ r ∈ C * , we may expect that the Borel transforms φj of the components of the formal solution have singularities at the points ζ = -(n 1 λ 1 + • • • + n r λ r ), n ∈ Z r >0 ; this set of possible singular points may fail to be closed and discrete (depending on the arithmetical properties of (λ 1 , . . . , λ r )), hence, in general, we cannot expect these Borel transforms to be Σ-continuable for any Σ. Still, this does not prevent them from being always endlessly continuable, as proved in [START_REF] Kamimoto | Resurgence of formal series solutions of nonlinear differential and difference equations[END_REF]. -Another illustration of the need to go beyond Σ-continuability stems from parametric resurgence [START_REF] Écalle | Cinq applications des fonctions résurgentes[END_REF]. Suppose that we are given a holomorphic function b(t) globally defined on C, with isolated singularities ω ∈ S ⊂ C, e.g. 
a meromorphic function, and consider the differential equation (1.3) dϕ dt -zλϕ = b(t), where λ ∈ C * is fixed and z is a large complex parameter with respect to which we consider perturbative expansions. It is easy to see that there is a unique solution which is formal in z and analytic in t, namely φ(z, t) := -∞ k=0 λ -k-1 z -k-1 b (k) (t) , and its Borel transform φ(ζ, t) = -λ -1 b(t + λ -1 ζ) is singular at all points of the form ζ t,ω := λ(-t + ω), ω ∈ S. Now, if we add to the right-hand side of (1.3) a perturbation which is nonlinear in ϕ, we can expect to get a formal solution whose Borel transform possesses a rich set of singular points generated by the ζ t,ω 's, which might easily be too rich to allow for Σ-continuability with any Σ; however, we can still hope endless continuability. These are good motivations to study endless continuable functions. As already alluded to, we will use discrete filtered sets (d.f.s. for short) to work with them. A d.f.s. is a family of sets Ω = (Ω L ) L∈R ≥0 , where each Ω L is a finite set; we will define Ω-continuability when Ω is a d.f.s., thus extending Definition 1.4, and the space of endlessly continuable functions will appear as the totality of Ω-continuable functions for all possible d.f.s. This was already the approach of [START_REF] Candelpergher | Approche de la résurgence[END_REF], and it was used in [START_REF] Deleabaere | Endless continuability and convolution product[END_REF] to prove that the convolution product of two endlessly continuable functions is endlessly continuable, hence R is a subring of C[[z -1 ]]. However, to reach the conclusions of Theorem 1.3, we will need to give precise estimates on the convolution product of an arbitrary number of endlessly continuable functions, so as to prove the convergence of the series of holomorphic functions c k φ * k 1 1 * • • • * φ * kr r (Borel transform of the right-hand side of (1.2)) and to check its endless continuability. We will proceed similarly in the case of endless continuability w.r.t. bounded direction variation, using discrete doubly filtered sets. Notice that explicit bounds for iterated convolutions can be useful in themselves; in the context of Σ-continuability, such bounds were obtained in [START_REF]Nonlinear analysis with resurgent functions[END_REF] and they were used in [K 3 16] in a study in WKB analysis, where the authors track the analytic dependence upon parameters in the exponential of the Voros coefficient. As another contribution to the study of endlessly continuable functions, we will show how to contruct, for each discrete filtered set Ω, a universal Riemann surface X Ω whose holomorphic functions are in one-to-one correspondence with Ω-continuable functions. The plan of the paper is as follows. -Section 2 introduces discrete filtered sets, the corresponding Ω-continuable functions and their Borel images, the Ω-resurgent series, and discusses their relation with Definitions 1.1 and 1.2. The case of discrete doubly filtered sets and the spaces R dv and R dv is in Section 2.5. -Section 3 discusses the notion of Ω-endless Riemann surface and shows how to construct a universal object X Ω (Theorem 3.2). -In Section 4, we state and prove Theorem 4.8 which gives precise estimates for the convolution product of an arbitrary number of endlessly continuable functions. We also show the analogous statement for functions which are endlessly continuable w.r.t. bounded direction variation. 
-Section 5 is devoted to applications of Theorem 4.8: the proof of Theorem 1.3 and even of a more general and more precise version, Theorem 5.2, and an implicit resurgent function theorem, Theorem 5.3. Some of the results presented here have been announced in [START_REF] Kamimoto | Nonlinear analysis with endlessly continuable functions[END_REF]. 2 Discrete filtered sets and Ω-continuability In this section, we review the notions concerning discrete filtered sets (usually denoted by the letter Ω), the corresponding Ω-allowed paths and Ω-continuable functions. The relation with endless continuability is established, and sums of discrete filtered sets are defined in order to handle convolution of enlessly continuable functions. Discrete filtered sets We first introduce the notion of discrete filtered sets which will be used to describe singularity structure of endlessly continuable functions (the first part of the definition is adapted from [START_REF] Candelpergher | Approche de la résurgence[END_REF] and [START_REF] Deleabaere | Endless continuability and convolution product[END_REF]): Definition 2.1. We use the notation R ≥0 = {λ ∈ R | λ ≥ 0}. 1) A discrete filtered set, or d.f.s. for short, is a family Ω = (Ω L ) L∈R ≥0 where i) Ω L is a finite subset of C for each L, ii) Ω L 1 ⊆ Ω L 2 for L 1 ≤ L 2 , iii) there exists δ > 0 such that Ω δ = Ø. 2) Let Ω and Ω ′ be d.f.s. We write Ω ⊂ Ω ′ if Ω L ⊂ Ω ′ L for every L. 3) We call upper closure of a d.f.s. Ω the family of sets Ω = ( Ω) L∈R ≥0 defined by (2.1) ΩL := ε>0 Ω L+ε for L ∈ R ≥0 . It is easy to check that Ω is a d.f.s. and Ω ⊂ Ω. Example 2.2. Given a closed discrete subset Σ of C, the formula Ω(Σ) L := { ω ∈ Σ | |ω| ≤ L } for L ∈ R ≥0 defines a d.f.s. Ω(Σ) which coincides with its upper closure. From the definition of d.f.s., we find the following Lemma 2.3. For any d.f.s. Ω, there exists a real sequence (L n ) n≥0 such that 0 = L 0 < L 1 < L 2 < • • • and, for every integer n ≥ 0, L n < L < L n+1 ⇒ ΩLn = ΩL = Ω L . Proof. First note that (2.1) entails (2.2) ΩL := ε>0 ΩL+ε for every L ∈ R ≥0 (because Ω L+ε ⊂ ΩL+ε ⊂ ΩL+2ε ). Consider the weakly order-preserving integer-valued function L ∈ R ≥0 → N (L) := card ΩL . For each L the sequence k → N (L + 1 k ) must be eventually constant, hence there exists ε L > 0 such that, for all L ′ ∈ (L, L + ε L ], N (L ′ ) = N (L + ε L ), whence ΩL ′ = ΩL+ε L , and in fact, by (2.2), this holds also for L ′ = L. The conclusion follows from the fact that R ≥0 = k∈Z N -1 (k) and each non-empty N -1 (k) is convex, hence an interval, which by the above must be left-closed and right-open, hence of the form [L, L ′ ) or [L, ∞). Given a d.f.s. Ω, we set (2.3) S Ω := (λ, ω) ∈ R × C | λ ≥ 0 and ω ∈ Ω λ and denote by S Ω the closure of S Ω in R × C. We then call (2.4) M Ω := R × C \ S Ω (open subset of R × C) the allowed open set associated with Ω. Lemma 2.4. One has S Ω = S Ω and M Ω = M Ω. Proof. Suppose (λ, ω) ∈ S Ω. Then ω ∈ Ω λ+1/k for each k ≥ 1, hence (λ + 1 k , ω) ∈ S Ω , whence (λ, ω) ∈ S Ω . Suppose (λ, ω) ∈ S Ω . Then there exists a sequence (λ k , ω k ) k≥1 in S Ω which converges to (λ, ω). If ε > 0, then λ k ≤ λ + ε for k large enough, hence ω k ∈ Ω λ+ε , whence ω ∈ Ω λ+ε (because a finite set is closed); therefore (λ, ω) ∈ S Ω. Therefore, S Ω = S Ω = S Ω and M Ω = M Ω . Ω-allowed paths When dealing with a Lipschitz path γ : [a, b] → C, we denote by L(γ) its length. We denote by Π the set of all Lipschitz paths γ : [0, t * ] → C such that γ(0) = 0, with some real t * ≥ 0 depending on γ. 
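Before continuing with Ω-allowed paths, Definition 2.1 and Example 2.2 can be made concrete with a small computational model (Python below). It represents a d.f.s. by a finite list of pairs (L̄, ω) meaning that ω belongs to Ω_L exactly when L ≥ L̄, which is the shape taken by Example 2.2; the last function anticipates the sum operation (2.12) introduced in Section 2.4 and shows, at the level of d.f.s., how singular points add. This is only an illustrative finite model, not a data structure used in the paper.

```python
# A finite model of a discrete filtered set (d.f.s.):
# a list of pairs (Lbar, omega) meaning  omega in Omega_L  iff  L >= Lbar.
def dfs_from_sigma(sigma):
    """Example 2.2: Omega(Sigma)_L = {omega in Sigma : |omega| <= L}."""
    return [(abs(w), w) for w in sigma]

def omega_at(dfs, L):
    """The finite set Omega_L."""
    return {w for (lbar, w) in dfs if lbar <= L}

def dfs_sum(dfs1, dfs2):
    """Sum of two d.f.s., cf. (2.12): points add and their thresholds add."""
    return dfs1 + dfs2 + [(l1 + l2, w1 + w2) for (l1, w1) in dfs1
                                             for (l2, w2) in dfs2]

# Example with the finite set Sigma = {-1, -2, sqrt(2)}.
dfs = dfs_from_sigma([-1.0, -2.0, 2**0.5])
print(sorted(omega_at(dfs, 1.5)))                   # only the points with |omega| <= 1.5
print(sorted(omega_at(dfs_sum(dfs, dfs), 3.0)))
# contains e.g. -1 + sqrt(2), reachable because its threshold 1 + sqrt(2) <= 3
```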
Given such a γ ∈ Π and t ∈ [0, t * ], we denote by γ |t := γ| [0,t] ∈ Π the restriction of γ to the interval [0, t]. Notice that L(γ |t ) is also Lipschitz continuous on [0, t * ] since γ ′ exists a.e. and is essentially bounded by Rademacher's theorem. Definition 2.5. Given a d.f.s. Ω, we call Ω-allowed path any γ ∈ Π such that γ(t) := L(γ |t ), γ(t) ∈ M Ω for all t. We denote by Π Ω the set of all Ω-allowed paths. Notice that, given t * ≥ 0, (2.5) if t ∈ [0, t * ] → γ(t) = λ(t), γ(t) ∈ M Ω is a piecewise C 1 path such that γ(0) = (0, 0) and λ ′ (t) = |γ ′ (t)| for a.e. t, then γ ∈ Π Ω . In view of Lemmas 2.3 and 2.4, we have the following characterization of Ω-allowed paths: Lemma 2.6. Let Ω be a d.f.s. Then Π Ω = Π Ω and, given γ ∈ Π, the followings are equivalent: 1) γ ∈ Π Ω , 2) γ(t) ∈ C \ ΩL(γ |t ) for every t, 3) for every t, there exists n such that L(γ |t ) < L n+1 and γ(t) ∈ C \ ΩLn (using the notation of Lemma 2.3). Proof. Obvious. Notation 2.7. For L, δ > 0, we set M δ,L Ω := (λ, ζ) ∈ R × C | dist (λ, ζ), S Ω ≥ δ and λ ≤ L , (2.6) Π δ,L Ω := γ ∈ Π Ω | L(γ |t ), γ(t) ∈ M δ,L Ω for all t , (2.7) where dist(• , •) is the Euclidean distance in R × C ≃ R 3 . Note that M Ω = δ,L>0 M δ,L Ω , Π Ω = δ,L>0 Π δ,L Ω . Ω-continuable functions and Ω-resurgent series Definition 2.8. Given a d.f.s. Ω, we call Ω-continuable function a holomorphic germ φ ∈ C{ζ} which can be analytically continued along any path γ ∈ Π Ω . We denote by RΩ the set of all Ω-continuable functions and define RΩ := B -1 C δ ⊕ RΩ ⊂ C[[z -1 ]] to be the set of Ω-resurgent series. Remark 2.9. Given a closed discrete subset Σ of C, the Σ-continuability in the sense of Definition 1.4 is equivalent to the Ω(Σ)-continuability in the sense of Definition 2.8 for the d.f.s. Ω(Σ) of Example 2.2. Remark 2.10. Observe that Ω ⊂ Ω ′ implies S Ω ⊂ S Ω ′ , hence M Ω ′ ⊂ M Ω and Π Ω ′ ⊂ Π Ω , therefore Ω ⊂ Ω ′ ⇒ RΩ ⊂ RΩ ′ . Remark 2.11. Notice that, for the trivial d.f.s. Ω = Ø, RØ = O(C), hence O(C) ⊂ RΩ for every d.f.s. Ω, i.e. entire functions are always Ω-continuable. Consequently, convergent series are always Ω-resurgent: C{z -1 } ⊂ RΩ . However, RΩ = O(C) does not imply Ω = Ø (consider for instance the d.f.s. Ω defined by Ω L = Ø for 0 ≤ L < 2 and Ω L = {1} for L ≥ 2). In fact, one can show RΩ = O(C) ⇔ ∀L > 0, ∃L ′ > L such that Ω L ′ ⊂ { ω ∈ C | |ω| < L }. Remark 2.12. In view of Lemma 2.6, we have RΩ = RΩ . Therefore, when dealing with Ωresurgence, we can always suppose that Ω coincides with its upper closure (by replacing Ω with Ω). We now show the relation between resurgence in the sense of Definition 1.2 and Ω-resurgence in the sense of Definition 2.8. Theorem 2.13. A formal series φ ∈ C[[z -1 ]] is resurgent if and only if there exists a d.f.s. Ω such that φ is Ω-resurgent. In other words, (2.8) R = Ω d.f.s. RΩ , R = Ω d.f.s. RΩ . Before proving Theorem 2.13, we state a technical result. Lemma 2.14. Suppose that we are given a germ φ ∈ C{ζ} that can be analytically continued along a path γ : [0, t * ] → C of Π, and that F is a finite subset of C. Then, for each ε > 0, there exists a path γ * : [0, t * ] → C of Π such that • γ * (0, t * ) ⊂ C \F , • L(γ * ) < L(γ) + ε, • γ * (t * ) = γ(t * ), the germ φ can be analytically continued along γ * and the analytic continuations along γ and γ * coincide. Proof of Lemma 2.14. Without loss of generality, we can assume that γ [0, t * ] is not reduced to {0} and that t → L(γ |t ) is strictly increasing. 
The analytic continuation assumption allows us to find a finite subdivision 0 = t 0 < • • • < t m = t * of [0, t * ] together with open discs ∆ 0 , . . . , ∆ m so that, for each k, γ(t k ) ∈ ∆ k , the analytic continuation of φ along γ |t k extends holomorphically to ∆ k , and γ [t k , t k+1 ] ⊂ ∆ k if k < m. For each k ≥ 1, let us pick s k ∈ (t k-1 , t k ) such that γ [s k , t k ] ⊂ ∆ k-1 ∩ ∆ k ; increasing the value of s k if necessary, we can assume γ(s k ) / ∈ F . Let us also set s 0 := 0 and s m+1 := t * , so that 0 ≤ k ≤ m ⇒                γ [s k , s k+1 ] ⊂ ∆ k , the analytic continuation of φ along γ |s k is holomorphic in ∆ k γ(s k ) / ∈ F except maybe if k = 0, γ(s k+1 ) / ∈ F except maybe if k = m. We now define γ * by specifying its restriction γ * | [s k ,s k+1 ] for each k so that it has the same endpoints as γ| [s k ,s k+1 ] and, -if the open line segment S := γ(s k ), γ(s k+1 ) is contained in C \F , then we let γ * | [s k ,s k+1 ] start at γ(s k ) and end at γ(s k+1 ) following S, by setting γ * (t) := γ(s k ) + t-s k s k+1 -s k γ(s k+1 ) -γ(s k ) for t ∈ [s k , s k+1 ], -if not, then S ∩ F = {ω 1 , . . . , ω ν } with ν ≥ 1 (depending on k); we pick ρ > 0 small enough so that πρ < min 1 2 |ω i -γ(s k )|, 1 2 |ω i -γ(s k+1 )|, 1 2 |ω j -ω i |, ε ν(m+1) | 1 ≤ i, j, ≤ ν, i = j and we let γ * | [s k ,s k+1 ] follow S except that it circumvents each ω i by following a half-circle of radius ρ contained in ∆ k . This way, γ * | [s k ,s k+1 ] stays in ∆ k ; the resulting path γ * : [0, t * ] → C is thus a path of analytic continuation for φ and the analytic continuations along γ and γ * coincide. On the other hand, the length of γ * | [s k ,s k+1 ] is < |γ(s k ) -γ(s k+1 )| + ε m+1 , whereas the length of γ| [s k ,s k+1 ] is ≥ |γ(s k ) -γ(s k+1 )|, hence L(γ * ) < L(γ) + ε. Proof of Theorem 2.13. Suppose first that Ω is a d.f.s. and φ ∈ RΩ . Then, for every L > 0, φ meets the requirement of Definition 1.1 with F L = ΩL , hence φ ∈ R. Thus RΩ ⊂ R, which yields one inclusion in (2.8). Suppose now φ ∈ R. In view of Definition 1.1, the radius of convergence δ of φ is positive and, for each positive integer n, we can choose a finite set F n such that (2.9) the germ φ can be analytically continued along any path γ : [0, 1] → C of Π such that L(γ) < (n + 1)δ and γ (0, 1] ⊂ C \F n . Let F 0 := Ø. The property (2.9) holds for n = 0 too. For every real L ≥ 0, we set Ω L := n k=0 F k with n := ⌊L/δ⌋. One can check that Ω := (Ω L ) L∈R ≥0 is a d.f.s. which coincides with its upper closure. We will show that φ ∈ RΩ . Pick an arbitrary γ : [0, 1] → C such that γ ∈ Π Ω . It is sufficient to prove that φ can be analytically continued along γ. Our assumption amounts to γ(t) ∈ C \Ω L(γ |t ) for each t ∈ [0, 1]. Without loss of generality, we can assume that γ [0, 1] is not reduced to {0} and that t → L(γ |t ) is strictly increasing. Let N := ⌊L(γ)/δ⌋. We define a subdivision 0 = t 0 < t 1 < • • • < t N ≤ 1 by the requirement L(γ |tn ) = nδ and set I n := [t n , t n+1 ) for 0 ≤ n < N , I N := [t N , 1]. For each integer n such that 0 ≤ n ≤ N , (2.10) t ∈ I n ⇒ nδ ≤ L(γ |t ) < (n + 1)δ, thus Ω L(γ |t ) = n k=0 F k , in particular (2.11) t ∈ I n ⇒ γ(t) ∈ C \F n . Let us check by induction on n that φ can be analytically continued along γ |t for any t ∈ I n . If t ∈ I 0 , then γ |t has length < δ and the conclusion follows from (2.9). Suppose now that 1 ≤ n ≤ N and that the property holds for n -1. Let t ∈ I n . By (2.10)-(2.11), we have L(γ |t ) < (n + 1)δ and γ [t n , t] ⊂ C \F n . 
-If γ (0, t n ) ∩ F n is empty, then the conclusion follows from (2.9). -If not, then, since C \F n is open, we can pick t * < t n so that γ [t * , t] ⊂ C \F n , and the induction hypothesis shows that φ can be analytically continued along γ |t * . We then apply Lemma 2.14 to γ |t * with F = F n and ε = (n + 1)δ -L(γ |t ): we get a path γ * : [0, t * ] → C which defines the same analytic continuation for φ as γ |t * , avoids F n and has length < L(γ |t * ) + ε. The concatenation of γ * with γ| [t * ,t] is a path γ * * of length < (n + 1)δ which avoids F n , so it is a path of analytic continuation for φ because of (2.9), and so is γ itself. Sums of discrete filtered sets It is easy to see that, if Ω and Ω ′ are d.f.s., then the formula (2.12) (Ω * Ω ′ ) L := { ω 1 + ω 2 | ω 1 ∈ Ω L 1 , ω 2 ∈ Ω ′ L 2 , L 1 + L 2 = L } ∪ Ω L ∪ Ω ′ L for L ∈ R ≥0 defines a d.f.s. Ω * Ω ′ . We call it the sum of Ω and Ω ′ . The proof of the following lemma is left to the reader. Lemma 2.15. The law * on the set of all d.f.s. is commutative and associative. The formula Ω * n := Ω * • • • * Ω n times (for n ≥ 1) defines an inductive system, which gives rise to a d.f.s. Ω * ∞ := lim -→ n Ω * n . As shown in [START_REF] Candelpergher | Approche de la résurgence[END_REF] and [START_REF] Deleabaere | Endless continuability and convolution product[END_REF], the sum of d.f.s. is useful to study the convolution product: Theorem 2.16 ([OD15] ). Assume that Ω and Ω ′ are d.f.s. and φ ∈ RΩ , ψ ∈ RΩ ′ . Then the convolution product φ * ψ is Ω * Ω ′ -continuable. Remark 2.17. Note that the notion of Σ-continuability in the sense of Definition 1.4 does not give such flexibility, because there are closed discrete sets Σ and Σ ′ such that Ω(Σ) * Ω(Σ ′ ) = Ω(Σ ′′ ) for any closed discrete Σ ′′ (take e.g. Σ = Σ ′ = (Z >0 √ 2) ∪ Z <0 ), and in fact there are Σ-continuable functions φ such that φ * φ is not Σ ′′ -continuable for any Σ ′′ . In view of Theorem 2.13, a direct consequence of Theorem 2.16 is that the space of endlessly continuable functions R is stable under convolution, and the space of resurgent formal series R is a subring of the ring of formal series C[[z -1 ]]. Given φ ∈ RΩ ∩ z -1 C[[z -1 ]], Theorem 2.16 guarantees the Ω * k -resurgence of φk for every integer k, hence its Ω * ∞ -resurgence. This is a first step towards the proof of the resurgence of F ( φ) for F (w) = c k w k ∈ C{w}, i.e. Theorem 1.3 in the case r = 1, however some analysis is needed to prove the convergence of c k φk in some appropriate topology. What we need is a precise estimate for the convolution product of an arbitrary number of endlessly continuable functions, and this will be the content of Theorem 4.8. In Section 5, the substitution problem will be discussed in a more general setting, resulting in Theorem 5.2, which is more general and more precise than Theorem 1.3. Discrete doubly filtered sets and a more general definition of resurgence We now define the spaces R dv and R dv which were alluded to in the introduction. We first require the notion of "direction variation" of a C 1+Lip path. We denote by Π dv the set of all C 1 paths γ belonging to Π, such that γ ′ is Lipschitz and never vanishes. By Rademacher's theorem, γ ′′ exists a.e. on the interval of definition [0, t * ] of γ and is essentially bounded. We can thus define the direction variation V (γ) of γ ∈ Π dv by t) with a real-valued Lipschitz function θ, and then Im γ ′′ (t) γ ′ (t) = θ ′ , hence V (γ) is nothing but the length of the path θ). 
Note that the function t → V (γ |t ) is Lipschitz. V (γ) := t * 0 Im γ ′′ (t) γ ′ (t) dt (notice that one can write γ ′ (t) = |γ ′ (t)| e iθ( Definition 2.18. A convergent power series φ ∈ C{ζ} is said to be endlessly continuable w.r.t. bounded direction variation (and we write φ ∈ R dv ) if, for every real L, M > 0, there exists a finite subset F L,M of C such that φ can be analytically continued along every path γ : [0, 1] → C such that γ ∈ Π dv , L(γ) < L, V (γ) < M , and γ (0, 1] ⊂ C \F L,M . We also set R dv := B -1 (C δ ⊕ R dv ). Note that R ⊂ R dv ⊂ C{ζ} and R ⊂ R dv ⊂ C[[z -1 ]]. Definition 2.19. A discrete doubly filtered set, or d.d.f.s. for short, is a family Ω = (Ω L,M ) L,M ∈R ≥0 that satisfies i) Ω L,M is a finite subset of C for each L and M , ii) Ω L 1 ,M 1 ⊆ Ω L 2 ,M 2 when L 1 ≤ L 2 and M 1 ≤ M 2 , iii) there exists δ > 0 such that Ω δ,M = Ø for all M ≥ 0. Notice that a d.f.s. Ω can be regarded as a d.d.f.s. Ω dv by setting Ω dv L,M := Ω L for L, M ≥ 0. For a d.d.f.s. Ω, we set S Ω := (µ, λ, ω) ∈ R 2 × C | µ ≥ 0, λ ≥ 0 and ω ∈ Ω λ,µ and M Ω := R 2 × C \ S Ω , where S Ω is the closure of S Ω in R 2 × C. We call Ω-allowed path any γ ∈ Π dv such that (2.13) γ dv (t) := V (γ |t ), L(γ |t ), γ(t) ∈ M Ω for all t. We denote by Π dv Ω the set of all Ω-allowed paths. Finally, the set of Ω-continuable functions (resp. Ω-resurgent series) is defined in the same way as in Definition 2.8, and denoted by R dv Ω (resp. R dv Ω ). Arguing as for Theorem 2.13, one obtains (2.14) R dv = Ω d.d.f.s. R dv Ω , R dv = Ω d.d.f.s. R dv Ω . The sum Ω * Ω ′ of two d.d.f.s. Ω and Ω ′ is the d.d.f.s. defined by setting, for L, M ∈ R ≥0 , (2.15) (Ω * Ω ′ ) L,M := { ω 1 + ω 2 | ω 1 ∈ Ω L 1 ,M , ω 2 ∈ Ω ′ L 2 ,M , L 1 + L 2 = L } ∪ Ω L,M ∪ Ω ′ L,M . 3 The endless Riemann surface associated with a d.f.s. We introduce the notion of Ω-endless Riemann surfaces for a d.f.s. Ω as follows: Definition 3.1. We call Ω-endless Riemann surface any triple (X, p, 0) such that X is a connected Riemann surface, p : X → C is a local biholomorphism, 0 ∈ p -1 (0), and any path γ : [0, 1] → C of Π Ω has a lift γ : [0, 1] → X such that γ(0) = 0. A morphism of Ω-endless Riemann surfaces is a local biholomorphism q : (X, p, 0) → (X ′ , p ′ , 0 ′ ) that makes the following diagram commutative: (X, 0) (X ′ , 0 ′ ) (C, 0) G G 0 0 ❁ ❁ ❁ ❁ ❁ ❁ ❁ ❁ ❁ ❁ Ð Ð ✂ ✂ ✂ ✂ ✂ ✂ ✂ ✂ ✂ ✂ q p p ′ In this section, we prove the existence of an initial object (X Ω , p Ω , 0 Ω ) in the category of Ω-endless Riemann surfaces: Theorem 3.2. There exists an Ω-endless Riemann surface (X Ω , p Ω , 0 Ω ) such that, for any Ωendless Riemann surface (X, p, 0), there is a unique morphism q : (X Ω , p Ω , 0 Ω ) → (X, p, 0). The Ω-endless Riemann surface (X Ω , p Ω , 0 Ω ) is unique up to isomorphism and X Ω is simply connected. Construction of X Ω We first define "skeleton" of Ω: Definition 3.3. Let V Ω ⊂ ∞ n=1 (C × Z) n be the set of vertices v := ((ω 1 , σ 1 ), • • • , (ω n , σ n )) ∈ (C × Z) n that satisfy the following conditions: 1) (ω 1 , σ 1 ) = (0, 0) and (ω j , σ j ) ∈ C ×(Z \{0}) for j ≥ 2, 2) ω j = ω j+1 for j = 1, • • • , n -1, 3) ω j ∈ ΩL j (v) with L j (v) := j-1 i=1 |ω i+1 -ω i | for j = 2, • • • , n. 
Let E Ω ⊂ V Ω × V Ω be the set of edges e = (v ′ , v ) that satisfy one of the following conditions: i) v = ((ω 1 , σ 1 ), • • • , (ω n , σ n )) and v ′ = ((ω 1 , σ 1 ), • • • , (ω n , σ n ), (ω n+1 , ±1)), ii) v = ((ω 1 , σ 1 ), • • • , (ω n , σ n )) and v ′ = ((ω 1 , σ 1 ), • • • , (ω n , σ n + 1)) with σ n ≥ 1, iii) v = ((ω 1 , σ 1 ), • • • , (ω n , σ n )) and v ′ = ((ω 1 , σ 1 ), • • • , (ω n , σ n -1)) with σ n ≤ -1. We denote the directed tree diagram (V Ω , E Ω ) by Sk Ω and call it skeleton of Ω. Notation 3.4. For v ∈ V Ω ∩ (C × Z) n , we set ω(v) := ω n and L(v) := L n (v). From the definition of Sk Ω , we find the following Lemma 3.5. For each v ∈ V Ω \ {(0, 0)}, there exists a unique vertex v ↑ ∈ V Ω such that (v, v ↑ ) ∈ E Ω . To each v ∈ V Ω we assign a cut plane U v , defined as the open set U v := C \ C v ∪ v ′ → i v C v ′ →v , where v ′ → i v is the union over all the vertices v ′ ∈ V Ω that have an edge (v ′ , v) ∈ E Ω of type i), ω( ) ω( ) ω( 1 ) ′ ⭡ ω( 2 ) ′ ω( 3 ) ′ 1 → ´ 2 → ´ 3 → ´Figure 1: The set U v . C v := Ø when v = (0, 0), {ω n -s(ω n -ω n-1 ) | s ∈ R ≥0 } when v = (0, 0), C v ′ →v := {ω n+1 + s(ω n+1 -ω n ) | s ∈ R ≥0 }. We patch the U v 's along the cuts according to the following rules: Suppose first that (v ′ , v) is an edge of type i), with v ′ = (v, (ω n+1 , σ n+1 )) ∈ V Ω . To it, we assign a line segment or a half-line ℓ v ′ →v as follows: If there exists u = (v, (ω ′ n+1 , ±1)) ∈ V Ω such that ω ′ n+1 ∈ C v ′ →v \ {ω n+1 }, take u (0) = (v, (ω (0) n+1 , ±1)) ∈ V Ω so that |ω (0) n+1 - ω n+1 ) | s ∈ (0, 1)} to (v ′ , v). Otherwise, we assign the open half-line ℓ v ′ →v := C v ′ →v \ {ω n+1 } to (v ′ , v). Since each Ω L (L ≥ 0) is finite, we can take a connected neighborhood U v ′ →v of ℓ v ′ →v so that (3.1) U v ′ →v \ ℓ v ′ →v = U + v ′ →v ∪ U - v ′ →v and U ± v ′ →v ⊂ U v ∩ U v ′ , where U ± v ′ →v := {ζ ∈ U v ′ →v | ±Im(ζ • ζ ′ ) > 0 for ζ ′ ∈ ℓ v ′ →v }. Then, if σ n+1 = 1, we glue U v and U v ′ along U - v ′ →v , whereas if σ n+1 = -1 we glue them along U + v ′ →v . Suppose now that (v ′ , v ) is an edge ot type ii) and iii). As in the case of i), if there exists u = (v, (ω ′ n+1 , ±1)) ∈ V Ω such that ω ′ n+1 ∈ C v \ {ω n }, then we take u (0) = (v, (ω (0) n+1 , ±1)) ∈ V Ω so that |ω (0) n+1 -ω n | is minimum and assign ℓ v ′ →v := {ω n + s(ω (0) n+1 -ω n ) | s ∈ (0, 1)} to (v ′ , v). Otherwise, we assign ℓ v ′ →v := C v \{ω n } to (v ′ , v). Then, we take a connected neighborhood U v ′ →v of ℓ v ′ →v satisfying (3.1), and glue U v and U v ′ along U - v ′ →v in case ii), and along U + v ′ →v in case iii). Patching the U v 's and the U v ′ →v 's according to the above rules, we obtain a Riemann surface X Ω , in which we denote by 0 Ω the point corresponding to 0 ∈ U (0,0) . The map p Ω : X Ω → C is naturally defined using local coordinates U v and U v ′ →v . ω( ) ′ ω( ) (0) → + → -- Figure 2: The set U v ′ →v . Let U e , ℓ e (e ∈ E Ω ) and U v (v ∈ V Ω ) respectively denote the subsets of X Ω defined by U e , ℓ e and U v . Notice that each ζ ∈ X Ω belongs to one of the ℓ e 's or U v 's (e ∈ E Ω or v ∈ V Ω ). Therefore, we have the following decomposition of X Ω : X Ω = v∈V Ω U v ⊔ e∈E Ω ℓ e . Definition 3.6. We define a function L : X Ω → R ≥0 by the following formula: L(ζ) := L(v) + |p(ζ) -ω(v)| when ζ ∈ U v ⊔ ℓ v→v ↑ . We call L(ζ) the canonical distance of ζ from 0 Ω . We obtain from the construction of L the following Lemma 3.7. 
The function L : X Ω → R ≥0 is continuous and satisfies the following inequality for every γ ∈ Π Ω : L(γ(t)) ≤ L(γ |t ) for t ∈ [0, 1]. We now show the fundamental properties of X Ω . Lemma 3.8. The Riemann surface X Ω constructed above is simply connected. Proof. We first note that, since Sk Ω is connected, X Ω is path-connected. Let γ : [0, 1] → X Ω be a path such that γ(0) = γ(1). Since the image of γ is a compact set in X Ω , we can take finite number of vertices {v j } p j=1 ⊂ V Ω and {e j } q j=1 ⊂ E Ω so that v 1 = (0, 0) and the image of γ is covered by {U v j } p j=1 and {U e j } q j=1 . Since each of {v j } p j=2 and {e j } q j=1 has a path to v 1 that contains it, interpolating finite number of the vertices and the edges if necessary, we may assume that the diagram Sk defined by {v j } p j=1 and {e j } q j=1 are connected in Sk Ω . Now, let U be the union of {U v j } p j=1 and {U e j } q j=1 . Since all of the open sets are simply connected and Sk is acyclic, we can inductively confirm using the van Kampen's theorem that U is simply connected. Therefore, the path γ is contracted to the point 0 Ω . It proves the simply connectedness of X Ω . Lemma 3.9. The Riemann surface X Ω constructed above is Ω-endless. Proof. Take an arbitrary Ω-allowed path γ and δ, L > 0 so that γ ∈ Π δ,L Ω . Let V δ,L Ω denote the set of vertices v = ((ω 1 , σ 1 ), • • • , (ω n , σ n )) ∈ V Ω that satisfy L δ (v) := L n (v) + n j=2 (|σ j | -1)δ ≤ L and set E δ,L Ω := {(v, v ↑ ) ∈ E Ω | v ∈ V δ,L Ω }. Notice that V δ,L Ω and E δ,L Ω are finite. We set for ε > 0 and v ∈ V δ,L Ω U δ,L,ε v := {ζ ∈ U v | inf (v ′ ,v)∈E Ω |ζ -ω(v ′ )| ≥ δ, D ε ζ ⊂ U v } ∩ D L-L δ (v) ω(v) , where D r ζ := { ζ ∈ C | | ζ -ζ| ≤ r} for ζ ∈ C, r > 0. We also set for ε > 0 and (v, v ↑ ) ∈ E δ,L Ω U δ,L,ε v→v ↑ := {ζ ∈ U v→v ↑ | min j=1,2 |ζ -ωj | ≥ δ, inf ζ∈ℓv→v ↑ |ζ -ζ| ≤ ε} ∩ D L-L δ (v ↑ ) ω(v) , where ω1 := ω(v) and ω2 is the other endpoint of ℓ v→v ↑ if it exists and ω2 := ω(v) otherwise. Since E δ,L Ω are finite set, we can take ε > 0 sufficiently small so that D ε ζ ⊂ U v→v ↑ for all ζ ∈ U δ,L,ε v→v ↑ and (v, v ↑ ) ∈ E δ,L Ω . We fix such a number ε > 0. Now, let I be the maximal interval such that the restriction of γ to I has a lift γ on X Ω . Obviously, I = Ø and I is open. Assume that I = [0, a) for a ∈ (0, 1]. We take b ∈ (0, a) so that L(γ |a ) -L(γ |b ) < ε. Then, notice that, since γ ∈ Π δ,L Ω and γ |b has a lift on X Ω , γ(b) is in U δ,L,ε v for v ∈ V δ,L Ω or U δ,L,ε e for e ∈ E δ,L Ω . Since D ε γ(b) ⊂ U v (resp., D ε γ(b) ⊂ U e ) when γ(b) ∈ U δ,L,ε v (resp., γ(b) ∈ U δ,L,ε e ), Proof of Theorem 3.2 We first show the following: Lemma 3.10. For all ε > 0 and ζ ∈ X Ω , there exists an Ω-allowed path γ such that L(γ) < L(ζ) + ε and its lift γ on X Ω satisfies γ(0) = 0 Ω and γ(1) = ζ. Proof. Let ζ ∈ U v for v = ((ω 1 , σ 1 ), • • • , (ω n , σ n )) . We consider a polygonal curve P 0 ζ obtained by connecting line segments [ω j , ω j+1 ] (j = 1, • • • , n), where we set ω n+1 := p Ω (ζ) for the sake of notational simplicity. Now, collect all the points ω j,k on (ω j , ω j+1 ) such that (L j,k , ω) ∈ S Ω , where L j,k := L j (v) + |ω j,k -ω j |. Since (3.2) S Ω ∩ {λ ∈ R ≥0 | |λ| ≤ L} × C is written for each L > 0 by the union of finite number of line segments of the form {λ ∈ R ≥0 | L ≤ λ ≤ L} × {ω} ( L > 0, ω ∈ C), such points are finite. We order ω j and ω j,k so that L j (v) and L j,k increase along the order and denote the sequence by (ω ′ 1 , ω ′ 2 , • • • , ω ′ n ′ ). We set L ′ j := j-1 i=1 |ω ′ i+1 -ω ′ i |. 
We extend v to v ′ = ((ω ′ 1 , σ ′ 1 ), • • • , (ω ′ n ′ , σ ′ n ′ )) by setting σ ′ j = 1 (resp., σ ′ j = -1) when (ω ′ j , L ′ j ) = (ω i,k , L i,k ) for some i, k and σ i+1 ≥ 1 (resp., σ i+1 ≤ -1). Then, in view of (3.2), we can take δ > 0 so that {(L ′ j + |ζ ′ -ω ′ j | + δ, ζ ′ ) | ζ ′ ∈ (ω ′ j , ω ′ j+1 )} ∩ S Ω = Ø, {(L ′ j + δ, ζ ′ ) | 0 < |ζ ′ -ω ′ j | < δ} ∩ S Ω = Ø hold for j = 1, • • • , n ′ . Let ω ′ j,-(resp., ω ′ j,+ ) be the intersection point of [ω ′ j-1 , ω ′ j ] (resp., [ω ′ j , ω ′ j+1 ]) and C ε ′ ω ′ j := {ζ ′ ∈ C | |ζ ′ -ω ′ j | = ε ′ } for sufficiently small ε ′ > 0. We replace the part [ω ′ j,-, ω ′ j ] ∪ [ω ′ j , ω ′ j,+ ] of ℓ with a path that goes anti-clockwise (resp., clockwise) along C ε ′ ω ′ j from ω ′ j,-to ω ′ j,+ and turns around ω ′ j (|σ ′ j | -1)-times when σ ′ j ≥ 1 (resp., when σ ′ j ≤ -1). Let P ε ′ ζ denote a path ζ defines an Ω-allowed path and its lift P ε ′ ζ on X Ω satisfies the conditions. Further, by taking ε ′ sufficiently small so that 2πε ′ n ′ j=2 |σ ′ j | < ε, we find L(P ε ′ ζ ) < L(ζ) + ε, hence one can take γ = P ε ′ ζ . When ζ ∈ ℓ e for an edge e = (v, v ↑ ) ∈ E Ω , we can construct such a path P ε ′ ζ ∈ Π Ω Q ε ′ ζ,ζ ′ ∈ Π Ω (ζ ′ ∈ U ζ ) of the path γ = P ε ′ ζ constructed in the proof of Lemma 3.10 such that L(Q ε ′ ζ,ζ ′ ) < L(ζ ′ ) + 2ε for each ζ ′ ∈ U ζ and the lift Q ε ′ ζ,ζ ′ on X Ω satisfies Q ε ′ ζ,ζ ′ (0) = 0 and Q ε ′ ζ,ζ ′ (1) = ζ ′ . Indeed, the deformation of P ε ′ ζ is concretely given as follows: - When ζ ∈ U v for v ∈ V Ω , taking a neighborhood U ζ ⊂ U v of ζ sufficiently small, we find that the family of the paths P ε ′ ζ ′ (ζ ′ ∈ U ζ ) constructed in the proof of Lemma 3.10 gives such a deformation. -When ζ ∈ ℓ e for e ∈ E Ω , we can take a neighborhood U ζ ⊂ U e of ζ so that [ω ′ n ′ ,+ (ζ ′ ), p Ω (ζ ′ )] ⊂ U e for all ζ ′ ∈ U ζ , where ω ′ n ′ ,+ (ζ ′ ) is the intersection point of [ω ′ n ′ , p Ω (ζ ′ )] and C ε ′ ω ′ n ′ . Define a deformation Q ε ′ ζ,ζ ′ (ζ ′ ∈ U e ) of P ε ′ ζ by continuously varying the arc of C ε ′ ω ′ n ′ from ω ′ n ′ ,-to ω ′ n ′ ,+ (ζ ′ ) and the line segment [ω ′ n ′ ,+ (ζ ′ ), p Ω (ζ ′ )] and fixing the other part of P ε ′ ζ . Then, shrinking U ζ if necessary, we find that Q ε ′ ζ,ζ ′ satisfies Q ε ′ ζ,ζ ′ ∈ Π Ω and L(Q ε ′ ζ,ζ ′ ) < L(ζ ′ ) + 2ε for each ζ ′ ∈ U ζ . Beware that, when the edge (v, v ↑ ) is the type i), Q ε ′ ζ,ζ ′ is different from P ε ′ ζ ′ for ζ ∈ ℓ v→v ↑ and ζ ′ ∈ U ζ ∩ U v ↑ . On the other hand, Q ε ′ ζ,ζ ′ = P ε ′ ζ ′ holds for ζ ′ ∈ U ζ ∩ U v . When the edge (v, v ↑ ) is the type ii) or iii), Q ε ′ ζ,ζ ′ = P ε ′ ζ ′ holds for ζ ∈ ℓ v→v ↑ and ζ ′ ∈ U ζ . Let (X, p, 0) be an Ω-endless Riemann surface. For each ζ ∈ X Ω , take γ ∈ Π Ω such that γ(1) = ζ and let γ X be its lift on X. Then, define a map q : X Ω → X by q(ζ) = γ X (1). We now show the well-definedness of q. For that purpose, it suffices to prove the following Proposition 3.12. Let γ 0 , γ 1 ∈ Π Ω such that γ 0 (1) = γ 1 (1). Then, there exists a continuous family (H s ) s∈[0,1] of Ω-allowed paths satisfying the conditions 1. H s (0) = 0 and H s (1) = γ 0 (1) for all s ∈ [0, 1], 2. H j = γ j for j = 0, 1. The proof of Proposition 3.12 is reduced to the following Lemma 3.13. For each γ ∈ Π Ω and ε ′ > 0 sufficiently small, there exists a continuous family ( Hs ) s∈[0,1] of Ω-allowed paths satisfying the following conditions: 1. L Hs ≤ L(γ |s ) and Hs (1) = γ(s) for all s ∈ [0, 1], 2. Hs = P ε ′ γ(s) for s = 0, 1. Notice that, since γ(0 ) = 0 Ω , P ε ′ γ(0) is the constant map P ε ′ γ(0) = 0. 
Reduction of Proposition 3.12 to Lemma 3.13. For each γ ∈ Π Ω and s ∈ (0, 1], define H s using Hs constructed in Lemma 3.13 as follows: H s (t) := Hs (t/s) when t ∈ [0, s], γ(t) when t ∈ [s, 1]. It extends continuously to s = 0 and gives a continuous family (H s ) s∈[0,1] of Ω-allowed paths satisfying the assumption in Proposition 3.12 with γ 0 = γ and γ 1 = P ε ′ γ(1) . Now, let γ 0 and γ 1 be the Ω-allowed paths satisfying the assumption in Proposition 3.12. Applying the above discussion to each of γ 0 and γ 1 , we obtain two families of Ω-allowed paths connecting them to P ε ′ γ 0 (1) and, concatenating the deformations at P ε ′ γ 0 (1) , we obtain a deformation (H s ) s∈[0,1] satisfying the conditions in Proposition 3.12. Proof of Lemma 3.13. Take δ, L > 0 so that γ ∈ Π δ,L Ω . We first show the following: (3.3) When γ(t 0 ) ∈ U v→(0,0) for t 0 ∈ (0, 1] and v = ((0, 0), (ω 2 , σ 2 )), the following estimate holds for t ∈ [t 0 , 1]: L(γ(t)) + |ω 2 | 2 + δ 2 -|ω 2 | ≤ L(γ |t ). Notice that, since γ ∈ Π δ,L Ω , the length L(γ |t 0 ) of γ |t 0 must be longer than that of the polygonal curve C obtained by concatenating the line segments [0, ω 2 + δe iθ ] and [ω 2 + δe iθ , γ(t 0 )], where θ = arg(ω 2 )σ 2 π/2. Then, we find that, for an arbitrary ε > 0, taking ε ′ > 0 sufficiently small, the path γε ′ obtained by concatenating the paths P ε ′ γ(t 0 ) and γ| [t 0 ,1] satisfies γε ′ ∈ Π Ω , γε ′ (t) = γ(t) and L(γ ε ′ |t ) ≤ L(γ 0 |t ) + ε for t ∈ [t 0 , 1]. Therefore, we have L(γ(t)) ≤ L(γ 0 |t ) for t ∈ [t 0 , 1] Since L(C) ≥ |ω 2 | 2 + δ 2 + |γ(t 0 ) -ω 2 |, we find L(γ |t ) = L(γ 0 |t ) + L(γ |t 0 ) -L([0, γ(t 0 )]) ≥ L(γ(t)) + |ω 2 | 2 + δ 2 -|ω 2 | holds for t ∈ [t 0 , 1], and hence, we obtain (3.3). Now, we shall construct (H s ) s∈[0,1] . Let ε > 0 be given. We assign the path P ε ′ t γ(t) (ε ′ t > 0) to each t ∈ [0, 1] and take a neighborhood U γ(t) of γ(t) and the deformation Q ε ′ t γ(t),ζ ′ (ζ ′ ∈ U γ(t) ) of P ε ′ t γ(t) constructed in Lemma 3.11. Then, we can cover [0, 1] by a finite number of intervals I j = [a j , b j ] (j = 1, 2, • • • , k) satisfying the following conditions: -The interior I • j of I j satisfies I • j 1 ∩ I • j 2 = Ø when |j 1 -j 2 | ≤ 1 and I j 1 ∩ I j 2 = Ø otherwise. -There exists t j ∈ I j such that t j < t j+1 for j = 1, • • • , k -1 and γ(I j ) ⊂ U γ(t j ) . Notice that, since U γ(t) is taken for each t ∈ [0, 1] so that it is contained in one of the charts U v (v ∈ V Ω ) or U e (e ∈ E Ω ), one of the followings holds: - γ(t j ) ∈ U v and γ(I j ) ⊂ U v (v ∈ V Ω ). -γ(t j ) ∈ ℓ e and γ(I j ) ⊂ U e (e ∈ E Ω ). We set ε ′ = min j {ε ′ t j | γ(t j ) / ∈ U (0,0) }. Then, P ε ′ γ(t j ) and its deformation Q ε ′ γ(t j ),ζ ′ (ζ ′ ∈ U γ(t j ) ) also satisfy the conditions in Lemma 3.10 and Lemma 3.11. Let J E ⊂ {1, • • • , k} denote the set of suffixes satisfying the condition that there exists e ∈ E Ω such that γ(t j ) ∈ ℓ e and let j 0 be the minimum of J E . Shrinking the neighborhood U γ(t) for each t ∈ [0, 1] at the first, we may assume without loss of generality that, -|γ(t) -γ(t j )| ≤ ε for t ∈ I j and j = 1, • • • , k, -if j, j + 1 ∈ J E , there exists an edge e ∈ E Ω such that γ(t j ), γ(t j+1 ) ∈ ℓ e . Recall that, from the construction of Q ε ′ ζ,ζ ′ , Q ε ′ γ(t j ),γ(t) = Q ε ′ γ(t j+1 ),γ(t) for t ∈ I j ∩ I j+1 except for the cases where there exists an edge e = (v, v ↑ ) ∈ E Ω of the type i) such that -γ(t j ) ∈ U e and γ(t j+1 ) ∈ U v ↑ , -γ(t j ) ∈ U v ↑ and γ(t j+1 ) ∈ U e . 
In the first case, the difference between Q ε ′ γ(t j ),γ(t) and Q ε ′ γ(t j+1 ),γ(t) is the part from ω t (v ↑ ) to γ(t), where ω t (v ↑ ) is the intersection point of C ε ′ ω(v ↑ ) and [ω(v ↑ ), γ(t)]: Let ω e,i (i = 0, • • • , m + 1) be the points on the line segment [ω(v ↑ ), ω(v)] satisfying the conditions (L e,i , ω e,i ) ∈ S Ω and L e,i < L e,i+1 , where L e,i := L(v ↑ ) + |ω e,i -ω(v ↑ )|. Then, the part of Q ε ′ γ(t j ),γ(t) from ω t (v ↑ ) to γ(t) is given by concatenating the arcs of C ε ′ ω e,i (i = 0, • • • , m + 1), the intervals of the line segment [ω(v ↑ ), ω(v)] and [ω t (v), γ(t)], where ω t (v) is the intersection point of C ε ′ ω(v) and [ω(v), γ(t)]. (See Figure 4 (a).) On the other hand, Q ε ′ γ(t j+1 ),γ(t) goes directly from ω t (v ↑ ) to γ(t). (See Figure 4 (d).) Now, let ω t i,+ (resp. ω t i,-) be the intersection point of C ε ′ ω e,i and [ω t (v ↑ ), ω t (v)] that is the closer to ω t (v) (resp. ω t (v ↑ )). While t moves on I j ∩ I j+1 , we first deform the part of Q ε ′ γ(t j ),γ(t) from ω t (v ↑ ) to ω t (v) to the line segment [ω t (v ↑ ), ω t (v)] by shrinking the part of Q ε ′ γ(t j ),γ(t) from ω t i,- to ω t i,+ (resp. from ω t i,+ to ω t i+1,-) to the line segment [ω t i,-, ω t i,+ ] (resp. [ω t i,+ , ω t i+1,-]) for each i. (See Figure 4 (b) and (c).) Then, further shrinking the polygonal line given by concatenating [ω t (v ↑ ), ω t (v)] and [ω t (v), γ(t)] to the line segment [ω t (v ↑ ), γ(t)], we obtain a continuous family of Ω-allowed paths Hs s∈[t j ,t j+1 ] satisfying the following conditions: -Hs = Q ε ′ γ(t j ),γ(s) when s ∈ [t j , t j+1 ] \ I j+1 , -Hs = Q ε ′ γ(t j+1 ),γ(s) when s ∈ [t j , t j+1 ] \ I j , -L Hs ≤ L Q ε ′ γ(t j ),γ(s) and Hs (1) = γ(s) when s ∈ I j ∩ I j+1 . ω ,0 ω ,1 ω ,2 γ( ) (a) (b) (c) (d) Figure 4: For the second case, we can also construct a continuous family of Ω-allowed paths Hs s∈[t j ,t j+1 ] satisfying the first and the second conditions above and -L Hs ≤ L Q ε ′ γ(t j+1 ),γ(s) and Hs (1) = γ(s) when s ∈ I j ∩ I j+1 . Then, we can continuously extend Hs to [0, 1] by interpolating it by Q ε ′ γ(t j ),γ(s) so that it satisfies (3.4) L Hs ≤ max j L Q ε ′ γ(t j ),γ(s) | s ∈ I j and Hs (1) = γ(s) for all s ∈ [0, 1]. Since I j 0 is taken so that |γ(t)γ(t j 0 )| ≤ ε holds on I j 0 , applying (3.3) with t 0 = t j 0 , we have the following estimates: L(γ(t)) + |ω 2 | 2 + δ 2 -|ω 2 | -ε ≤ L(γ |t ) for t ∈ [a j 0 , 1]. On the other hand, since γ(t) ∈ U (0,0) for t ∈ [0, a j 0 ], we find L Q ε ′ γ(t j ),γ(t) = L(γ(t)) holds for t ∈ I j and j < j 0 from the construction of Q ε ′ ζ,ζ ′ . Therefore, taking ε > 0 sufficiently small so that 3ε ≤ |ω 2 | 2 + δ 2 -|ω 2 |, we obtain the following estimates from Lemma 3.10 and (3.4): L Hs ≤ L(γ |s ) for s ∈ [0, 1]. Finally, from the construction of Hs , we find that Hs satisfies Hs = P ε ′ γ(s) for s = 0, 1. Since p Ω = p • q and p is isomorphic near 0, all the maps q : X Ω → X must coincide near 0 Ω , and hence, uniqueness of q follows from the uniqueness of the analytical continuation of q. Finally, X Ω is unique up to isomorphism because X Ω is an initial object in the category of Ω-endless Riemann surfaces. Supplement to the properties of X Ω Let O X denote the sheaf of holomorphic functions on a Riemann surface X and consider the natural morphism p * Ω : p -1 Ω O C → O X Ω induced by p Ω : X Ω → C . Since X Ω is simply connected, we obtain the following: Proposition 3.14. Let φ ∈ O C,0 . 
Then the followings are equivalent: i) φ ∈ O C,0 is Ω-continuable, ii) p * Ω φ ∈ O X Ω ,0 Ω can be analytically continued along any path on X Ω , iii) p * Ω φ ∈ O X Ω ,0 Ω can be extended to Γ(X Ω , O X Ω ). Therefore, we find p * Ω : RΩ ∼ -→ Γ(X Ω , O X Ω ). Notation 3.15. For L, δ > 0, using Π δ,L Ω of (2.7), we define a compact subset K δ,L Ω of X Ω by (3.5) K δ,L Ω := ζ ∈ X Ω | ∃γ ∈ Π δ,L Ω such that ζ = γ(1) . Notice that X Ω is exhausted by (K δ,L Ω ) δ,L>0 . Therefore, the family of seminorms • δ,L Ω (δ, L > 0) defined by f δ,L Ω := sup ζ∈K δ,L Ω | f (ζ)| for f ∈ Γ(X Ω , O X Ω ) induces a structure of Fréchet space on Γ(X Ω , O X Ω ). Definition 3.16. We introduce a structure of Fréchet space on RΩ by a family of seminorms • δ,L Ω (δ, L > 0) defined by φ δ,L Ω := |ϕ 0 | + p * Ω φ δ,L Ω for φ ∈ RΩ , where B( φ) = ϕ 0 δ + φ ∈ C δ ⊕ RΩ . Let Ω ′ be a d.f.s. such that Ω ⊂ Ω ′ . Since Π Ω ′ ⊂ Π Ω , X Ω is Ω ′ -endless. Therefore, Theorem 3.2 yields a morphism q : (X Ω ′ , p Ω ′ , 0 Ω ′ ) → (X Ω , p Ω , 0 Ω ), which induces a morphism q * : q -1 O X Ω → O X Ω ′ . Since q(K δ,L Ω ′ ) ⊂ K δ,L Ω , we have q * f δ,L Ω ′ ≤ f δ,L Ω for f ∈ Γ(X Ω , O X Ω ), and hence, φ δ,L Ω ′ ≤ φ δ,L Ω for φ ∈ RΩ . In view of Theorem 4.8 below, the product map RΩ × RΩ ′ → RΩ * Ω ′ is continuous and hence, when Ω * Ω = Ω, RΩ is a Fréchet algebra. 3.4 The endless Riemann surface associated with a d.d.f.s. In this section, we discuss the construction of the endless Riemann surfaces associated with an arbitrary d.d.f.s. Ω. Let us first define the skeleton of Ω: Definition 3.17. Let V Ω ⊂ ∞ n=1 (C × Z) n be the set of vertices v := ((ω 1 , σ 1 ), • • • , (ω n , σ n )) ∈ (C × Z) n that satisfy the conditions 1) and 2) in Definition 3.3 and 3') M j (v), L j (v), ω j ∈ S Ω for j = 2, • • • , n, with L j (v) := j-1 i=1 |ω i+1 -ω i | (j = 2, • • • , n), M j (v) :=        0 (j = 2), j-1 i=2 A i (v) + 2π(|σ i | -1) (j = 3, • • • , n), and A i (v) := |θ i | if θ i σ i ≥ 0, 2π -|θ i | if θ i σ i < 0, where θ i := arg ω i+1 -ω i ω i -ω i-1 is taken so that θ i ∈ (-π, π]. Let E Ω ⊂ V Ω × V Ω be the set of edges e = (v ′ , v) that satisfy one of the conditions i) ∼ iii) in Definition 3.3. We denote the directed tree diagram (V Ω , E Ω ) by Sk Ω and call it skeleton of Ω. Now, assigning a cut plane U v (resp. an open set U e ) to each v ∈ V Ω (resp. each e ∈ E Ω of type i)) defined by totally the same way with Section 3.1 and patching them as in Section 3.1, we obtain an initial object (X Ω , p Ω , 0 Ω ) in the category of Ω-endless Riemann surfaces associated with a d.d.f.s. Ω. We denote the lift of γ ∈ Π dv Ω on X Ω by γ. Estimates for the analytic continuation of iterated convolutions In this section, our aim is to prove the following theorem, which is the analytical core of our study of the convolution product of endlessly continuable functions. Theorem 4.1. Let δ, L > 0 be real numbers. Then there exist c, δ ′ > 0 such that, for every d.f.s. Ω such that Ω 4δ = Ø, for every integer n ≥ 1 and for every f1 , . . . , fn ∈ RΩ , the function 1 * f1 * • • • * fn (which is known to belong to RΩ * n ) satisfies (4.1) p * Ω * n 1 * f1 * • • • * fn (ζ) ≤ c n n! sup L 1 +•••+Ln=L p * Ω f1 δ ′ ,L 1 Ω • • • p * Ω fn δ ′ ,Ln Ω for ζ ∈ K δ,L Ω * n (with notation (3.5)). Using the Cauchy inequality, the identity d dζ (1 * f1 * • • • * fn ) = f1 * • • • * fn and the inverse Borel transform, one easily deduces the following Corollary 4.2. Let δ, L > 0 be real numbers. Then there exist c, δ ′ , L ′ > 0 such that, for every d.f.s. 
Ω such that Ω 4δ = Ø, for every integer n ≥ 1 and for every f1 , . . . , fn ∈ RΩ without constant term, the formal series f1 • • • fn (which is known to belong to RΩ * n ) satisfies f1 • • • fn δ,L Ω * n ≤ c n+1 n! f1 δ ′ ,L ′ Ω • • • fn δ ′ ,L ′ Ω . In fact, one can cover the case f1 ∈ RΩ 1 , . . . , fn ∈ RΩn with different d.f.s.'s Ω 1 , . . . , Ω n as well-see Theorem 4.8-, but we only give details for the case of one d.f.s. so as to lighten the presentation. Notations and preliminaries We fix an integer n ≥ 1 and a d.f.s. Ω. In view of Remark 2.12, without loss of generality, we can suppose that Ω coincides with its upper closure: (4.2) Ω = Ω. Let ρ > 0 be such that Ω 3ρ = Ø. We set (See [START_REF]Nonlinear analysis with resurgent functions[END_REF] for the notations and notions related to integration currents.) U := { ζ ∈ C | |ζ| < 3ρ }. As in [START_REF]Nonlinear analysis with resurgent functions[END_REF], our starting point will be Lemma 4.3. Let f1 , . . . , fn ∈ RΩ and β := (p * Ω f1 ) ζ 1 • • • (p * Ω fn ) ζ n dζ 1 ∧ • • • ∧ dζ n ,, where we denote by dζ 1 ∧ • • • ∧ dζ n the pullback by p ⊗n Ω : X n Ω → C n of the n-form dζ 1 ∧ • • • ∧ dζ n . Then 1 * f1 * • • • * fn (ζ) = D(ζ) # [∆ n ](β) for ζ ∈ U . Proof. This is just another way of writing the formula (4.4) 1 * f1 * • • • * fn (ζ) = ζ n ∆n f1 (ζs 1 ) • • • fn (ζs n ) ds 1 • • • ds n . See [START_REF]Nonlinear analysis with resurgent functions[END_REF] for the details. Notation 4.4. We set N (ζ) := ζ 1 , . . . , ζ n ∈ X n Ω | p Ω ζ 1 + • • • + p Ω ζ n = ζ for ζ ∈ C, (4.5) N j := ζ 1 , . . . , ζ n ∈ X n Ω | ζ j = 0 Ω for 1 ≤ j ≤ n. (4.6) γ-adapted deformations of the identity Let us consider a path γ : [0, 1] → C in Π Ω * n for which there exists a ∈ (0, 1) such that We now introduce the notion of γ-adapted deformation of the identity, which is a slight generalization of the γ-adapted origin-fixing isotopies which appear in [Sau15, Def. 5.1]. Definition 4.5. A γ-adapted deformation of the identity is a family (Ψ t ) t∈[a,1] of maps Ψ t : V → X n Ω , for t ∈ [a, 1], where V := D γ(a) (∆ n ) ⊂ X n Ω , such that Ψ a = Id, the map t, ζ ∈ [a, 1] × V → Ψ t ζ ∈ X n Ω is locally Lipschitz, and for any t ∈ [a, 1] and j = 1, . . . , n, (4.8) Ψ t V ∩ N γ(a) ⊂ N γ(t) , Ψ t V ∩ N j ⊂ N j (with the notations (4.5)-(4.6)). Let γ denote the lift of γ in X Ω starting at 0 Ω . The analytical continuation along γ of a convolution product can be obtained as follows: The following is the key estimate: Theorem 4.7. Let δ ∈ (0, ρ) and L > 0. Let γ ∈ Π δ,L Ω * n satisfy (4.7) and let (4.12) Proposition 4.6 ([Sau15]). If (Ψ t ) t∈[a,1] is a γ-adapted deformation of the identity, then (4.9) p * Ω * n 1 * f1 * • • • * fn γ(t) = Ψ t • D γ(a) # [∆ n ](β) for t ∈ [a, δ ′ (t) := ρ e -2 √ 2δ -1 L(γ| [a,t] ) , c(t) := ρ e 3δ -1 L(γ| [a,t] ) for t ∈ [a, 1]. Then there exists a γ-adapted deformation of the identity Proof that Theorem 4.7 implies Theorem 4.1. Let δ, L > 0. We will show that (4.1) holds with δ ′ := min δ, ρ e -4 √ 2(1+δ -1 L) , c := max 2ρ, ρ e 6(1+δ -1 L) , where ρ := 4 3 δ. Let Ω be a d.f.s. such that Ω 4δ = Ø. Without loss of generality we may suppose that Ω = Ω. (Ψ t ) t∈[a,1] such that (4.13) Ψ t • D γ(a) (∆ n ) ⊂ L 1 +•••+Ln=L(γ |t ) K δ ′ (t),L 1 Ω × • • • × K δ ′ (t) In view of formula (4.4), the inequality (4.1) holds for ζ ∈ K δ,L Ω * n ∩ U , where U is defined by (4.3), because the Lebesgue measure of ∆ n is 1/n!. Let ζ ∈ K δ,L Ω * n \ U . 
We can write ζ = γ(1) with γ ∈ Π δ,L Ω * n , is not C 1 , then we use a sequence of paths γ k ∈ Π δ/2,L+δ Ω * n such that γ k | [0,a] = γ| [0,a] , γ k (1) = γ(1), γ k | [a,1] is C 1 and sup t∈[a,1] |γ(t) -γ k (t)| → 0 as k → ∞; (4.15) p * Ω 1 * f1 * • • • * fn (ζ) ≤ c n n! sup L 1 +•••+Ln=L p * Ω 1 f1 δ ′ ,L 1 Ω 1 • • • p * Ωn fn δ ′ ,Ln Ωn for ζ ∈ K δ,L Ω . Proposition 4.10. Let ζ = L (ζ 1 ), . . . , L (ζ n ) ∈ V , i.e. ζ j = s j γ(a) with (s 1 , . . . , s n ) ∈ ∆ n . We define v := (|ζ 1 |, ζ 1 ), . . . , (|ζ n |, ζ n ) ∈ (R × C) n and Γ = (γ 1 , . . . , γn ) : [0, 1] → (R × C) n by t ∈ [0, a] ⇒ Γ(t) := t a (|ζ 1 |, ζ 1 ), . . . , t a (|ζ n |, ζ n ) , t ∈ [a, 1] ⇒ Γ(t) := Φ a,t ( v ). Then, for each j ∈ {1, . . . , n}, γj is a path [0, 1] → R × C whose C-projection γ j belongs to Π Ω , and the formula (4.21) Ψ t ζ := γ 1 (t), . . . , γ n (t) ∈ X n Ω for t ∈ [a, 1]. defines a γ-adapted deformation of the identity. Proof. We first prove that γ 1 , . . . , γ n ∈ Π Ω . In view of (2.5), we just need to check that, for each j ∈ {1, . . . , n}, the path γj = (λ j , γ j ) satisfies v ∈ (R × C) n | v j = v * j } is invariant by the maps Φ t 1 ,t 2 (because η(v j ) = 0 implies that X j = 0 on this submanifold), in particular Φ t,a (R × C) n \ M n Ω ⊂ (R × C) n \ M n Ω , whence (4.23) follows because Φ a,t and Φ t,a are mutually inverse bijections. Therefore the paths γ 1 , . . . , γ n are Ω-allowed and have lifts in X Ω starting at 0 Ω , which allow us to define the maps Ψ t by (4.21) on V . We now prove that (Ψ t ) t∈[a,1] is a γ-adapted deformation of the identity. The map (t, v ) → Ψ t ( v ) is locally Lipschitz because the flow map (4.20) is locally Lipschitz, and Ψ a = Id because Φ a,a is the identity map of (R × C) n ; hence, we just need to prove (4.8). We set which can be itself checked as follows: consider first an arbitrary initial condition v ∈ (R × C) n and the corresponding solution v(t) := Φ a,t ( v ), and let v 0 (t) := v 1 (t) + We now show that the γ-adapted deformation of the identity that we have constructed in Proposition 4.10 meets the requirements of Theorem 4.7. Ñ (w) := (v 1 , . . . , v n ) ∈ (R × C) n | v 1 + • • • + v n = w for w ∈ R × C, Ñj := (v 1 , . . . , v n ) ∈ (R × C) n | v j = (0, 0) for 1 ≤ j ≤ n. In view of (2.6)-(2.7) and (3.5), the inclusion (4.13) follows from Φ a,t Ṽ ⊂ L 1 +•••+Ln=L(γ |t ) M L 1 ,δ ′ (t) Ω × • • • × M Ln,δ ′ (t) Ω for all t ∈ [a, 1], with δ ′ (t) as in (4.12). Proof of Lemma 4.11. Let us consider an initial condition v ∈ Ṽ and the corresponding solution v(t) := Φ a,t ( v ), whose components we write as v j (t) = λ j (t), ζ j (t) for j = 1, . . . , n. We also have v j (a) = s j γ(a) for some (s 1 , . . . We first notice that . . . X n := η n (v n ) D(t, v ) γ′ (t), where η j (v) := dist v, {(0, 0)} ∪ S Ω j , D t, v := η 1 (v 1 ) + • • • + η n (v n ) + |γ(t) -(v 1 + • • • + v n )|. The case of endless continuability w.r.t. bounded direction variation In this subsection, we extend the estimates of Theorem 4.1 to the case of a d.d.f.s.. Let us fix an arbitrary d.d.f.s. Ω. We fix ρ > 0 such that Ω 3ρ,M = Ø for every M ≥ 0. We consider a path γ : [0, 1] → C in Π δ,M,L Ω * n , with arbitrary δ ∈ (0, ρ) and L > 0, satisfying the following condition: (see proof of Theorem 4 in [START_REF]Nonlinear analysis with resurgent functions[END_REF] for the detail). Since F (z, w) ∈ RΩ {w}, we obtain from Corollary 4.2 the following estimates: For every δ, L > 0, there exist δ ′ , L ′ , C > 0 such that F k δ ′ ,L ′ Ω ≤ C k+1 and Hm δ,L Ω * ∞ ≤ k≥1 (m + k -1)! 
m!k! n 1 +•••+n k =m+k-1 n 1 ,••• ,n k ≥1 C k+1 k! Fn 1 δ ′ ,L ′ Ω • • • Fn k δ ′ ,L ′ Ω ≤ k≥1 2 m+k n 1 +•••+n k =m+k-1 n 1 ,••• ,n k ≥1 C m+3k k! ≤ k≥1 2 2m+3k-2 C m+3k k! ≤ e 8C 3 (4C) m . This yields H(z, w) ∈ RΩ * ∞ {w}, whence, H(z, F0 (z)) ∈ RΩ * ∞ . Definition 1.2. A formal series φ(z) = ∞ j=0 ϕ j z -j ∈ C[[z -1 ]] is said to be resurgent if φ(ζ) = ∞ j=1 ϕ j ζ j-1 ) ψ(ζξ) dξ for ζ in the intersection of the discs of convergence of φ and ψ. Let us illustrate this point on two examples. -The equation dϕ dz λϕ = b(z), where b(z) is given in z -1 C{z -1 } and λ ∈ C * , has a unique formal solution in C[[z -1 ]], namely φ(z) := -λ -1 Id -λ -1 d dz -1 b, whose Borel transform is φ(ζ) = -(λ + ζ) -1 b(ζ); here, the Borel transform b(ζ) of b(z) is entire, hence φ is meromorphic in C, with at worse a pole at ζ = -λ and no singularity elsewhere. Therefore, heuristically, for a nonlinear equation dϕ dz λϕ = b 0 n+1ω n+1 | gives the minimum of |ω ′ n+1ω n+1 | for such vertices and assign an open line segment ℓ v ′ →v := {ω n+1 + s(ω we obtain a lift of γ| [0,a] by concatenating γ |b and γ| [b,a] in the coordinate. It contradicts the maximality of I, and hence, I = [0, 1]. Figure 3: . by totally the same discussion. Notice that, since the sequence v ′ in the proof of Lemma 3.10 is uniquely determined by ζ ∈ X Ω , the choice of the path P ε ′ ζ depends only on the radius ε ′ of the circles C ε ′ ω ′ j Further, from the construction of the path P ε ′ ζ , we can extend Lemma 3.10 as follows: Lemma 3.11. For all ε > 0 and ζ ∈ X Ω , there exist a neighborhood U ζ of ζ and, for ε ′ small enough, a continuous deformation For each ζ ∈ U , the path γ ζ : t ∈ [0, 1] → tζ is Ω-allowed and hence has a lift γ ζ on X Ω starting at 0 Ω . Then L (ζ) := γ ζ (1) defines a holomorphic function on U and induces an isomorphism(4.3) L : U ∼ -→ U , where U := L (U ) ⊂ X Ω , such that p Ω • L = Id.Let us denote by ∆ n the n-dimensional simplex∆ n := { (s 1 , . . . , s n ) ∈ R n ≥0 | s 1 + • • • + s n ≤ 1 }with the standard orientation, and by [∆ n ] ∈ E n (R n ) the corresponding integration current. For ζ ∈ U , we define a map D(ζ) on a neighbourhood of ∆ n in R n by D(ζ) : s = (s 1 , . . . , s n ) → D(ζ, s ) := L (s 1 ζ), . . . , L (s n ζ) ∈ U n ⊂ X n Ω and denote by D(ζ) # [∆ n ] ∈ E n (X n Ω ) the push-forward of [∆ n ] by D(ζ). (4.7) γ(t) = t a γ(a) for t ∈ [0, a], |γ(a)| = ρ, γ| [a,1] is C 1 . for k large enough one has γ k (1) = ζ, thus one then can replace γ by γ k . Hence we can assume that (4.7) holds. Let (Ψ t ) [t∈[a,1]] denote the γ-adapted deformation of the identity provided by Theorem 4.7, possibly with (δ, L) replaced by (δ/2, L + δ). Proposition 4.6 shows that, for f1 , . . . , fn ∈ RΩ , p * Ω * n 1 * f1 * • • • * fn (ζ) can be written as (4.10) with t = 1, and (4.13)-(4.14) then show that (4.1) holds because δ ′ (t) ≥ δ ′ and c(1) ≤ c. Therefore, (4.1) holds on K δ,L Ω * n \ U too. In fact, in view of the proof of Theorem 4.7 given below, one can give the following generalization of Theorem 4.1: Theorem 4.8. Let δ, L be positive real numbers. Then there exist positive constants c and δ ′ such that, for every integer n ≥ 1 and for all d.f.s. Ω 1 , . . . , Ω n with Ω j,4δ = Ø (j = 1, • • • , n) and f1 ∈ RΩ 1 , . . . , fn ∈ RΩn , the function 1 * f1 * • • • * fn belongs to RΩ , where Ω := Ω 1 * • • • * Ω n , and ( Let j ∈ {1, . . . , n}. 
The second part of (4.8) follows from the inclusionΦ a,t Ñj ⊂ Ñj for t ∈ [a, 1],which stems from the fact that the jth component of the vector field (4.19) vanishes on Ñj (because η (0, 0) = 0).Sinceζ 1 + • • • + ζ n = γ(a) ⇒ |ζ 1 | + • • • + |ζ n | = |γ(a)| for any (ζ 1 , . . . , ζ n ) ∈ V, the first part of (4.8) follows from the inclusion Φ a,t Ñ γ(a) ⊂ Ñ γ(t) for t ∈ [a, 1], ), whence γ(t)v 0 (t) ≤ γ(a)v 0 (a) exp δ -1 √ 2 L(γ| [a,t] )for all t; now, if v ∈ Ñ γ(a) , we find v 0 (a) = γ(a), whence v 0 (t) = γ(t) for all t. Lemma 4.11. Let Ṽ := s 1 γ(a), . . . , s n γ(a) | (s 1 , . . . , s n ) ∈ ∆ n ∈ (R × C) n . Then (4.24) , s n ) ∈ ∆ n , whence λ 1 (a) + • • • + λ n (a)≤ |γ(a)| = ρ and |v j (a)| = ρ for j = 1, . . . , n. Notation 4. 14 . 14 Given δ, M, L > 0, we denote by Π δ,M,L Ω the set of all paths γ ∈ Π dv Ω such that V (γ) ≤ M , L(γ) ≤ L and inf t∈[0,t * ] dist 1 γ dv (t), S Ω ≥ δ, where γ dv is as in (2.13) and dist 1 is the distance associated with the norm• 1 defined on R 2 × C by (µ, λ, ζ) 1 := |µ| + |λ| 2 + |ζ| 2 . (4.29)There exists a ∈ (0, 1) such that γ(t) = t a γ(a) for t ∈ [0, a] and |γ(a)| = ρ.Then, for t ∈ [0, 1] and v ∈ R × C, we setη(t, v) := dist 1 (V (γ |t ), v), R ×{(0, 0)} ∪ S Ω . and, for v = (v 1 , • • • , v n ) ∈ (R × C) n , D t, v := η(t, v 1 ) + • • • + η(t, v n ) + |γ(t) -(v 1 + • • • + v n )|. See the proof of [Sau15, Prop. 5.2]. Note that the right-hand side of (4.9) must be interpreted as (4.10) ∆n (p * Ω f1 ) ζ t 1 • • • (p * Ω fn ) ζ t n det ∂ζ t i ∂s j 1≤i,j≤n ds 1 • • • ds n with the notation (4.11) (each function ζ t i is Lipschitz on ∆ n and Rademacher's theorem ensures that it is differentiable ζ t 1 , . . . , ζ t n := Ψ t • D γ(a) , ζ t i := p Ω • ζ t i for 1 ≤ i ≤ n almost everywhere on ∆ n , with bounded partial derivatives). 1] for any f1 , . . . , fn ∈ RΩ , with β as in Lemma 4.3. Proof. assuming without loss of generality that the first two conditions in (4.7) hold. If the third condition in (4.7) does not hold, i.e. if γ|[a,1] 4.22)t ∈ [0, 1] ⇒ γj (t) ∈ M Ω and dλ j /dt = |dγ j /dt|. Since ζ j ∈ U and γ j (t) = t a ζ j for t ∈ [0, a], the property (4.22) holds for t ∈ [0, a]. For t ∈ [a, 1], the second property in (4.22) follows from the fact that the R-projection of X j (t, v ) ∈ R × C coincides with the modulus of its C-projection.Since γ1 (t), . . . , γn (t) = Φ a,t γ1 (a), . . . , γn (a) and the first property in (4.22) holds at t = a, the first property in (4.22) for t ∈ [a, 1] is a consequence of the inclusion (4.23) Φ a,t M n Ω ⊂ M n Ω , which can itself be checked as follows: suppose v * ∈ (R × C) n \ M n Ω , then it has at least one component v * j in S Ω and, in view of the form of the vector field (4.19), the submanifold { • • • + v n (t); then (4.19) shows that d dt γ(t)v 0 (t) = γ(t)v 0 (t) D(t, v(t)) γ′ (t), hence the Lipschitz function h(t) := γ(t)v 0 (t) has an almost everywhere defined derivative which satisfies |h ′ (t)| ≤ d dt γ(t)v 0 (t) ≤ 1 D(t, v(t)) |γ ′ (t)| h(t), which is ≤ δ -1 √ 2 |γ ′ (t)| h(t) by (4.17 Let j ∈ {1, . . . , n}. Since η is 1-Lipschitz, we can define a Lipschitz function on [a, 1] by the formula h j (t) := η v j (t) , and its almost everywhere defined derivative satisfies |γ ′ (t)| ≤ g(t)h j (t), where g(t):= δ -1 √ 2 |γ ′ (t)|. (a) e -δ -1 √ 2 L(γ| [a,t] ) ≤ η v j (t) ≤ η v j (a) e δ -1 √ 2 L(γ| [a,t] ) for all t ∈ [a, 1].Let us now fix t ∈ [a, 1]. We conclude by distinguishing two cases. 
, Gronwall's lemma yields V (t) ≤ V (a) e 3δ -1 L(γ| [a,t] ) , and hence, sinceV (a) = ρ n j=1 |s js ′ j |, we have (4.28) V (t) ≤ ρ e 3δ -1 L(γ| [a,t] )Then, (4.28) entails via Rademacher's theorem that the following estimate holds a.e. on ∆ n : ≤ ρ e 3δ -1 L(γ| [a,t] ) .Remark 4.13. Theorem 4.8 is verified by replacing the vector field (4.19) by n j=1 |γ |h ′ |λ ′ j (t)| = n j=1 η v j (t) D(t, v(t)) j (t)| ≤ |v ′ j (t)| = h j (t) D(t, v(t)) a g(τ ) dτ = δ -1 √ t 2 L(γ| [a,t] ), we deduce that j=1 n i=1 ∂ζ t i ∂s j Finally, (4.14) follows from the inequality Since (4.26) det ∂ζ t i ∂s j 1≤i,j≤n ≤ n j=1 X(t, v ) = η v j Thereforen X 1 := η 1 (v 1 ) |s j -s ′ j |. n i=1 ∂ζ t i ∂s j D(t, v ) γ′ (t) . ′ (t)| ≤ |γ ′ (t)|, hence λ 1 (t) + • • • + λ n (t) ≤ λ 1 (a) + • • • + λ n (a) + t a |γ ′ | ≤ L(γ |t ). Therefore, we just need to show that (4.25) dist v j (t), S Ω ≥ δ ′ (t) for j = 1, . . . , n. Suppose first that η(v j (a)) ≥ ρ e - √ 2 δ -1 L(γ| [a,t] ) . Then the first inequality in (4.26) yields η(v j (t)) ≥ δ ′ (t), and since dist v j (t), S Ω ≥ η(v j (t)) we get (4.25). Graduate School of Sciences, Hiroshima University. 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan. 2 IMCCE, CNRS-Observatoire de Paris, France. Acknowledgements. This work has been supported by Grant-in-Aid for JSPS Fellows Grant Number 15J06019, French National Research Agency reference ANR-12-BS01-0017 and Laboratoire Hypathie A*Midex. The authors thank Fibonacci Laboratory (CNRS UMI 3483), the Centro Di Ricerca Matematica Ennio De Giorgi and the Scuola Normale Superiore di Pisa for their kind hospitality. Proof of Theorem 4.7 We suppose that we are given n ≥ 1, ρ > 0, a d.f.s. Ω such that Ω = Ω and Ω 3ρ = Ø, and γ ∈ Π δ,L Ω * n satisfying (4.7) with δ ∈ (0, ρ) and L > 0. We set γ(t) := L(γ |t ), γ(t) and define functions by the formulas (4.16) η(v) := dist v, {(0, 0)} ∪ S Ω , D t, v := η(v 1 ) + where | • | is the Euclidean norm in R × C ≃ R 3 . The assumptions Ω = Ω and γ ∈ Π δ,L Ω * n yield Lemma 4.9. The function D satisfies (4.17) Proof. Let (t, v ) ∈ [a, 1] × (R × C) n . For each j ∈ {1, . . . , n}, pick u j ∈ {(0, 0)} ∪ S Ω so that η(v j ) = |v ju j |, and let Either all of the u j 's equal (0, 0), in which case u = (0, 0) too, or u = (λ, ω) is a non-trivial sum of at most n points of the form u j = (λ j , ω j ) ∈ S Ω , in which case we have in fact ω j ∈ Ω λ j because of Lemma 2.4 and the assuption Ω = Ω, hence (2.12) then yields ω ∈ Ω * n λ . We thus find Otherwise, u ∈ S Ω * n and (4.18) shows that D t, v ≥ δ because γ ∈ Π δ,L Ω * n . Since D never vanishes, we can define a non-autonomous vector field . . . The functions X j : [a, 1] × (R × C) n → R × C are locally Lipschitz, thus we can apply the Cauchy-Lipschitz theorem on the existence and uniqueness of solutions to differential equations and get a locally Lipschitz flow map (value at time t of the unique maximal solution to d v/dt = X(t, v ) whose value at time t * is v ). We construct a γ-adapted deformation of the identity out of the flow map as follows: ) . Then the second inequality in (4.26) yields and we are done. Only the inequality (4.14) remains to be proved. We first show the following: Lemma 4.12. For any t ∈ [a, 1] and u, v ∈ (R × C) n , the vector field (4.19) satisfies Proof of Lemma 4.12. We rewrite X j (t, u ) -X j (t, v ) as follows: Then, summing up |X j (t, u ) -X j (t, v )| in j, we obtain (4.27) from the inequality n j=1 η(v j ) ≤ D(t, v ). We conclude by deriving the inequality (4.14) from Lemma 4.12. 
We use the notation (4.11) to define ζ t 1 , . . . , ζ t n : ∆ n → C, and we now define v t j : ∆ n → R × C for t ∈ [a, 1] by the formulas v a j ( s ) := s j γ(a) and We obtain from (4.17) and (4.27) the following estimate: Choosing (µ j , u j ) ∈ R ×{(0, 0)} ∪ S Ω so that η(t, v j ) = (V (γ |t ), v j ) -(µ j , u j ) 1 for each j and using (µ We can thus define a map (t . . . ) be the flow of (4.30) with the initial condition v a j := (|γ(a)|s j , γ(a)s j ) with s ∈ ∆ n . Since γ′ (t), η(t, v j ) and D(t, v ) are Lipschitz continuous on [a, 1] × (R × C) n , we find by Rademacher's theorem that dζ t j /dt is differentiable a.e. on [a, 1] and satisfies when s j = 0. Since η(v t j ) and D(t, v t ) are real valued functions, we have Therefore, the following holds for every t ∈ [a, 1]: Arguing as for Theorem 4.1, we obtain Theorem 4.15. Let δ, L, M > 0 be real numbers. Then there exist c, δ ′ > 0 such that, for every d.d.f.s. Ω such that Ω 4δ,M = Ø (M ≥ 0), for every integer n ≥ 1 and for every f1 , . . . , fn ∈ R dv Ω , the function 1 * f1 * • • • * fn belongs to R dv Ω * n and satisfies where the seminorm Applications In this section, we display some applications of our results of Section 4. We first introduce convergent power series with coefficients in RΩ : Definition 5.1. Given Ω a d.f.s. and r ≥ 1, we define RΩ {w 1 , • • • , w r } as the space of all such that, for every δ, L > 0, there exists a positive constant C satisfying Fk where |k| := k 1 + • • • + k r (with the notation of Definition 3.16 for • δ,L Ω ). We can now deal with the substitution of resurgent formal series in a context more general than in Theorem 1.3. Theorem 5.2. Let r ≥ 1 be an integer and let Ω 0 , . . . , Ω r be d.f.s. Then for any F (w 1 , . . . , w r ) ∈ RΩ 0 {w 1 , • • • , w r } and for any φ1 , . . . , φr ∈ C[[z -1 ]] without constant term, one has where Proof. Since the family f.s. satisfies the conditions in Theorem 4.8 for sufficiently small δ > 0, for every L > 0, there exist δ Therefore, since F (w 1 , . . . , w r ) ∈ RΩ 0 {w 1 , • • • , w r }, we find that F ( φ1 , . . . , φr ) converges in RΩ 0 * Ω * ∞ and defines an Ω 0 * Ω * ∞ -resurgent formal series. Notice that, in view of Theorem 2.13, Theorem 1.3 is a direct consequence of Theorem 5.2. Next, we show the following implicit function theorem for resurgent formal series:
72,363
[ "741642" ]
[ "478911", "541700" ]
01756689
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756689/file/Rapport-CSTwiem.pdf
Wiem Housseyni, Maryline Chetto email: maryline.chetto@univ-nantes.fr, wiem.housseyni@univ-nantes.fr ARTE: a Simulation Tool for Reconfigurable Energy Harvesting Real-time Embedded Systems This report presents ARTE, a new free Java-based software tool that we have developed for the simulation and evaluation of reconfigurable energy harvesting real-time embedded systems. It is designed to generate, compare, and evaluate different real-time scheduling strategies with respect to both schedulability performance and energy efficiency in the presence of external, unpredictable reconfiguration scenarios. A reconfiguration scenario is defined as an unpredictable event from the environment, such as the addition or removal of software tasks, or an increase or decrease of the harvested power rate. The occurrence of such events may drive the system towards an overload state. For this purpose, ARTE provides a task set generator, a reconfiguration scenario generator, and a simulator. It also offers the designer the possibility to generate and run simulations for specific systems. In the current version, ARTE can simulate the execution of periodic, independent and (m,k)-firm constrained task sets on a monoprocessor architecture. Energy consumption is treated as a scheduling parameter in the same manner as the Worst Case Execution Time (WCET). Introduction For decades, the literature has shown a substantial interest in the real-time scheduling problem, and several approaches and algorithms have been proposed over the years to optimize the scheduling of real-time tasks on single and multiple processor systems. The energy constraint has emerged as a major issue in the design of such systems, even though considerable research effort has already been devoted to it. Research works have focused either on dynamic energy-aware algorithms that reduce the system energy consumption, or on energy harvesting technology. In order to extend the lifespan of such systems and to achieve energy autonomy, energy harvesting technologies have recently attracted tremendous interest. Energy scavenging, i.e. harvesting from renewable sources such as photovoltaic cells and piezoelectric vibrations, has emerged as a new alternative to ensure sustainable autonomy and perpetual operation of the system. By the same token, the literature has shown substantial interest in energy-aware and power management scheduling for real-time systems. Still, there is ample scope for research: although uniprocessor real-time scheduling for energy harvesting systems is well studied, scheduling techniques for reconfigurable energy harvesting real-time systems are not yet mature enough to be as applicable or as close to optimal as currently available uniprocessor real-time scheduling techniques. Nowadays, new criteria such as energy efficiency, high performance and flexible computing call for a new generation of real-time systems. As a result, such systems need to solve a number of problems that have not been addressed so far by traditional real-time systems. Reconfigurable systems are a solution that provides higher energy efficiency together with high performance and flexibility.
In recent years, a substantial number of research works from both academia and industry have been made to develop reconfigurable control systems [START_REF] Gharbi | Functional and operational solutions for safety reconfigurable embedded control systems[END_REF], [START_REF] Da Silva | Modeling of reconfigurable distributed manufacturing control systems[END_REF]. Reconfiguration is usually performed in response to both user requirements and dynamic changes in its environment such as unpredictable arrival of new tasks, and hardware or software failures. Some examples of such systems are multi-robot systems [START_REF] Chen | Combining re-allocating and rescheduling for dynamic multi-robot task allocation[END_REF], and wireless sensor networks [START_REF] Grichi | Rwin: New methodology for the development of reconfigurable wsn[END_REF]. From the literature we can drive different definitions of a reconfigurable system. The authors of [START_REF] Grichi | Rocl: New extensions to ocl for useful verification of flexible software systems[END_REF] define a reconfiguration of a distributed system as any addition/ removal/update of one/more software-hardware elements. In this work, we define a reconfiguration as a dynamic operation that offers to the system the capability to adjust and adapt its behavior i.e., scheduling policy, power consumption, according to environment and the fluctuating behavior of renewable source, or to modify the applicative functions i.e., add-remove-update software tasks. Almost of embedded systems are real-time constrained. A real-time system involves a set of tasks where each task performs a computational activity according to deadline constraints. The main purpose of a real-time system is to produce not only the required results but also within strict time constraints. In order to check whether a set of tasks respects its temporal constraints when a scheduling algorithm is used or to evaluate the efficiency of a new approach, software simulation against other algorithms is considered as a valid comparison technique and it is commonly used in the evaluation of real-time systems. Over the last years, several simulation tools have been ported MAST [START_REF] Harbour | Mast: Modeling and analysis suite for real time applications[END_REF], CHEDDAR [START_REF] Singhoff | Cheddar: a flexible real time scheduling framework[END_REF], STORM [START_REF] Urunuela | Storm a simulation tool for real-time multiprocessor scheduling evaluation[END_REF], FORTAS [START_REF] Courbin | Fortas: Framework for real-time analysis and simulation[END_REF] and YARTISS [START_REF] Chandarli | YARTISS: A Generic, Modular and Energy-Aware Scheduling Simulator for Real-Time Multiprocessor Systems[END_REF]. However, none of these approaches provide support for reconfigurable based applications, yet. More recently, new simulation tools are proposed for reconfigurable computing systems Reconf-Pack [START_REF] Gammoudi | Reconf-pack: A simulator for reconfigurable battery-powered real-time systems[END_REF]. However, this attempt do not support energy harvesting requirements. New challenges raised by reconfigurable real-time embedded energy harvesting systems in terms of reconfigurability, scheduling, and energy harvesting management. Indeed, in such a context, it is very difficult to evaluate and compare scheduling algorithms on their schedulability performance as well as energy efficiency. 
Unlike any prior work, we formulate and solve the challenging problem of scheduling real-time applications on reconfigurable energy-harvesting embedded system platforms. Based on these motivations, we investigate in this report the challenges and viability of designing an open and flexible simulation tool able to generate and simulate accurately the behavior of such systems. In this report, we introduce ARTE, a new simulation tool for reconfigurable energy harvesting real-time embedded systems, which provides various functions to simulate the scheduling process of real-time task sets and their temporal behavior when a scheduling policy is used. It provides the classical real-time scheduling policy EDF, the optimal scheduling algorithm for energy harvesting systems EDH, EDF scheduling for (m,k)-constrained task sets, and a new scheduling policy which is an extension of EDH for (m,k)-firm constrained task sets EDH-MK, and finally a new hierarchical approach Reconf-Algorithm. It implements also classical and new feasibility tests for both real-time and energy harvesting requirements. The main aim of this research work is to guarantee a feasible execution in the whole system in the presence of unpredictable events while satisfying a quality of service (QoS) measured first in term of the percentage of satisfied deadline, second the percentage of satisfied deadline while considering the degree of importance of tasks, and finally the overhead introduced by the proposed approach. The remainder of this report is as follows: we review related works and examples of real-time simulators in Section 2. Section 3 presents a background about the EDF, EDH scheduling algorithm, and the (m,k)-firm model. In section 4 we detail the proposed new scheduling approach for reconfigurable energy harvesting real-time systems. Then in Section 5 we present the various functionalities of our simulation tool. A case study is given in Section 6. Finally, we discuss our future work in Section 7 and Section 8 brings a conclusion to this report. Research Aims New challenges raised by reconfigurable real-time embedded energy harvesting systems in terms of reconfigurability, scheduling, and energy harvesting management. Such systems work in dynamic environment where unpredictable events may occur arrival-remove of software tasks or increase-decrease of power rate. However, when unpredictable events from the environment occur, the system may evolve towards an unfeasible state (processor/energy overload) and the actual operation mode is no longer optimal. For this aim, the main focus of this research is on how to achieve system feasibility while satisfying a graceful QoS degradation. For this purpose we define three operating modes: -Normal mode: where all the tasks in the system execute 100% of their instances while meeting all the deadlines -Degradation mode level 1: This is the case where the normal mode is not feasible. The K least important tasks execute in degraded mode according to the model (m,k)-firm and other tasks execute normally. The schedulability test is performed by considering iteratively the tasks according to their importance. -Degradation mode level 2. This is the case where the degradation mode level 1 is not feasible. Abandonable tasks are gradually eliminated by increasing importance. When a system executing under an operating mode and an external event occurs it is imperative to verify the schedubility test. 
We identify three cases: -The system may continue to execute in the actual operating mode, -the system may be executed under degraded mode, -the system may executed under normal mode. The quality of service (QoS) is measured in term of: -The percentage of satisfied deadline, -the percentage of satisfied deadline while considering the degree of importance of tasks, -the overhead introduced by the proposed approach. Related Works In this section we outline the existing works related to simulation of the scheduling of real-time tasks. There are a lot of tools to test and visualize the temporal and execution behavior of real-time systems, and they are divided mainly into two categories: the execution analyzer frameworks and the simulation software. MAST [START_REF] Harbour | Mast: Modeling and analysis suite for real time applications[END_REF] is a modeling and analysis suite for real-time applications that is developed in 2000. MAST is an event-driven scheduling simulator that permits modeling of distributed real-time systems and offers a set of tools to e.g. test their feasibility or perform sensitive analysis. Another known simulator is Cheddar [START_REF] Hamdaoui | A dynamic priority assignment technique for streams with (m, k)-firm deadlines[END_REF][START_REF] Bernat | Combining (/sub m//sup n/)-hard deadlines and dual priority scheduling[END_REF] which is developed in 2004 and it handles the scheduling of real-time tasks on multiprocessor systems. It provides many implementations of scheduling, partitioning and analysis of algorithms, and it comes with a friendly Graphical User Interface (GUI). Unfortunately, no API documentation is available to help with the implementation of new algorithms and to facilitate its extensibility. Moreover, Cheddar is written in Ada programming language [START_REF] Mccormick | Building parallel, embedded, and real-time applications with Ada[END_REF] which is used mainly in embedded systems and it has strong features such as modularity mechanisms and parallel processing. Ada is often the language of choice for large systems that require real-time processing, but in general, it is not a common language among developers. We believe that the choice of Ada as the base of Cheddar reduces the potential contributions to the software from average developers and researchers. Finally, STORM [START_REF] Urunuela | Storm a simulation tool for real-time multiprocessor scheduling evaluation[END_REF] FORTAS [START_REF] Courbin | Fortas: Framework for real-time analysis and simulation[END_REF], and YARTISS [START_REF] Chandarli | YARTISS: A Generic, Modular and Energy-Aware Scheduling Simulator for Real-Time Multiprocessor Systems[END_REF] are tools which are written in Java. In 2009, STORM is released and it is described as a simulation tool for Real time multiprocessor scheduling. It has modular architectures (both software and hardware) which simulate the scheduling of task sets on multiprocessor systems based on the rules of a chosen scheduling policy. The specifications of the simulation parameters and the scheduling policies are modifiable using an XML file. However, the simulator tool lacks a detailed documentation and description of the available scheduling methods and the features of the software. On the other hand, FORTAS is a real-time simulation and analysis framework which targets uniprocessor and multiprocessor systems. 
It is developed mainly to facilitate the comparison process between the different scheduling algorithms, and it includes features such as task generators and computation of results of each tested and compared scheduling algorithm. FORTAS represents valuable contributions in the effort towards providing an open and modular tool. Unfortunately, it seems to suffer from the following issues: its development is not open to other developers for now, we can only download .class files, no documentation is yet provided and it seems that no new version has been released to public since its presentation in [START_REF] Courbin | Fortas: Framework for real-time analysis and simulation[END_REF]. More recently, YARTISS is proposed as a modular and extensible tool. It is a real-time multiprocessor scheduling simulator which provides various functions to simulate the scheduling process of real-time task sets and their temporal behavior when a scheduling policy is used. Functionalities of YARTISS: 1) simulate a task set on one or several processors while monitoring the system energy consumption, 2) concurrently simulate a large number of tasksets and present the results in a user friendly way that permits us to isolate interesting cases, and 3) randomly generate a large number of task sets. However, none of these simulation tools provide support for unpredictable reconfiguration scenarios yet. To date, only a few recent works target the real-time scheduling issue in reconfigurable systems. In [START_REF] Gammoudi | Reconf-pack: A simulator for reconfigurable battery-powered real-time systems[END_REF] a simulator tool is proposed for Reconfigurable Battery-Powered Real-Time Systems Reconf-Pack. Reconf-Pack is a simulation tool for analyzing a reconfiguration and applying the proposed strategy for real-time systems. It is based upon another tool Task-Generator which generates random tasks. According to the state of the system after a reconfiguration, Reconf-Pack calculates dynamically a deterministic solution. Moreover, it compares the pack-based solutions to related works. However, it seems to suffer from the following issues: its development is not open to other developers for now, and isn't available for download. During the development of ARTE, we learned from those existing tools and we included some of their features in addition to others of our own. Our aim is to provide a simulation tool for reconfigurable energy harvesting real-time systems that is easily it can be used to generate, compare, simulate the scheduling of real-time tasks on reconfigurable energy harvesting real-time systems. Background This section gives a background about first the EDF scheduling algorithm, then EDH scheduling algorithm, and finally the (M,K)-model. Earliest Deadline First scheduling EDF EDF is probably the most famous dynamic priority scheduler. As a consequence of its optimality for preemptive uniprocessor scheduling of independent jobs, the runtime scheduling problem is perfectly solved if we assume there exists no additional constraints on the jobs. EDF is the scheduler of choice since any feasible set of jobs is guaranteed to have a valid EDF schedule [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF]. 
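To make the dispatching rule concrete, the fragment below sketches EDF job selection over a ready queue ordered by absolute deadline. It is only an illustration: the Job class, its fields and the EdfDispatcher name are assumptions made for this example and do not describe ARTE's internal classes.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

/** Minimal sketch of the EDF dispatching rule: at every scheduling point,
 *  the ready job with the earliest absolute deadline is selected.
 *  The Job descriptor below is hypothetical and only serves the illustration. */
public final class EdfDispatcher {

    public static final class Job {
        final String name;
        final double absoluteDeadline;
        double remainingWcet;
        Job(String name, double absoluteDeadline, double remainingWcet) {
            this.name = name;
            this.absoluteDeadline = absoluteDeadline;
            this.remainingWcet = remainingWcet;
        }
    }

    // Ready queue ordered by absolute deadline (earliest first).
    private final PriorityQueue<Job> readyQueue =
            new PriorityQueue<>(Comparator.comparingDouble((Job j) -> j.absoluteDeadline));

    /** Adds a newly released job to the ready queue. */
    public void release(Job job) { readyQueue.add(job); }

    /** Returns the job to run next, or null if the processor should idle. */
    public Job pickNext() { return readyQueue.peek(); }
}
```

ED-H keeps this deadline ordering but additionally decides, at each point, whether the selected job may actually run or whether the processor should idle so that the storage unit can replenish.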
Time-feasibility: A set ψ of n tasks τ_i = {C_i, T_i, E_i} is time feasible if and only if

$\sum_{i=1}^{n} \frac{C_i}{T_i} \le 1$   (1)

However, when energy harvesting requirements are considered, EDF is no longer optimal, and a necessary but not sufficient condition of schedulability is as follows:

Time-feasibility: A set ψ of n tasks is time feasible if and only if

$\sum_{i=1}^{n} \frac{C_i}{T_i} \le 1$   (2)

Energy-feasibility: A set ψ of n tasks is energy feasible if

$\frac{\sum_{i=1}^{n} E_i}{C + E_P(0,t)} \le 1$   (3)

Let E_P(0,t) be the amount of energy produced by the source between 0 and t, and let C be the capacity of the energy storage unit (supercapacitor or battery).

Earliest Deadline First-Energy Harvesting EDH

Dertouzos [START_REF] Dertouzos | Control robotics: The procedural control of physical processes[END_REF] shows that the Earliest Deadline First algorithm (EDF) is optimal. EDF schedules, at each instant of time t, the job ready for execution whose deadline is closest to t. The problem with EDF is that it does not consider future job arrivals and their energy requirements. In [START_REF] Chetto | A note on edf schedulingfor real-time energy harvesting systems[END_REF], the authors prove that EDF is no longer optimal for RTEH systems: jobs are processed as soon as possible, thus consuming the available energy greedily. Although non-competitive, EDF turns out to remain the best non-idling scheduler for uniprocessor RTEH platforms [START_REF] Chetto | A note on edf schedulingfor real-time energy harvesting systems[END_REF]. In [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF], the authors describe the Earliest Deadline-Harvesting (ED-H) scheduling algorithm, which is proved to be optimal for the scheduling problem of energy harvesting real-time systems. ED-H is an extension of the EDF algorithm that adds energy awareness capabilities by using the notions of slack time and slack energy. The idea behind ED-H is to order the jobs according to the EDF rule, since the jobs issued from the periodic tasks have hard deadlines. Executing them in accordance with their relative urgency appears to be the best approach, even if they are not systematically executed as soon as possible because of possible energy shortage. The difference between ED-H and classical EDF lies in deciding when to execute a job and when to let the processor idle. Before authorizing any job to execute, the energy level of the storage must be sufficient so that all future jobs execute timely with no energy starvation, considering their timing and energy requirements and the replenishment rate of the storage unit. According to EDH, a processor P_j with battery B_j that executes the task set ψ_j should satisfy the schedulability tests described as follows:

-Time-feasibility: [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF] ψ_j is time feasible if and only if

$U_{P_j} = \sum_{i=1}^{n} \frac{C_i}{T_i} \le 1$   (4)

-Energy-feasibility: [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF] ψ_j is energy feasible if and only if

$U^e_j \le 1$   (5)

where $U^e_j$ is the energy load of the set of tasks ψ_j assigned to processor P_j:

$U^e_j = \sup_{0 \le t_1 \le t_2 \le H} \frac{E_{c_j}(t_1, t_2)}{C + E_{P_j}(t_1, t_2)}$

Theorem 1 [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF] The set of tasks ψ_j assigned to processor P_j is feasible if and only if $U_{P_j} \le 1$ and $U^e_j \le 1$.
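The utilization tests above translate almost directly into code. The sketch below is illustrative only: the Task class and its field names, the constant harvesting power pr (so that E_P(0,t) = pr·t) and the single-interval energy check are assumptions made for the example; the exact ED-H condition (5) takes a supremum over all sub-intervals and is not reproduced here.

```java
/** Illustrative feasibility checks for a periodic task set under EDF / ED-H.
 *  Assumptions (not from ARTE): Task fields wcet = C_i, period = T_i, wcec = E_i;
 *  constant harvesting power pr, so that Ep(0, t) = pr * t. */
public final class FeasibilityChecks {

    public static final class Task {
        final double wcet, period, wcec;
        Task(double wcet, double period, double wcec) {
            this.wcet = wcet; this.period = period; this.wcec = wcec;
        }
    }

    /** Conditions (1)/(4): processor utilization must not exceed 1. */
    public static boolean timeFeasible(Task[] taskSet) {
        double u = 0.0;
        for (Task t : taskSet) u += t.wcet / t.period;
        return u <= 1.0;
    }

    /** Necessary energy condition in the spirit of (3), checked over one
     *  hyper-period h with storage capacity c: the energy demanded by all jobs
     *  must not exceed the stored plus harvested energy. This is a coarse check;
     *  the exact ED-H test (5) considers every sub-interval. */
    public static boolean energyFeasible(Task[] taskSet, double c, double pr, double h) {
        double demand = 0.0;
        for (Task t : taskSet) demand += Math.floor(h / t.period) * t.wcec;
        return demand <= c + pr * h;
    }
}
```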
Theorem 1 gives a necessary and sufficient condition of schedulability. (M,K)-model For its intuitiveness and capability of capturing not only statistical but also deterministic quality of service QoS requirements, the (m,k)-model has been widely studied, e.g., [START_REF] Bernat | Combining (/sub m//sup n/)-hard deadlines and dual priority scheduling[END_REF], [START_REF] Hamdaoui | A dynamic priority assignment technique for streams with (m, k)-firm deadlines[END_REF], [START_REF] Hua | Energy-efficient dual-voltage soft real-time system with (m, k)-firm deadline guarantee[END_REF], [START_REF] Quan | Enhanced fixed-priority scheduling with (m, k)-firm guarantee[END_REF], and [START_REF] Ramanathan | Overload management in real-time control applications using (m, k)firm guarantee[END_REF]. The (m,k)-model was originally proposed by Hamdaoui et al. [START_REF] Hamdaoui | A dynamic priority assignment technique for streams with (m, k)-firm deadlines[END_REF]. According to this model, a repetitive task of the system is associated with an (m,k) (0<m<k) constraint requiring that m out of any k consecutive job instances of the task meet their deadlines. A dynamic failure occurs, which implies that the temporal quality of service QoS constraint is violated and the scheduler is thus considered failed, if, within any k consecutive jobs, more than (k-m) job instances miss their deadlines. Based on this (m,k)-model, Ramanathan et al. [START_REF] Ramanathan | Overload management in real-time control applications using (m, k)firm guarantee[END_REF] proposed to partition the jobs into mandatory and optional jobs. So long as all of the mandatory jobs can meet their deadlines, the (m,k)-constraints can be ensured. The mandatory jobs are the jobs that must meet their deadlines in order to satisfy the -constraints, while the optional jobs can be executed to further improve the quality of the service or simply be dropped to save computing resources. Quan et al. [START_REF] Quan | Enhanced fixed-priority scheduling with (m, k)-firm guarantee[END_REF] formally proved that the problem of scheduling with the (m,k)-guarantee for an arbitrary value of mand k is NP-hard in the strong sense. They further proposed to improve the mandatory/optional partitioning by reducing the maximal interference between mandatory jobs. Mandatory/Optional Job Partitioning With (M,K)-Pattern The (m,k)-pattern of task τ i , denoted by Π i , is a binary string Π i = {π i0 , π i1 π i(ki-1 )} which satisfies the following: 1) π ij is a mandatory job if π ij = 1 and optional if π ij = 0 and 2) ki-1 j=0 π ij = m i . By repeating the (m,k)-pattern , we get a mandatory job pattern for τ i . It is not difficult to see that the (m,k)-constraint for τ i can be satisfied if the mandatory jobs of τ i are selected accordingly. Evenly Distributed Pattern (Even Pattern) Even Pattern strategy was proposed by Ramanathan et al. [START_REF] Ramanathan | Overload management in real-time control applications using (m, k)firm guarantee[END_REF] as follows: the first release is always mandatory and subsequent distribution of mandatory and optional alternating. Mathematically, Π i j =    1, if j = j × m i k i × k i m i , f orj = 0, 1, .., k i 0, otherwise (6) In [START_REF] Niu | Energy minimization for real-time systems with (m, k)-guarantee[END_REF] a necessary and sufficient feasibility test for (m,k)-constrained tasks executing under EDF scheduling policy is proposed. 
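Before recalling that test, note that the E-pattern of equation (6) is straightforward to compute. The sketch below is only an illustration (the class and method names are not taken from ARTE); it marks job j of an (m,k)-constrained task as mandatory exactly when j = ⌈⌊j·m/k⌋·k/m⌉.

```java
/** Evenly distributed (m,k)-pattern of Ramanathan et al. (equation (6)).
 *  pattern[j] == true means job j is mandatory, false means optional. */
public final class EvenPattern {

    public static boolean[] build(int m, int k) {
        boolean[] pattern = new boolean[k];
        for (int j = 0; j < k; j++) {
            // job j is mandatory iff j = ceil( floor(j*m/k) * k / m )
            int mandatoryIndex =
                    (int) Math.ceil(Math.floor((double) j * m / k) * (double) k / m);
            pattern[j] = (j == mandatoryIndex);
        }
        return pattern;
    }

    public static void main(String[] args) {
        boolean[] p = build(3, 5);
        for (boolean mandatory : p) System.out.print(mandatory ? "1 " : "0 ");
    }
}
```

For a (3,5)-firm task the computed pattern is 1 0 1 0 1, i.e. three mandatory jobs out of any five consecutive ones, with the first release always mandatory. Theorem 2, recalled next, gives the EDF feasibility test for the mandatory jobs selected in this way.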
Theorem 2: Let system T = {τ 0 , τ i , .., τ n-1 }, where τ i ={C i , T i , D i , m i , k i } and Ψ be the mandatory job set according to their E-patterns. Also, let L represent either the ending point of the first busy period when scheduling only the mandatory jobs or the LCM of T(i) , i = 0, ..., (n-1), whichever is smaller. Then, Ψ is schedulable with EDF if and only if (iff) all the mandatory jobs arriving within [0, L] can meet their deadlines, i.e., i W i (0, t) = i ( m i k i × t -D i T i + ) × C i ≤ t (7) 5 New Scheduling Approach for Reconfigurable Energy Harvesting Real-Time Systems Reconfiguration is usually performed in response to both user requirements and dynamic changes in its environment such as unpredictable activation of new tasks, removal of tasks, or increase-decrease of power supply of the system. Some examples of reconfigurable systems are multi-robot systems [START_REF] Chen | Combining re-allocating and rescheduling for dynamic multi-robot task allocation[END_REF] and wireless sensor networks [START_REF] Grichi | Rwin: New methodology for the development of reconfigurable wsn[END_REF]. At run-time, the occurrence of unpredictable task's activation makes the static schedule no longer optimal and may evolve the system towards an unfeasible state due to energy and processing overloads. Thereafter, some existing or added tasks may violate deadlines. The system has to dynamically adjust and adapt the task allocation and scheduling in order to cope with unpredictable reconfiguration scenarios. We identify mainly two function modes: -Normal mode: where all the tasks in the system execute 100% of their instances while meeting all the deadlines -Degradation mode level 1: This is the case where the normal mode is not feasible. The K least important tasks execute in degraded mode according to the model (m,k)-firm and other tasks execute normally. The schedulability test is performed by considering iteratively the tasks according to their importance. -Degradation mode level 2. This is the case where the degradation mode level 1 is not feasible. Abandonable tasks are gradually eliminated by increasing importance. At any instant, external unpredictable reconfigurations may occur to add or remove software tasks or to increase-decrease power supply on the system. The occurrence of such events provoke the execution of schedulability tests to identify in which mode the tasks should be executed. Normal Mode In the normal mode tasks are assumed to be executed under the optimal scheduler for real-time energy harvesting systems EDH algorithm. Then the tasks set should satisfy the following theorem: Theorem 1 [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF] The set of tasks ψ j assigned to processor P j is feasible if and only if U Pj ≤ 1 and U e j ≤ 1. Degradation mode level 1: EDH-MK Algorithm We propose in this work a new real-time scheduler for reconfigurale energy harvesting real-time systems EDH-MK. The proposed algorithm EDH-MK is an extension of the EDH algorithm with (m,k)-firm guarantee. When it is impossible to execute the tasks set in normal mode due to processor and/or energy overloads, we propose to execute the K least important tasks in degraded mode according to the model (m,k)-firm and other tasks execute normally. In this work we propose the necessary and sufficient schedulability conditions for the EDH-MK algorithm. 
Definition 1: The static slack time of a mandatory job set Ψ on the time interval [t 1 , t 2 ) is SST Ψ (t 1 , t 2 ) = t 1 -t 2 - i W (t 1 , t 2 ) ( 8 ) SST Ψ (t 1 , t 2 ) gives the longest time that could be made available within [t 1 , t 2 ) after executing mandatory jobs of Ψ with release time at or after t 1 and deadline at or before t 2 . Definition 2: The total mandatory energy demand within interval [0,t] is g(0, t) = i ( m i k i × t T i ) × En i (9) Proof : As shown [START_REF] Ramanathan | Overload management in real-time control applications using (m, k)firm guarantee[END_REF] that, if the mandatory jobs are determined according to (6), for the first p i jobs of τ i , there are l i (t) = mi ki × p i jobs that are mandatory. Therefore, the total mandatory energy load within interval [0,t] that has to be finished by time t, denoted by g(0,t), can be formulated as follows: g(0, t) = i ( m i k i × t -D i T i + ) × En i (10) g(0, t) = i ( m i k i × t T i ) × En i (11) Let E p (0,t) be the amount of energy that will be produced by the source between 0 and t and C is the energy storage unit (supercapacitor or battery) capacity. Definition 3: The total mandatory static slack energy on the time interval [0, t] is SSE Ψ (0, t) = C + E p (0, t) -g(0, t) (12) SSE Ψ (0,t) gives the largest energy that could be made available within [0,t] after executing mandatory jobs with release time at or after 0 and deadline at or before t. U E Ψ ≤ 1 (15) Proof : As proof of Lemma since SSE Ψ (t 1 , t 2 ) ≤ 0 amounts to U E Ψ (t 1 , t 2 ) ≤ 1. The necessary and sufficient schedulability conditions falls into two constraints which should be respected. -Real-time constraints: For each processor P j the tasks set Ψ j assigned to P j should satisfy their deadlines. From equation ( 7) Time-feasibility: The set Ψ j is time feasible if and only if i W i (0, t) = i ( m i k i × 1 T i ) × C i ≤ 1 (16) -Energy constraints: Each processor P j must not, at any moment, lack energy to execute the tasks set assigned to processor P j . Energy feasibility: P j is energy feasible if and only if U E Ψ ≤ 1 (17) We give a necessary and sufficient condition for EDH-MK schedulability and feasibility. Theorem 3: Let Ψ be the mandatory job set according to their E-patterns. Also, let L represent either the ending point of the first busy period when scheduling only the mandatory jobs or the LCM of T(i ) , i = 0, ..., (n-1), whichever is smaller. Then, Ψ is schedulable with EDH-MK if and only if (iff) all the mandatory jobs arriving within [0, L] can meet their deadlines, i.e., i W (0, t) ≤ 1andSSE Ψ ≤ 0 (18) Proof : "If ": We suppose that constraint ( 17) is satisfied and Ψ is not schedulable by EDH-MK. Let us show a contradiction. First, we assume that Ψ is not schedulable by EDH-MK because of time starvation. of energy starvation. Lemma 2 states that there exists a time interval [t 0 , d 1 ) such that g(t 0 , d 1 ) > C + E p (t 0 , d 1 ) i.e., C + E p (t 0 , d 1 ) -g(t 0 , d 1 ) < 0. Thus, SSE Ψ < 0 and condition 17 in Theorem 2 is violated. "Only if ": Suppose that Ψ is feasible. Thus, Ψ is time-feasible and energy feasible. From constraint (7) in theorem 2 and constraint [START_REF] Mccormick | Building parallel, embedded, and real-time applications with Ada[END_REF] in Lemma 2, it is the case that constraint (17) is satisfied. Degradation mode level 2: Removal Algorithm This is the case where the degradation mode level 1 is not feasible. Abandonable tasks are gradually eliminated by increasing importance. 
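The two conditions of Theorem 3 can be evaluated mechanically before resorting to the level-2 removal strategy detailed next. The routine below is purely illustrative: the Task field names, the constant harvesting power pr (so that E_p(0,t) = pr·t) and the naive scan of integer check points up to the bound L are assumptions made for the example, not ARTE's implementation.

```java
/** Illustrative check of the two EDH-MK conditions of Theorem 3:
 *  mandatory time demand W(0,t) <= t (equation (7)) and
 *  static slack energy SSE(0,t) >= 0 (Definitions 2 and 3).
 *  Assumptions (not from ARTE): the field names below and a constant
 *  harvesting power pr, i.e. Ep(0,t) = pr * t. */
public final class EdhMkFeasibility {

    public static final class Task {
        final double wcet, wcec, period, deadline;
        final int m, k;
        Task(double c, double e, double t, double d, int m, int k) {
            this.wcet = c; this.wcec = e; this.period = t; this.deadline = d;
            this.m = m; this.k = k;
        }
    }

    /** Mandatory processor demand of the E-patterns over [0, t), as in equation (7). */
    static double mandatoryTimeDemand(Task[] set, double t) {
        double w = 0.0;
        for (Task tau : set) {
            if (t < tau.deadline) continue;
            double jobs = Math.floor((t - tau.deadline) / tau.period) + 1.0;
            w += Math.ceil((double) tau.m / tau.k * jobs) * tau.wcet;
        }
        return w;
    }

    /** Mandatory energy demand g(0, t) of Definition 2. */
    static double mandatoryEnergyDemand(Task[] set, double t) {
        double g = 0.0;
        for (Task tau : set)
            g += Math.ceil((double) tau.m / tau.k * Math.floor(t / tau.period)) * tau.wcec;
        return g;
    }

    /** True iff both conditions of Theorem 3 hold at every integer check point up to l. */
    public static boolean feasible(Task[] set, double l, double capacity, double pr) {
        for (double t = 1.0; t <= l; t += 1.0) {
            if (mandatoryTimeDemand(set, t) > t) return false;                        // time starvation
            if (capacity + pr * t - mandatoryEnergyDemand(set, t) < 0.0) return false; // SSE < 0
        }
        return true;
    }
}
```

When this check fails, the system switches to degradation mode level 2, described next.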
We sort all the abandonable tasks in an ascending order of degree of importance such that we can reject those with less importance one by one until establishing the system feasibility. Theorem 4: The set of tasks Ψ j assigned to processor P j is feasible under degradation mode level 2: Removal algorithm if and only if the set of non abandonable tasks set Ψ na j U Ψ na j ≤ 1 and U e Ψ na j ≤ 1. (19) Proof: Directly follows from the proof of the theorem 4 in [18] Reconf-Algorithm To adjust the framework to cope with any unpredictable external event such as new task arrivals, task removal, and increase-decrease power supply, we characterize a reconfiguration as any procedure that permits to reconfigure the system to be feasible, i.e., satisfying its real-time and energy constraints with the consideration of system performance optimization. We propose an approach with two successive adaptation strategies to reconfigure the system at run-time. The two adaptation strategies are performed in a hierarchical order as depicted in Fig. 1. -Degradation mode level 1: EDH-MK Algorithm -Degradation mode level 2: Removal Algorithm Functionalities In this section we explain all functionalities of ARTE in details while showing their specifications and various characteristics regarding the problem of real-time scheduling in reconfigurable energy harvesting systems. Task Set Generator Task Model The current version proposes a task model according to the Liu and Layland task model with energy related parameters. All tasks are considered periodic and independent. Each task τ i is characterized by i) worst case execution time WCET C i , ii) worst case energy consumption WCEC E i , and iii) its period T i . It is considered that tasks have implicit deadlines, i.e., deadlines are equal to periods. In addition, each task is characterized by a degree of importance L i which define the functional and operational importance of the execution of the task vis-a-vis the application. Moreover, tasks are considered to be (m,k)-firm constrained deadlines. Tasks are classified into two categories: the first is firm task set with (1,1)-firm deadline constraints; the other is a set of soft tasks with (m,k)-soft deadline constraints. And a boolean A i to determine if a task is abandonnable or not. The used task sets can be loaded into the simulator either through the GUI by using a file browser or entering the parameters manually, or by using task set generator as depicted in Fig. 2 . For the simulation results to be credible, the used task sets should be randomly generated and varied sufficiently. The current version includes by default a generator based on the UUniFast-Discard algorithm [START_REF] Bini | Measuring the performance of schedulability tests[END_REF] coupled with a hyper-period limitation technique [START_REF] Goossens | Limitation of the hyper-period in real-time periodic task set generation[END_REF] adapted to energy constraints. This algorithm generates task sets by dividing among them the CPU utilization (U = Ci Ti ) and the energy utilization (U e = Ei TiP r where Pr is the recharging function) chosen by the user. The idea behind the algorithm is to distribute the system's utilization on the tasks of the system. When we add the energy cost of tasks to the system, we end up with two parameters to vary and two conditions to satisfy. 
The algorithm in its current version distributes U and U^e uniformly over the tasks, then finds the 2-tuple (C_i, E_i) which satisfies all the conditions, namely the U_i, U^e and energy consumption constraints. The operation is repeated several times until the resulting 2-tuple approaches the imposed conditions. Finally, the algorithm returns a time-feasible and potentially energy-feasible system. The (m,k)-firm parameters are randomly generated in the interval [START_REF] Gharbi | Functional and operational solutions for safety reconfigurable embedded control systems[END_REF][START_REF] Chandarli | YARTISS: A Generic, Modular and Energy-Aware Scheduling Simulator for Real-Time Multiprocessor Systems[END_REF]. We define three levels of importance, and the degree of importance is randomly generated in the interval [1, 3], where 1 is the highest importance level. The parameter A_i is randomly generated in the interval [0, 1].

Reconfiguration Scenarios Generator

In order to represent as closely as possible the real behavior of physical reconfigurable energy harvesting real-time embedded systems, we developed a reconfiguration scenarios generator tool. Through the GUI, the user can either define personalized reconfiguration scenarios by selecting the user personalization option, or use random reconfiguration scenarios produced by the random generator depicted in Fig. 3. The user personalization option offers the user the possibility to generate reconfiguration scenarios that modify the applicative functions, i.e., add or remove software tasks, or increase or decrease the power supply of the system. For the simulation results to be credible, the reconfiguration scenarios used should be randomly generated and sufficiently varied. The current version includes by default a reconfiguration scenarios generator which can produce three kinds of reconfiguration scenarios: i) high dynamic system, ii) medium dynamic system, and iii) low dynamic system. The random reconfiguration scenarios algorithm first calculates the number of jobs Njobs in the system over one hyper-period.
- High dynamic system: the generator randomly adds n tasks, where n is drawn randomly in the interval [3%, 10%] of Njobs.
- Medium dynamic system: the generator randomly adds n tasks, where n is drawn randomly in the interval [1%, 5%] of Njobs.
- Low dynamic system: the generator randomly adds n tasks, where n is drawn randomly in the interval [0%, 1%] of Njobs.

Simulation Tool

The aim of this tool is to simulate the scheduling of a system according to the parameters and assumptions of the user, mainly the task set and the scheduling policy. The purpose of ARTE is not restricted to checking the feasibility of a given system: it also simulates and analyzes the performance of the scheduling policies when unpredictable reconfiguration scenarios occur in the system. Through the main interface depicted in Fig. 4, the user can use task sets loaded into the simulator either through the GUI, by using a file browser or entering the parameters manually, or by using the task set generator. When the user creates a system, the hyper-period is calculated automatically and displayed in the GUI. For a simulation, the user chooses the scheduling policy as well as the time interval of the simulation.
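As an illustration of the generation step described above, the following sketch shows one common way of splitting a target CPU utilization U and energy utilization U^e over n periodic tasks in the spirit of UUniFast-Discard. It is not ARTE's actual generator: the period choice, the recharging power Pr treated as a constant, and the omission of the hyper-period limitation step are simplifications assumed only for this example.

import random

def uunifast(n, total_u):
    # Classic UUniFast: draw n non-negative utilizations summing to total_u.
    utils, remaining = [], total_u
    for i in range(1, n):
        nxt = remaining * random.random() ** (1.0 / (n - i))
        utils.append(remaining - nxt)
        remaining = nxt
    utils.append(remaining)
    return utils

def generate_task_set(n, U, Ue, Pr, periods):
    # "Discard" step: redraw whenever a single task exceeds utilization 1.
    while True:
        u, ue = uunifast(n, U), uunifast(n, Ue)
        if max(u) <= 1.0 and max(ue) <= 1.0:
            break
    tasks = []
    for ui, uei, Ti in zip(u, ue, periods):
        Ci = ui * Ti        # from U   = sum_i C_i / T_i
        Ei = uei * Ti * Pr  # from U^e = sum_i E_i / (T_i * Pr)
        tasks.append({"C": Ci, "E": Ei, "T": Ti})
    return tasks

# Example: 10 tasks, U = U^e = 0.8, Pr = 10, periods drawn from harmonic
# values so that the hyper-period stays small.
periods = [random.choice([5, 10, 20, 40]) for _ in range(10)]
task_set = generate_task_set(10, 0.8, 0.8, 10, periods)

The redraw loop plays the role of the repetition mentioned above ("the operation is repeated several times"); the (m,k), importance and A_i attributes would then be drawn independently as described in the previous paragraph.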
Two kinds of analyses can be performed: scheduling simulation and feasibility testing. The user can also apply a set of reconfiguration scenarios to the system.

Case Study

This section presents a case study through which we show the different features and functionalities implemented in ARTE, and explore the performance of the proposed EDH-MK algorithm and Reconf-Algorithm in keeping executions feasible, with graceful QoS, after any external reconfiguration scenario that may drive the system towards an unfeasible state. To this end we create a new system in which we randomly generate a task set with the parameters given in Table 1. Initially, we verified the feasibility of the task set using the simulation tool (Fig. 5). Then, we chose to generate a random high dynamic reconfiguration scenario using the reconfiguration scenario tool (Fig. 6). Thereafter, the system evolves towards an unfeasible state (Fig. 7). In order to analyze the performance of the EDH scheduler, we ran a simulation over 100 time units: the EDH scheduler produces 114 deadline misses (Fig. 8). In order to analyze the performance of the EDH-MK scheduler, we ran a simulation over the same 100 time units: the EDH-MK scheduler produces 16 deadline misses (Fig. 9).

Future Works

The current release offers many important features, the main purpose being to provide a simulator tool with the flexibility and performance needed to deal with unpredictable reconfiguration scenarios. It was, however, put together quickly, and improvements are planned to address all the features targeted by the proposed simulator tool. The authors are now working on:
- the development of an extension of the graphical user interface, to make the simulator easier to use for a large number of users and to display the simulation results in an interactive and intuitive way; three different views are envisaged (a time chart, a processor view and an energy curve), as well as a comparison view that lets the user compare the simulations of the selected scheduling policies,
- implementing other task models in the simulator,
- implementing other scheduling approaches in the simulator, such as fixed-priority approaches,
- implementing multiprocessor platforms, and developing new scheduling techniques based on the migration of tasks between the different processors,
- finally, we plan the use of a distributed system decentralizing the control, and more precisely the use of a multi-agent system (MAS). Through the use of MAS, we aim to represent as closely as possible the real behavior of physical networked reconfigurable energy harvesting real-time embedded systems within the developed simulator. Motivated by these considerations, we choose to deploy intelligent agents to simulate the dynamic behavior of networked reconfigurable energy harvesting real-time embedded systems.

Conclusion

This report presents ARTE, a real-time scheduling simulator for reconfigurable energy harvesting real-time embedded systems. Presently, the ARTE simulator is able to simulate accurately the execution of task sets of a reconfigurable energy harvesting system. We briefly presented existing simulation tools; however, none of the aforementioned efforts responds to the new criteria of flexibility, agility and high performance. Thus, there is a need to develop a new simulation tool that offers the ability to deal with the dynamic behavior of reconfigurable systems.
We have detailed the different features provided by ARTE: 1) scheduling simulation, 2) feasibility tests, and 3) measurement of the percentage of missed deadlines. We have described the three main tools of ARTE: 1) the random generator of task sets, 2) the random generator of reconfiguration scenario sets, and 3) the simulator tool. Finally, we presented some of the features we plan to implement.

Definition 4: Let d be the deadline of the active job at the current time t_c. The preemption slack energy of a mandatory job set Ψ at t_c is

PSE_Ψ(t_c) = min_{t_c ≤ r_i ≤ d_i ≤ d} SE_{τ_i}(t_c)     (13)

Lemma 1: If d_1 is missed in the EDH-MK schedule because of energy starvation, there exists a time instant t such that g(t, d_1) > C + E_p(t, d_1), and no schedule exists where d_1 and all earlier deadlines are met.

Proof: Recall that we have to consider the energy starvation case where d_1 is missed with E(d_1) = 0. Let t_0 be the latest time before d_1 such that a mandatory job with deadline after d_1 releases, no other mandatory job is ready just before t_0, and the energy storage unit is fully charged, i.e. E(t_0) = C. The initialization time can be such a time. The processor is idle within [t_0 - 1, t_0) since no mandatory jobs are ready. As no energy is wasted except when there are no ready jobs, the processor is busy at least from time t_0 to t_0 + 1. We consider two cases.

Case 1: No mandatory job with deadline after d_1 executes within [t_0, d_1). Consequently, all the mandatory jobs that execute within [t_0, d_1) have release time at or after t_0 and deadline at or before d_1. The amount of energy required by these mandatory jobs is g(t_0, d_1). As Ψ is feasible, g(t_0, d_1) is no more than the maximum storable energy plus all the incoming energy, i.e., C + E_p(t_0, d_1). As E(t_0) = C, we conclude that all mandatory jobs ready within [t_0, d_1) can be executed with no energy starvation, which contradicts the deadline violation at d_1 with E(d_1) = 0.

Case 2: At least one mandatory job with deadline after d_1 executes within [t_0, d_1). Let t_2 be the latest time where a mandatory job, say τ_2, with deadline after d_1 is executed. As d_1 is lower than d_2 and mandatory jobs are executed according to the earliest deadline rule in EDH-MK, we have r_2 < r_1. At time t_2, one of the following situations occurs.

Case 2a: The processor is busy at all times in [t_0, d_1). τ_2 is preempted by a higher priority job, say τ_3, with d_3 ≤ d_1. From rule 4.2 in [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF], PSE_Ψ(r_3) > 0, which implies that SE_{τ_1}(r_3) > 0 and in consequence g(r_3, d_1) < E(r_3) + E_p(r_3, d_1). All mandatory jobs that are executed within [r_3, d_1) have release time at or after r_3 and deadline at or before d_1. Consequently, the amount of energy they require is at most g(r_3, d_1). That contradicts the deadline violation and E(d_1) = 0.

Case 2b: The processor is idle in [t_3 - 1, t_3) with t_3 > t_2 and busy at all times in [t_3, d_1). The processor stops idling at time t_3, imperatively by rule 4.1 [START_REF] Chetto | Optimal scheduling for real-time jobs in energy harvesting computing systems[END_REF], if E(t_3) = C. By hypothesis, there is no mandatory job waiting with deadline at or before d_1 at t_3, because t_0 is the latest one. Furthermore, no mandatory job with deadline after d_1 is executed after t_2, and consequently after t_3. In order not to waste energy, all the energy which arrives from the source is used to advance mandatory jobs with deadline after d_1. The processor continuously commutes from the active state to the inactive state. The storage is maintained at the maximum level until τ_1 releases. Consequently, we have E(r_1) = C. As τ_1 is feasible, g(r_1, d_1) ≤ C + E_p(r_1, d_1). Thus, E(r_1) + E_p(r_1, d_1) ≥ g(r_1, d_1). That contradicts the deadline violation and E(d_1) = 0.

Lemma 2: The set Ψ is energy-feasible if and only if, for every time interval [t_1, t_2), g(t_1, t_2) ≤ C + E_p(t_1, t_2).

Lemma 3: Ψ is energy-feasible if and only if

SSE_Ψ ≥ 0     (14)

Proof: "If": Directly follows from Lemma 2. "Only If": Since Ψ is energy-feasible, let us consider an energy-valid schedule produced within [0, d_Max). The amount of energy demanded in each time interval [t_1, t_2), g(t_1, t_2), is necessarily less than or equal to the actual energy available in [t_1, t_2), given by E(t_1) + E_p(t_1, t_2). An upper bound on E(t_1) is the maximum storable energy at time t_1, that is C. Consequently, g(t_1, t_2) is lower than or equal to C + E_p(t_1, t_2). This leads to: for all (t_1, t_2) ∈ [0, d_Max), g(t_1, t_2) ≤ C + E_p(t_1, t_2), i.e. SSE(t_1, t_2) ≥ 0. Thus, SSE_Ψ ≥ 0.

Table 1. Initial System Configuration.
File Name: systemfile.txt
Power Rate: 10
Processor Utilization: 0.8
Energy Utilization: 0.8
Number of tasks: 10
Emax: 200
Emin: 2
Battery Capacity: 200
Proc number: 1

Fig. 1. The reconfiguration scenarios generator tool.
Fig. 2. The task set generator tool.
Fig. 3. The reconfiguration scenarios generator tool.
Fig. 4. The simulation tool.
Fig. 5. Random system generation.
Fig. 6. Random reconfiguration scenario generation.
Fig. 7. Test system feasibility.
Fig. 8. EDH simulation.
Fig. 9. EDH-MK simulation.
46,620
[ "883818" ]
[ "38167", "473973", "473973" ]
01756728
en
[ "math" ]
2024/03/05 22:32:10
2019
https://univ-tln.hal.science/hal-01756728/file/Chang-Jin-Novotny-final.pdf
T Chang B J Jin A Novotný Compressible Navier-Stokes system with general inflow-outflow boundary data Keywords: Compressible Navier-Stokes system, inhomogeneous boundary conditions, weak solutions, renormalized continuity equation, large inflow, large outflow Introduction We consider the problem of identifying the non steady motion of a compressible viscous fluid driven by general in/out flux boundary conditions on general bounded domains. Specifically, the mass density ϱ = ϱ(t, x) and the velocity u = u(t, x), (t, x) ∈ I × Ω ≡ Q_T, I = (0, T), of the fluid satisfy the Navier-Stokes system, ∂_t ϱ + div_x(ϱu) = 0, (1.1) ∂_t(ϱu) + div_x(ϱu ⊗ u) + ∇_x p(ϱ) = div_x S(∇_x u), (1.2) S(∇_x u) = µ(∇_x u + ∇_x^t u) + λ div_x u I, µ > 0, λ ≥ 0, (1.3) in Ω ⊂ R^d, d = 2, 3, where p = p(ϱ) is the barotropic pressure. The system is endowed with initial conditions ϱ(0) = ϱ_0, (ϱu)(0) = ϱ_0 u_0. (1.4) We consider general boundary conditions, u|_∂Ω = u_B, ϱ|_Γin = ϱ_B, (1.5) where Γ_in = {x ∈ ∂Ω | u_B • n < 0}, Γ_out = {x ∈ ∂Ω | u_B • n > 0}. (1.6) We concentrate on the inflow/outflow phenomena, we have therefore deliberately omitted the contribution of external forces f. Nevertheless, all results of this paper remain valid also in the presence of external forces. Investigation and better insight to the equations in this setting is important for many real world applications. In fact this is a natural and basic abstract setting for flows in pipelines, wind tunnels, turbines to name a few concrete examples. In spite of this fact the problem in its full generality resists to all attempts of its solution for decades. To the best of our knowledge, this is the first work ever treating this system for large boundary data in a very large class of bounded domains. Indeed, the only available results on the existence of strong solutions in setting (1.1-1.6) are on a short time interval or deal with small boundary data perturbations of an equilibrium state, see e.g. Valli, Zajaczkowski [START_REF] Valli | Navier-Stokes equations for compressible fluids: Global existence and qualitative properties of the solutions in the general case[END_REF]. The only results on the existence of weak solutions for large flows for system (1.1-1.6) with large boundary data are available in papers by Novo [START_REF] Novo | Compressible Navier-Stokes model with inflow-outflow boundary conditions[END_REF] (where the domain is a ball and the incoming/outgoing velocity field is constant) or by Girinon [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF], where the domain is more general but the inflow boundary has to be convex set included in a cone, and the velocity at the inflow boundary has to satisfy so called no reflux condition. In the steady case, the problem with large boundary conditions is open for barotropic flows. It was solved only recently for the constitutive law of pressure of so called hard sphere model (when lim_{ϱ→ϱ*} p(ϱ) = ∞ for some ϱ* > 0), see [START_REF] Feireisl | Stationary solutions to the compressible Navier-Stokes system with general boundary conditions Preprint Nečas Center for Mathematical Modeling[END_REF].
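For readability, the initial-boundary value problem (1.1)-(1.6) can be gathered in display form; this is only a typeset restatement of the equations above (with ϱ denoting the density), and introduces no new assumptions:

\begin{aligned}
&\partial_t \varrho + \operatorname{div}_x(\varrho u) = 0, \\
&\partial_t(\varrho u) + \operatorname{div}_x(\varrho u \otimes u) + \nabla_x p(\varrho) = \operatorname{div}_x \mathbb{S}(\nabla_x u), \\
&\mathbb{S}(\nabla_x u) = \mu\big(\nabla_x u + \nabla_x^t u\big) + \lambda\, \operatorname{div}_x u\, \mathbb{I}, \qquad \mu > 0,\ \lambda \ge 0, \\
&\varrho(0,\cdot) = \varrho_0, \qquad (\varrho u)(0,\cdot) = \varrho_0 u_0, \\
&u|_{\partial\Omega} = u_B, \qquad \varrho|_{\Gamma_{\mathrm{in}}} = \varrho_B, \\
&\Gamma_{\mathrm{in}} = \{x \in \partial\Omega \mid u_B \cdot n < 0\}, \qquad \Gamma_{\mathrm{out}} = \{x \in \partial\Omega \mid u_B \cdot n > 0\}.
\end{aligned}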
There are several results dealing with data close to equilibrium flows, see Plotnikov, Ruban, Sokolowski [START_REF] Plotnikov | Inhomogeneous boundary value problems for compressible Navier-Stokes equations: well-posedness and sensitivity analysis[END_REF], [START_REF] Plotnikov | Inhomogeneous boundary value problems for compressible Navier-Stokes and transport equations[END_REF], Mucha, Piasecki [START_REF] Mucha | Compressible perturbation of Poiseuille type flow[END_REF], Piasecki [START_REF] Piasecki | On an inhomogeneous slip-inflow boundary value problem for a steady flow of a viscous compressible fluid in a cylindrical domain[END_REF], Piasecki and Pokorny [START_REF] Piasecki | Strong solutions to the Navier-Stokes-Fourier system with slip-inflow boundary conditions[END_REF] among others. Our goal is to establish the existence of a weak solution ( , u) to problem (1.1-1.6) for general large boundary data B , u B in an arbitrary bounded sufficiently smooth domain with no geometric restrictions on the inflow boundary. Such general result requires a completely different approach to the construction of solutions than the approach employed by Novo or Girinon. We suggest a new (spatially local) method of construction of solutions via regularization of the continuity equation by a specific non homogenous parabolic boundary value problem, instead of using transport equation based approximation as in [START_REF] Novo | Compressible Navier-Stokes model with inflow-outflow boundary conditions[END_REF] or [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF]. This approach allows to remove the restrictions imposed on the domain and data. Another novelty with respect to the two above mentioned papers is the fact that we include in our investigation the pressure laws that may be non monotone on a compact portion of interval [0, ∞), in the spirit of [START_REF] Feireisl | Compressible Navier-Stokes equations with a non-monotone pressure law[END_REF]. (It is to be noticed that a method allowing non monotone pressure laws on a non compact portion of [0, ∞) was recently suggested in [START_REF] Bresch | Global existence of weak solutions for compresssible Navier-Stokes equations: Thermodynamically unstable pressure and anisotropic viscous stress tensor[END_REF], but this method does not work if growth of p (expressed through coefficient γ) is less than 9/5 and it is not clear whether it would work with the non homogenous boundary conditions.) The paper is organized as follows. In Section 2 we define weak solutions to the problem and state the main theorem (Theorem 2.4). In Section 4 the approximated problem (including two small parameters ε > 0 and δ > 0) is specified and its solvability is proved. Limit ε → 0 is performed in Section 5 and limit δ → 0 in Section 6. At each stage of the convergence proof from the approximate system to the original system (ε → 0 and δ → 0, respectively) our approach follows closely the Lions approach [START_REF] Lions | Mathematical topics in fluid dynamics[END_REF] (for ε → 0) and Feireisl's approach [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF] (for δ → 0). This includes the main tools as effective viscous flux identity, oscillations defect measure and renormalization techniques for the continuity equation. 
The first two tools are local, and remain essentially unchanged (with respect to their use in the case of homogenous Dirichlet boundary conditions), while the third tool -the renormalization technique for the continuity equation introduced in Di Perna-Lions [START_REF] Diperna | Ordinary differential equations, transport theory and Sobolev spaces[END_REF] (in the case of squared integrable densities) and in Feireisl [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF] (in the case of non squared integrable densities) -has to be essentially modified in order to be able to accommodate general non homogenous boundary data. This topic is investigated in Section 3 (and applied in Sections 5.4 (for the limit ε → 0)) and 6.5 (for the limit δ → 0)). Besides the original approximation presented in Section 4 (that allows to treat the outflow/inflow problem in full generality, without geometric restrictions on the form and position of the inflow/outflow boundaries, in contrast with all previous treatments of this problem) the content of Sections 3, 5.4, 6.5 represents the main novelty of this paper. The results on the renormalized continuity equation formulated in Lemmas 3.1, 3.2 are of independent interest within the context of the theory of compressible fluids. Main result In order to avoid additional technicalities, we suppose that the boundary data satisfy u B ∈ C 2 (∂Ω; R d ), B ∈ C(∂Ω). (2.1) In agreement with the standard existence theory in the absence of inflow/outflow, we assume for pressure p = p -p, p ∈ C[0, ∞) ∩ C 1 (0, ∞), p(0) = 0, (2.2) ∀ > 0, p ( ) > max{0, a 1 γ-1 -b}, p( ) ≤ a 2 γ + b, p ∈ C 2 c [0, ∞), p ≥ 0, p (0) = 0, where γ > 1 and a 1 , a 2 , b > 0. We allow, in general, a non-monotone pressure p. If p = 0 then the pressure is monotone p = p and conditions (2.2) includes the isentropic pressure law p( ) = a γ , a > 0, γ > 1 which can be taken as a particular case. In general the splitting (2.2) to strictly increasing and bounded negative compactly supported functions complies with pressure laws that obey assumptions a 1 γ-1 -b ≤ p ( ), p( ) < b + a 2 γ , p(0) = 0, where p( ) ≥ 0 in a (small) right neighborhood of 0. We notice that the very latter condition (or p (0) = 0) is not needed in the homogenous case, cf. Feireisl [START_REF] Feireisl | Compressible Navier-Stokes equations with a non-monotone pressure law[END_REF], [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF]; it is specific for the non-homogenous problem. It enters into the game only in order to treat signs of the boundary terms arising from the non-zero boundary conditions in the a priory estimates (see Section 4.3.3 for more details). For further convenience, it will be useful to introduce Helmholtz functions: H( ) = 0 p(z) z 2 dz, H( ) = 0 p(z) z 2 dz, H( ) = - 0 p(z) z 2 dz (2.3) and relative energy functions E( |r) = H( ) -H (r)( -r) -H(r), E( |r) = H( ) -H (r)( -r) -H(r), (2.4) E( |r) = H( ) -H (r)( -r) -H(r). We begin with the definition of weak solutions to system (1.1-1.6). Definition 2.1 [Weak solutions to system (1.1-1.6)] We say that ( , u) is a bounded energy weak solution of problem (1.1-1.6) if: 1. It belongs to functional spaces: 1 and the integral identity ∈ L ∞ (0, T ; L γ (Ω)), 0 ≤ a.a. in (0, T ) × Ω, u ∈ L 2 (0, T ; W 1,2 (Ω; R d )), u| I×∂Ω = u B ; (2.5) 2. 
Function ∈ C weak ([0, T ], L γ (Ω)) Ω (τ, •)ϕ(τ, •) dx- Ω 0 (•)ϕ(0, •) dx = τ 0 Ω ∂ t ϕ+ u•∇ x ϕ dxdt- τ 0 Γ in B u B •nϕ dS x dt (2.6) holds for any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )). 3. Function u ∈ C weak ([0, T ], L 2γ γ+1 (Ω; R d )) , and the integral identity Ω u(τ, •) • ϕ(τ, •) dx - Ω 0 u 0 (•)ϕ(0, •) dx (2.7) = τ 0 Ω u • ∂ t ϕ + u ⊗ u : ∇ x ϕ + p( )div x ϕ -S(∇ x u) : ∇ x ϕ dxdt holds for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × Ω; R d ). 4. There exists a Lipschitz extension u ∞ ∈ W 1,∞ (Ω; R d ) of u B whose divergence is non negative in a certain interior neighborhood of ∂Ω, i.e. divu ∞ ≥ 0 a.e. in Û - h ≡ {x ∈ Ω | dist(x, ∂Ω) < h}, h > 0 (2.8) such that the energy inequality Ω 1 2 |u -u ∞ | 2 + H( ) (τ ) dx + τ 0 Ω S(∇ x (u -u ∞ )) : ∇ x (u -u ∞ ) dxdt (2.9) ≤ Ω 1 2 0 |u 0 -u ∞ | 2 + H( 0 ) dx - τ 0 Ω p( )divu ∞ dxdt - τ 0 Ω u • ∇ x u ∞ • (u -u ∞ ) dxdt - τ 0 Ω S(∇ x u ∞ ) : ∇ x (u -u ∞ ) dxdt - τ 0 Γ in H( B )u B • ndS x dt -H τ 0 Γout u B • ndS x dt holds. In inequality (2.9), H = inf >0 H( ) > -∞. (2.10) Remark 2.1. 1. An extension u ∞ of u B verifying (2.8) always exists, due to the following lemma (see [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF]Lemma 3.3]). Lemma 2.2. Let V ∈ W 1,∞ (∂Ω; R d ) be a Lipschitz vector field on the boundary ∂Ω of a bounded Lipschitz domain Ω. Then there is h > 0 and a vector field V ∞ ∈ W 1,∞ (R 3 ) ∩ C c (R d ), divV ∞ ≥ 0 a.e. in Ûh (2.11) verifying V ∞ | ∂Ω = V, where Ûh = {x ∈ R 3 | dist(x, ∂Ω) < h}. 2. A brief inspection of formula (2.10) gives the estimate of value H, H ≥ -sup ∈(0,1) p( ) -sup >1 p( ) > -∞ provided suppp ⊂ [0, r], where r > 1 without loss of generality. Equation (2.6) implies the total mass inequality Ω (τ ) dx ≤ Ω 0 dx - τ 0 Γ in B u B • ndS x dt (2.12) for all τ ∈ [0, T ]. To see it, it is enough to take for test functions a convenient sequence ϕ = ϕ δ , δ > 0 (e.g. the same as suggested in (5.27)) and let δ → 0. Definition 2.2 We say that the couple ( , u) ∈ L p (Q T ) × L 2 (0, T ; W 1,2 (Ω, R d )), p > 1 is a renormalized solution of the continuity equation if b( ) ∈ C weak ([0, T ]; L 1 (Ω) ) and if it satisfies in addition to the continuity equation (2.6) also equation 1] (see also monographs [START_REF] Lions | Mathematical topics in fluid dynamics[END_REF], [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF], [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]) covers the case u| (0,T )×∂Ω = 0 (which is covered by Theorem 2.4 as well) and the case u • n| (0,T )×∂Ω = 0 completed with the Navier conditions-eventually with friction-(which is not covered by Theorem 2.4). Ω (b( )ϕ)(τ ) dx - Ω b( 0 )ϕ(0) dx = (2.13) τ 0 Ω b( )u • ∇ x ϕ -ϕ (b ( ) -b( )) div x u dxdt - τ 0 Γ in b( B )u B • nϕ dS x dt for any ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )), ∈ C[0, ∞) ∩ C 1 (0, ∞), zb -b ∈ C[0, ∞), |b(z)| ≤ c(1 + z 5p/6 ), |zb (z) -b(z)| ≤ c(1 + z p/2 Ω 1 2 0 u 2 0 + H( 0 ) dx < ∞, 0 ≤ 0 , Ω 0 dx > 0. ( 2 2. Theorem 2.4 still holds provided one considers in the momentum equation at its right hand side term f corresponding to large external forces, provided f ∈ L ∞ (Q T ) (modulo necessary changes in the weak formulation in order to accommodate the presence of this term). 3. 
Conditions on the regularity p, p, B and u B in Theorem 2.4 could be slightly weakened, up to p continuous on [0, ∞), locally Lipschitz on [0, ∞) instead of p ∈ C[0, ∞) ∩ C 1 (0, ∞), p Lipschitz differentiable with compact support on [0, ∞) instead of p ∈ C 2 c [0, ∞), B ∈ L ∞ (∂Ω), u B ∈ W 1,∞ (∂Ω) , at expense of some additional technical difficulties. We shall perform the proof in all details in the case d = 3 assuming tacitly that both Γ in and Γ out have non zero (d -1)-Hausdorff measure. Other cases, namely the case d = 2 is left to the reader as an exercise. 3 Renormalized continuity equation with non homogenous data The case of squared integrable density In this section we generalize the Di-Perna, Lions transport theory [START_REF] Diperna | Ordinary differential equations, transport theory and Sobolev spaces[END_REF] to the continuity equation with non homogenous boundary data. The main result reads: Lemma 3.1. Suppose that Ω ⊂ R d , d = 2, U + h (Γ in ) ≡ {x 0 + zn(x 0 ) | 0 < z < h, x 0 ∈ Γ in } ∩ (R d \ Ω) (3.1) and extend the vector field u B to U + h (Γ in ), ũB (x) = u B (x 0 ), x = x 0 + zn(x 0 ) ∈ U + h (Γ in ). (3.2) If Γ in ∈ C 2 , such extension always exists and ũB ∈ C 1 (U + h (Γ in ), cf. Foote [START_REF] Foote | Regularity of the distance function[END_REF]. Consider now the flow generated in U + h (Γ in ) by the field -ũ B defined on U + h (Γ in ), X (s, x 0 ) = -ũ B (X(s, x 0 )), X(0) = x 0 ∈ U + h (Γ in ) ∪ Γ in , for s > 0, X(s; x 0 ) ∈ U + h (Γ in ). (3.3) Let Ũ+ h (Γ in ) = x ∈ U + h (Γ in ) x = X(s, x 0 ) for a certain x 0 ∈ Γ in and 0 < s < h . Employing the local Cauchy-Lipschitz theory for ODEs to equation (3.3) and evoking the differentiability properties of its solutions with respect to the "initial" data (see e.g. the book of Taylor [START_REF] Taylor | Partial Differential Equations (Basic theory)[END_REF]Chapter 1] or of Benzoni-Gavage [START_REF] Benzoni-Gavage | Calcul différentiel et équations différentielles[END_REF]), we infer that: 1. For any x 0 ∈ U + h (Γ in ), there is unique T (x 0 ) > 0 and T (x 0 ) > 0 such that the map X ∈ C 1 ((-T (x 0 ), T (x 0 )); U + h (Γ in )) is a maximal solution of problem (3.3). If x 0 ∈ Γ in , then there is unique T (x 0 ) > 0 such that the map X ∈ C 1 ([0, T (x 0 )); U + h (Γ in ) ∪ Γ in ) is a maximal solution. 2. For any compact K ⊂ U + h (Γ in ) ∪ Γ in , T K ≡ inf x 0 ∈K T (x 0 ) > 0 and for any compact K ⊂ U + h (Γ in ), T K ≡ inf x 0 ∈K T (x 0 ) > 0. 3. For any z ∈ Ũ+ h (Γ in ) there is an open ball B(z) centered at z and δ z > 0 such that X ∈ C 1 ([-δ z , δ z ]× B(z)). In particular, item 1. in the above list implies that the set Ũ+ h (Γ in ) is not empty. With points 2. and 3. at hand, we are ready to show that Ũ+ h (Γ in ) is an open set. Indeed, let z 0 = X(s 0 ; x 0 ), s 0 ∈ (0, T ), T = min{h, T (x 0 )} with x 0 = γ(0), where γ : B (0) → R d , γ(σ) = x 0 + O(σ, a(σ)) T with a ∈ C 2 (B (0); R + ) representing the local description of Γ in in the vicinity of x 0 . In the above O is a fixed orthonormal matrix, B (0) is a d -1 dimensional ball centered at 0, and we may suppose, without loss of generality, that ∇ σ a = 0 in B (0). We may now consider the map Φ : (-T K , T K ) × B (0) (s, σ) → z ∈ Φ (-T K , T K ) × B (0) ⊂ Ũ+ h (Γ in ), z = Φ(s, σ) = X(s; X(s 0 ; γ(σ))), K = {X(s 0 , γ(σ) | σ ∈ B (0)}. We have clearly Φ(0, 0) = z 0 . 
It is a cumbersome calculation to show that det ∂ s Φ, ∇ σ Φ (0, 0) = 0, see [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF]Section 3.3.6]. We may therefore apply to this map the implicit function theorem and conclude that there is an open set (0, 0) ∈ U ⊂ (-T K , T K ) × B (0), open set z 0 ∈ V ⊂ R d , and a map Ψ ∈ C 1 (V ; U ) such that Φ • Ψ(z) = z for any z ∈ V . In particular, V ⊂ Φ (-T K , T K ) × B (0) . We may therefore extend the boundary data B to Ũ+ h (Γ in ) by setting ˜ B (X(s, x 0 )) = B (x 0 )exp s 0 divũ B (X(z, x 0 ))dz . (3.4) Clearly, ˜ B ∈ W 1,∞ ( Ũ+ h (Γ in )) and div x (˜ B ũB ) = 0 in Ũ+ h (Γ in ), ˜ B | Γ in = B . (3.5) Now we put Ωh = Ω ∪ Γ in ∪ Ũ+ h (Γ in ) and extend ( , u) from (0, T ) × Ω to (0, T ) × Ωh by setting ( , u)(t, x) = (˜ B , ũB )(x), (t, x) ∈ (0, T ) × Ũ+ h (Γ in ). Conserving notation ( , u) for the extended fields, we easily verify that ( , u) ∈ L 2 ((0, T ) × Ωh ) × L 2 (0, T ; W 1,2 ( Ωh ; R d )), and that it satisfies the equation of continuity (1.1), in particular, in the sense of distributions on (0, T ) × Ωh . Next, we use the regularization procedure due to DiPerna and Lions [START_REF] Diperna | Ordinary differential equations, transport theory and Sobolev spaces[END_REF] applying convolution with a family of regularizing kernels obtaining for the regularized function [ ] ε , ∂ t [ ] ε + div x ([ ] ε u) = R ε a.e. in (0, T ) × Ωε,h , (3.6) where Ωε,h = x ∈ Ωh dist(x, ∂ Ωh ) > ε , R ε ≡ div x ([ ] ε u) -div x ([ u] ε ) → 0 in L 1 loc ((0, T ) × Ωh ) as ε → 0. The convergence of R ε evoked above results from the application of the refined version of the Friedrichs lemma on commutators, see e.g. [START_REF] Diperna | Ordinary differential equations, transport theory and Sobolev spaces[END_REF] ∂ t b([ ] ε ) + div x (b([ ] ε )u) + (b ([ ] ε )[ ] ε -b([ ] ε )) div x u = b ([ ] ε )R ε or equivalently, Ωh b([ ] ε (τ ))ϕ(τ )dx - Ωh b([ 0 ] ε )ϕ(0)dx = τ 0 Ωh b([ ] ε )∂ t ϕ + b([ ] ε )u • ∇ x ϕ -ϕ (b ([ ] ε )[ ] ε -b([ ] ε )) div x u dxdt - τ 0 Ωh ϕb ([ ] ε )R ε dx dt for all τ ∈ [0, T ], for any ϕ ∈ C 1 c ([0, T ] × Ωh ), 0 < ε < dist(supp(ϕ), ∂ Ωh ). Thus, letting ε → 0 we get Ωh b( (τ ))ϕ(τ )dx - Ωh b( 0 )ϕ(0)dx (3.7) = τ 0 Ωh b( )∂ t ϕ + b( )u • ∇ x ϕ -ϕ (b ( ) -b( )) div x u dxdt for all τ ∈ [0, T ], for any ϕ ∈ C 1 c ([0, T ] × Ωh ). Now we write Ωh b( )u • ∇ x ϕdx = Ω b( )u • ∇ x ϕ dx + Ũ+ h (Γ in ) b( )u • ∇ x ϕdx, (3.8) where, due to (3.5), the second integral is equal to Γ in ϕb( B )u B • ndS x + Ũ+ h (Γ in ) ϕ(˜ B b (˜ B ) -b(˜ B ))divũ B dx (3.9) We notice that ϕ is vanishing on a neighborhood of ∂Ω \ Γ in and that Γ in is Lipschitz. This justifies the latter integration by parts although Ũ+ h (Γ in ) may fail to be Lipschitz. Now, we insert the identities (3.8-3.9) into (3.7) and let h → 0. Recalling regularity of (˜ B , ũB ) evoked in (3.5) and summability of ( , u), we deduce finally that Ω b( (τ ))ϕ(τ )dx - Ω b( 0 )ϕ(0)dx = τ 0 Ω b( )∂ t ϕ + b( )u • ∇ x ϕ -ϕ (b ( ) -b( )) div x u dxdt + τ 0 ∂Ω b( B )u B • nϕdS x dt for all τ ∈ [0, T ], for any ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )) . This finishes proof of Lemma 3.1. 
The case of bounded oscillations defect measure If the density is not square integrable, but if it is a weak limit of a sequence whose oscillations defect measure is bounded, one replaces in the case of theory with no inflow/outflow boundary data Lemma 3.1 by another result, see [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF]. The goal of this Section is to generalize this result to continuity equation with non homogenous boundary data. We introduce the L p -oscillations defect measure of the sequence δ which admits a weak limit in L 1 (Q T ) as follows osc p [ δ ](Q T ) ≡ sup k≥1 lim sup δ→0 Q T T k ( δ ) -T k ( ) p dxdt , (3.10) where truncation T k ( ) of is defined as follows T k (z) = kT (z/k), T ∈ C 1 [0, ∞), T (z) =            z if z ∈ [0, 1], concave on [0, ∞), 2 if z ≥ 3, (3.11) where k > 1. The wanted result is the following lemma. Lemma 3.2. Suppose that Ω ⊂ R d , d = 2, 3 is a bounded Lipschitz domain and let ( B , u B ) satisfy assumptions (2. 1). Assume that the inflow portion of the boundary Γ in is a C 2 open (d-1)-dimensional manifold. Suppose further that δ in L p ((0, T ) × Ω), p > 1, u δ u in L r ((0, T ) × Ω; R d ), ∇u δ ∇u in L r ((0, T ) × Ω; R d 2 ), r > 1. (3.12) and that osc q [ δ ]((0, T ) × Ω) < ∞ (3.13) for 1 q < 1 -1 r , where ( δ , u δ ) solve the renormalized continuity equation (2.13) (with any b ∈ C 1 [0, ∞) and b having compact support). Then the limit functions , u solve again the renormalized continuity equation (2.13) for any b belonging to the same class. Remark 3.3. Let {Ω n } ∞ n=1 , Ω n ⊂ Ω n+1 ⊂ Ω, Ω n ⊂ Ω ∪ Γ in be a family of domains satisfying condition: For all compact K of Ω ∪ Γ in there exists n ∈ N * such that K ⊂ Ω n . Then one can replace in Lemma 3.2 assumption (3.12) by a slightly weaker assumption ∀n ∈ N * , δ in L p ((0, T ) × Ω n ), where ∈ L p (Ω), p > 1 ∀n ∈ N * , u δ u in L r ((0, T ) × Ω n ; R d ), where u ∈ L r (Ω; R d ), ∀n ∈ N * , ∇u δ ∇u in L r ((0, T ) × Ω n ; R d 2 ), where ∇u ∈ L r (Ω; R d 2 ), r > 1. This observation (which is seen from a brief inspection of the proof hereafter) is not needed in the present paper but may be of interest whenever one deals with the stability of weak solutions with respect to perturbations of the boundary in the case of non homogenous boundary data. Proof of Lemma 3.2 The proof of the lemma follows closely with only minor modifications the similar proof when boundary velocity is zero, see [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF]. During the process of the proof one shall need Lemma 3.1. This is the only moment when the requirement on the regularity of Γ in is needed. We present however here the entire proof for the sake of completeness. Renormalized continuity eqution (2.13) with b = T k reads Ω T k ( δ )ϕ(τ, x) dx - Ω T k ( 0 )ϕ(0, x) dx = (3.14) τ 0 Ω T k ( δ )∂ t ϕ + T k ( δ )u δ • ∇ x ϕ -ϕ (T k ( δ ) δ -T k ( δ )) div x u δ dxdt - τ 0 Γ in T k ( B )u B • nϕ dS x dt for any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )). Passing to the limit δ → 0 in (3.14), we get Ω T k ( )ϕ(τ, x) dx - Ω T k ( 0 )ϕ(0, x) dx = τ 0 Ω T k ( )∂ t ϕ + T k ( )u • ∇ x ϕ -ϕ(T k ( ) -T k ( ))div x u dxdt - τ 0 Γ in T k ( B )u B • nϕ dS x dt for any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )). 
Since for fixed k > 0, T k ( ) ∈ L ∞ ((0, T ) × R d ) , we can employ Lemma 3.1 (with a slight obvious modification which takes into account the non zero right hand side (T k ( ) -T k ( ))div x u) in order to infer that Ω b M (T k ( ))ϕ(τ, x)dx - Ω b M ( 0 )ϕ(0, x)dx = (3.15) τ 0 Ω b M (T k ( ))∂ t ϕ + b M (T k ( ))u • ∇ x ϕ -ϕ b M (T k ( ))T k ( ) -b M (T k ( )) div x u dxdt + τ 0 Ω ( T k ( ) -)div x u b M (T k ( )) dxdt - τ 0 Γ in b M (T k ( B ))u B • nϕ dS x dt holds with any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )) and any b M ∈ C 1 [0, ∞) with b M having a compact support in [0, M ). Seeing that by lower weak semi-continuity of L 1 norms, T k ( ) → in L 1 ((0, T ) × Ω) as k → ∞ , we obtain from equation (3.15) by using the Lebesgue dominated convergence theorem Ω b M ( )ϕ(τ, x)dx - Ω b M ( 0 )ϕ(0, x)dx = (3.16) τ 0 Ω b M ( )∂ t ϕ + b M ( )u • ∇ x ϕ -ϕ (b M ( ) -b M ( )) div x u dxdt - τ 0 Γ in b M ( B )u B • nϕ dS x dt with any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )) as k → ∞, provided we show that ( T k ( ) -)div x u)b M (T k ( )) L 1 ((0,T )×Ω) → 0 as k → ∞. (3.17) To show the latter relation we use lower weak semicontinuity of L 1 norm, Hölder's inequality, uniform bound of u δ in L r (0, T ; W 1,r (Ω)) and interpolation of L r between Lebesgue spaces L 1 and L q to get ( T k ( ) -)div x u)b M (T k ( )) L 1 ((0,T )×Ω) ≤ max z∈[0,M ] |b M (z)| {T k ( )≤M } |( T k ( ) -)div x u)|dxdt ≤ c sup δ>0 δ T k ( δ ) -δ ) q(r-1)-r r(q-1) L 1 ((0,T )×Ω) lim inf δ→0 δ T k ( δ ) -δ ) q r(q-1) L q ({T k ( )≤M }) . We have δ T k ( δ ) -δ ) L 1 ((0,T )×Ω) ≤ 2sup δ>0 δ L 1 ({ δ ≥k}) → 0 as k → ∞ by virtue of the uniform bound of δ in L p ((0, T ) × Ω) (in the above we have also used algebraic relation zT k (z) -T k (z) ≤ 2z1 {z≥k} ), while δ T k ( δ ) -δ ) L q ({T k ( )≤M }) ≤ 2 T k ( δ ) L 1 ({T k ( )≤M }) ≤ 2 T k ( δ ) -T k ( ) L q ((0,T )×Ω) + T k ( ) -T k ( ) L q ((0,T )×Ω) + T k ( ) L q ({T k ( )≤M }) , Approximate problem Our goal is to construct solutions the existence of which is claimed in Theorem 2.4. To this end, we adopt the approximation scheme based on pressure regularization (small parameter δ > 0) adding artificial viscosity terms to both (1.1) and (1.2) (small parameter ε > 0). This is so far standard procedure. Moreover, mostly for technical reasons, we regularize the momentum equation by adding a convenient dissipative monotone operator with small parameter ε > 0. (This step is not needed when treating zero boundary data, but seems to be necessary if the boundary velocity is non zero.) In sharp contrast with [START_REF] Girinon | Navier-Stokes equations with nonhomogeneous boundary conditions in a bounded three-dimensional domain[END_REF], we consider for the new system a boundary value problem on Ω with the non homogenous boundary conditions for velocity and a convenient nonlinear Neumann type boundary condition for density. The approximated problem is determined in Section 4.1 and the crucial theorem on its solvability and estimates is formulated in Section 4.2 (see Lemma 4.1). Lemma 4.1 is proved by a standard combination of a fixed point and Galerkin method, see Section 4.3. After this process we have at our disposal a convenient solution of the approximate problem. Once the sequence of approximated solutions is available the limit process has to be effectuated in order 1. ε → 0 (see Section 5), 2. δ → 0 (see Section 6). 
Approximating system of equations The approximate problem reads: ∂ t -ε∆ x + div x ( u) = 0, (4.1) (0, x) = 0 (x), (-ε∇ x + u) • n| I×∂Ω = B u B • n if [u B • n](x) ≤ 0, x ∈ ∂Ω, u B • n if [u B • n](x) > 0, x ∈ ∂Ω (4.2) ∂ t ( u) + div x ( u ⊗ u) + ∇ x p δ ( ) = div x S(∇ x u) -ε∇ x • ∇ x u + εdiv |∇ x (u -u ∞ )| 2 ∇ x (u -u ∞ ) (4.3) u(0, x) = u 0 (x), u| I×∂Ω = u B , (4.4) with positive parameters ε > 0, δ > 0, where we have denoted p δ ( ) = p( ) + δ β , β > max{γ, 9/2} (4.5) and where u ∞ is an extension of u B from Lemma 2.2. The exact choice of β is irrelevant from the point of view of the final result provided it is sufficiently large. It is guided by convenience in proofs; it might not be optimal. Anticipating the future development, we denote: H δ ( ) = H( ) + δH (β) ( ), H δ ( ) = H( ) + δH (β) ( ), H (β) ( ) = 1 z β-2 dz = 1 β -1 β (4.6) and E δ ( |r) = E( |r) + δE (β) ( |r), E δ ( |r) = E( |r) + δE (β) ( |r), (4.7) E (β) ( |r) = H (β) ( ) -[H (β) ] (r)( -r) -H (β) (r). Generalized solutions of the approximate problem Definition 4.1 A couple ( ε , u ε ) and associated tensor field Z ε is a generalized solution of the sequence of problems (4.1-4.4) ε>0 iff the following holds: 1. It belongs to the functional spaces: ε ∈ L ∞ (0, T ; L β (Ω)) ∩ L 2 (0, T ; W 1,2 (Ω)), 0 ≤ ε a.a. in (0, T ) × Ω, (4.8) u ε ∈ L 2 (0, T ; W 1,2 (Ω; R 3 )) ∩ L 4 (0, T ; W 1,4 (Ω; R 3 )), u ε | I×∂Ω = u B , Z ε → 0 in L 4/3 (Q T ; R 3 ) as ε → 0. 2. Function ε ∈ C weak ([0, T ], L β (Ω) ) and the integral identity Ω ε (τ, x)ϕ(τ, x) dx - Ω 0 (x)ϕ(0, x) dx = (4.9) τ 0 Ω ε ∂ t ϕ + ε u ε • ∇ x ϕ -ε∇ x ε • ∇ x ϕ dxdt - τ 0 Γ in B u B • nϕ dS x dt holds for any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )); 3. Function ε u ε ∈ C weak ([0, T ], L 2β β+1 (Ω; R 3 )) , and the integral identity Ω ε u ε (τ, •) • ϕ(τ, •) dx - Ω 0 u 0 (•)ϕ(0, •) dx = - τ 0 Ω Z ε : ∇ x ϕ dxdt (4.10) + τ 0 Ω ( ε u ε ∂ t ϕ + ε u ε ⊗ u ε : ∇ x ϕ + p δ ( ε )div x ϕ -ε∇ x ε • ∇ x u ε • ϕ -S(∇ x u ε ) : ∇ x ϕ) dxdt holds for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × Ω; R 3 ). Energy inequality Ω 1 2 ε |u ε -u ∞ | 2 + H δ ( ε ) + Σ 2 2 ε (τ ) dx (4.11) +δ β -1 2 τ 0 Γ in β ε |u B • n|dS x dt + δ 1 β -1 τ 0 Γout β ε |u B • n|dS x dt + Σ 2 τ 0 Γ in 2 ε |u B • n|dS x dt + τ 0 Ω S(∇ x (u ε -u ∞ )) : ∇ x (u ε -u ∞ ) + εH δ ( ε )|∇ x ε | 2 + εΣ|∇ x ε | 2 +ε|∇ x (u ε -u ∞ )| 4 dxdt ≤ Ω 1 2 0 |u 0 -u ∞ | 2 + H δ ( 0 ) + Σ 2 2 0 dx -E -δB τ 0 Γ in |u B • n|dS x dt -(H -δA) τ 0 Γout |u B • n|dS x dt - τ 0 Γ in H δ ( B )u B • ndS x dt + Σ τ 0 Γ in ε B |u B • n|dS x dt - τ 0 Ω p δ ( ε )divu ∞ dxdt - τ 0 Ω ε u ε • ∇ x u ∞ • (u ε -u ∞ ) dxdt +ε τ 0 Ω ∇ x ε • ∇ x (u ε -u ∞ ) • u ∞ dxdt - τ 0 Ω S(∇ x u ∞ ) : ∇ x (u ε -u ∞ ) dxdt - Σ 2 τ 0 Ω 2 ε divu ε dxdt, A = 1 β -1 , B = 2 β-1 β -1 β B holds for a.a. τ ∈ (0, T ), any Σ > 0 with some continuous extension u ∞ of u B in the class (2.11). In the above, H δ is defined in (4.6), H is defined in (2.10) and -∞ < E := inf >0, B <r< B E(r| ), where E(•|•) is defined in (2.4). (4.12) The main achievement of this section is existence theorem for approximating problem (4.1-4.4). It is announced in the next lemma. u 0 ∈ L 2 (Ω), 0 ∈ W 1,2 (Ω), 0 < ≤ 0 ≤ < ∞. 
(4.13) 0 < B ≤ B ≤ B < ∞ (4.14) Then for any continuous extension u ∞ of u B in class (2.11) there exists a generalized solution ( , u ) and Z ε to the sequence of approximate problems (4.1 -4.4) ε∈(0,1) -which belongs to functional spaces (4.8), satisfies weak formulations (4.9-4.10) and verifies energy inequality (4.11) -with the following extra properties: (i) In addition to (4.8) it belongs to functional spaces: ε ∈ L 5 3 β (Q T ), √ ε , β 2 ∈ L 2 (I, W 1,2 (Ω)), ∂ t ∈ L 4/3 (Q T ), ∇ 2 ∈ L 4/3 (Q T ). ( 4.15) (ii) In addition to the weak formulation (4.9), the couple ( ε , u ε ) satisfies equation (4.9) in the strong sense, meaning it verifies equation (4.1) with ( ε , u ε ) a.e. in Q T , boundary identity (4.2) with ( ε , u ε ) a.e. in (0, T ) × ∂Ω and initial conditions in the sense lim t→0+ ε (t) -0 L 4/3 (Ω) = 0. (iii) The couple ( ε , u ε ) satisfies identity ∂ t b( ε ) + εb ( ε )|∇ x ε | 2 -εdiv x (b ( ε )∇ x ε ) + div x (b( ε )u ε ) + [b ( ε ) ε -b( ε )] div x u ε = 0 (4.16) a.e. in (0, T ) × Ω with any b ∈ C 2 [0, ∞), where the space-time derivatives have to be understood in the sense a.e. Remark 4.2. Identity (4.16) holds in the weak sense Ω b( ε (τ ))ϕ(τ ) dx - Ω b( 0 )ϕ(0) dx = τ 0 ∂Ω εb ( ε )∇ x ε -b( ε )u ε • ndS x dt + τ 0 Ω b( ε )∂ t ϕ + (b( ε )u ε -εb ( ε )∇ x ε ) • ∇ x ϕ -ϕ εb ( ε )|∇ x ε | 2 + ( ε b ( ε ) -b( ε ))divu ε dxdt with any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × Ω) with any b whose growth (and that one of its derivatives) in combination with (4.15) guarantees b( ε ) ∈ C weak ([0, T ]; L 1 (Ω)), existence of traces and integrability of all terms appearing at the r.h.s. Solvability of the approximating equations This section is devoted to the proof of Lemma 4.1. We adopt the nowadays standard procedure based on computing the approximate density in terms of u in (4.1), (4.2), calculating u via a Galerkin approximation, and applying a fixed point argument, see [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF]Chapter 7]. We proceed in several steps. These steps are described in the following subsections. Construction of a (strong) solution of problem (4.1-4.2) Here we consider the problem (4.1-4.2), with fixed ε > 0 and with fixed sufficiently regular u. We may suppose without loss of generality that u B • n on Γ in , 0 on ∂Ω \ Γ in ≡ v ∈ C 1 (∂Ω), B v ≡ g ∈ C 1 (∂Ω). (4.17) Now, problem (4.1-4.2) may be rewritten as parabolic problem: ∂ t -ε∆ + div( u) = 0 in (0, T ) × Ω, (4.18) -ε∇ • n + v = g in (0, T ) × ∂Ω, (0) = 0 in Ω. Applying to problem (4.18) the maximal regularity theory for parabolic systems, we get in particular: Lemma 4.3. Suppose that Ω is a bounded domain of class C 2 and assume further that 0 ∈ W 1,2 (Ω), u ∈ L ∞ (0, T ; W 1,∞ (Ω)), u| (0,T )×∂Ω = u B , v, g ∈ C 1 (∂Ω) are given by (4.17). Then we have: 1. The parabolic problem (4.18) admits a unique solution in the class 3. In the sequel we denote = S 0 , B ,u B (u) ≡ S(u). Then ∈ L 2 (0, T ; W 2,2 (Ω)) ∩ W 1,2 (0, T ; L 2 (Ω)). (4.19) The following estimates hold: There is c = c(T, ε, |Ω|, |∂Ω| 2 , K) > 0 (independent of u, 0 , B , u B ) such that (τ ) 2 L 2 (Ω) ≤ c 0 2 L 2 (Ω) + τ 0 ∂Ω 2 B |v|dS x dt , (4.20) τ 0 ∇ x 2 L 2 (Ω) dt + τ 0 ∂Ω 2 |v|dSdt ≤ c 0 2 L 2 (Ω) + τ 0 ∂Ω 2 B |v|dSdt , for all τ ∈ [0, T ], provided u L ∞ (Q T ) + divu L ∞ (Q T ) ≤ K. (4.21) 2. Let ≤ 0 (x) ≤ for a. a. x ∈ Ω, ≤ B (x) ≤ for all x ∈ Γ in . 
Then exp - τ 0 divu(s) L ∞ (Ω) ds ≤ (τ, x) ≤ exp τ 0 divu(s) L ∞ (Ω) ds , (4.22 [S 0 , B ,u B (u 1 ) -S 0 , B ,u B (u 2 )](τ ) L 2 (Ω) ≤ Γ u 1 -u 2 L ∞ (0,τ ;W 1,∞ (Ω)) (4.23) with some number ). The reader wishing to read more about the maximal regularity to parabolic equations is referred to the monograph [START_REF] Denk | Fourier multipliers and problems of elliptic and parabolic type[END_REF]. Γ = Γ(T, ε, |Ω|, |∂Ω| 2 , K, 0 L 2 (Ω) , 2 B v L 1 (0;T ;L 1 (∂Ω)) ) > 0, provided both u 1 , u 2 verify (4. Proof of statement 2. Estimates (4.20) is obtained testing (4.18) by and using first the Hölder and Young inequalities and next the Gronwall inequality. Indeed, this testing gives 1 2 Ω 2 (τ ) dx - τ 0 ∂Ω 2 vdS x dt + ε τ 0 Ω |∇ x | 2 dxdt = 1 2 Ω 2 0 dx - τ 0 ∂Ω B vdS x dt - τ 0 Ω ∇ x • u + 2 divu dxdt, where 1 2 τ 0 ∂Ω B vdS x dt ≤ 1 2 τ 0 ∂Ω 2 |v|dS x dt + 1 2 τ 0 ∂Ω 2 B |v|dS x dt and τ 0 Ω ∇ x •u + 2 divu dxdt ≤ ε 2 τ 0 Ω |∇ x | 2 dxdt+ τ 0 1 ε u 2 L ∞ (Ω) + divu L ∞ (Ω) Ω 2 dxdt; whence application of the standard Gronwall inequality yields (τ ) 2 L 2 (Ω) ≤ 0 2 L 2 (Ω) + τ 0 ∂Ω 2 B |v|dS x dt exp (K + 1 ε K 2 )τ which is first inequality in (4.20). Once this inequality is known, the second inequality is immediate. Proof of statement 3. We shall proceed to the proof of upper bound (4.22). To this end we define R(t) = exp t 0 divu(s) L ∞ (Ω) ds, i.e ∂ t R + div(Ru) ≥ 0, R(0) = . We further set ω(t, x) = (t, x) -R(t), so that ω satisfies ∂ t ω -ε∆ω + div(ωu) ≤ 0, in (0, T ) × Ω, -ε∂ω • n + ωv = ( B -R)v in (0, T ) × ∂Ω. Testing the latter inequality by ω + (the positive part of ω) we get while reasoning in the same way as in the proof of estimate (4.20), 1 2 Ω |ω + | 2 (τ ) dx + τ 0 ∂Ω |ω + | 2 |v|dS x dt + ε τ 0 Ω |∇ x ω + | 2 dxdt ≤ 1 2 Ω |ω + | 2 (0) dx + τ 0 ∂Ω ω + ( B -R)|v|dS x dt - τ 0 Ω ∇ x ω + • uω + + |ω + | 2 divu dxdt. Now we employ the fact that the first term at the right hand side is zero, while the second term is non positive, and handle the last term as in the proof of (4.20) to finally get ω + (τ ) L 2 (Ω) = 0 which yields the upper bound in (4.22). To derive the lower bound we shall repeat the same procedure with R(t) = exp - t 0 divu(s) L ∞ (Ω) ds and ω(t, x) = R(t) -(t, x). Proof of statement 4. The difference η ≡ 1 -2 , where i = S 0 , B ,u B (u i ), verifies equation: ∂ t η -ε∆η = F a.e. in Q T , -ε∇ x η • n + ηv = 0 in (0, T ) × ∂Ω with zero initial data, where F = -1 div(u 1 -u 2 ) -∇ x 1 • (u 1 -u 2 ) -ηdivu 2 -∇ x η • u 2 , | Ω F η dx| ≤ 1 2 u 1 -u 2 2 W 1,∞ (Ω) + 1 2 1 2 W 1,2 (Ω) + K + 1 ε K 2 η 2 L 2 (Ω) + ε 2 ∇ x η 2 L 2 (Ω) . Therefore, by the same token as before, after testing the above equation by η, we get η(τ ) 2 L 2 (Ω) + ε 2 τ 0 ∇ x η 2 L 2 (Ω) dt + τ 0 ∂Ω η 2 |v|dS x dt ≤ τ 2 u 1 -u 2 2 W 1,∞ (Qτ ) + τ 0 1 2 1 2 W 1,2 (Ω) + K + 1 ε K 2 η 2 L 2 (Ω) dt; whence Gronwall lemma and bounds (4.20) yield the desired result (4.23). Galerkin approximation for approximate problem (4.1-4.4) We start by introducing notations and gathering some preliminary material: 1. We denote X = span{Φ i } N i=1 where B := {Φ i ∈ C ∞ c (Ω) | i ∈ N * } is an orthonormal basis in L 2 (Ω; R 3 ) (4. 24) a finite dimensional real Hilbert space with scalar product (•, •) X induced by the scalar product in L 2 (Ω; R 3 ) and • X the norm induced by this scalar product. We denote by P N the orthogonal projection of L 2 (Ω; R 3 ) to X. Since X is finite-dimensional, norms on X induced by W k,p -norms, k ∈ N , 1 ≤ p ≤ ∞ are equivalent. 
In particular, there are universal numbers (depending solely on the dimension N of X) 0 < d < d < ∞ such that d v W 1,∞ (Ω) ≤ v X ≡ v L 2 (Ω) ≤ d v W 1,∞ (Ω) , for all v ∈ X. (4.25) 2. Let g ∈ L 1 (Q T ), infess (t,x)∈Q T g ≥ a > 0. We define for a.a. t ∈ (0, T ), M g(t) ∈ L(X, X), < M g(t) Φ, Ψ > X := Ω g(t, x)ΦΨ dx (4.26) With this definition at hand, we easily see that there holds for a.a. t ∈ (0, T ), M g(t) L(X,X) ≤ δ Ω g(t, x) dx, < M g(t) Φ, Φ > X ≥ a Φ 2 X , (4.27) M -1 g(t) (t) ∈ L(X, X), M -1 g(t) (t) L(X,X) ≤ δ a , M g 1 (t) -M g 2 (t) L(X,X) ≤ δ g 1 (t) -g 2 (t) L 1 (Ω) , (4.28) M -1 g 1 (t) -M -1 g 2 (t) L(X,X) ≤ δ a 2 g 1 (t) -g 2 (t) L 1 (Ω) . In the above formulas δ is a positive universal number dependent solely of N . Moreover, if in addition g ∈ C([0, T ]; L 1 (Ω)) then M g(•) , M -1 g(•) ∈ C([0, T ]; L(X, X)). (4.29) Finally, if in addition ∂ t g ∈ L 1 (Q T ), then ∂ t M g(.) (t) = M ∂tg(t) ∈ L(X, X), ∂ t M -1 g(.) (t) = -M -1 g(t) M ∂tg(t) M -1 g(t) ∈ L(X, X) (4.30) for a.a. t ∈ (0, T ). We shall look for T ∈ (0, T ] and u N = u ∞ + v N , v N ∈ C([0, T ]; X), (4.31) satisfying Ω ∂ t ( N u N ) • Φ dx = Ω divS(∇ x u N )+εdiv |∇ x (u N -u ∞ )| 2 ∇ x (u N -u ∞ ) (4.32) -∇ x p δ ( N ) -div( N u N ⊗ u N ) -ε∇ x N • ∇ x u N dx, or equivalently, omitting index N at v N , u N and N Ω v(t) • Φ dx - Ω 0 v 0 Φ dx = t 0 Ω divS(∇ x u)+εdiv |∇ x (u -u ∞ )| 2 ∇ x (u -u ∞ ) (4.33) -∇ x p δ ( ) -div( u ⊗ u) -ε∇ x • ∇ x u-∂ t u ∞ • Φ dxdt, where v 0 = u 0 -u ∞ , (t) = S(u)(t), Φ ∈ X, t ∈ (0, T ) with S being defined in item 3. of Lemma 4.3. The latter equation is equivalent to the integral formulation v(t) = T(v) := M -1 S(u)(t) P ( 0 v 0 ) + M -1 S(u)(t) t 0 P N(S(u)(s), u(s)) ds , (4.34) where N( , u) = divS(∇ x u)+εdiv |∇ x (u -u ∞ )| 2 ∇ x (u -u ∞ ) -∇ x p δ ( ) -div( u ⊗ u) -ε∇ x • ∇ x u-∂ t u ∞ and where here and in the sequel, P states for P N . Clearly, N(S(u), u) ∈ L 2 (Q T ) due to (4.19), (4.22), (4.31). This implies that 1) the map t → P N(S(u)(t), u(t)) ∈ L 2 (0, T ; X), 2) the map t → t 0 P N(S(u)(s), u(s))ds ∈ C([0, T ]; X), 3) operator T maps C([0, T ]; X) to C([0, T ]; X), 4) we have a bound P N(S(u), u) X ≤ d 2 (K, , T ), provided u verifies (4.21). (4.35) 5) Likewise, using in addition (4.23), P N(S(u 1 ), u 1 ) -P N(S(u 2 ), u 2 ) X ≤ d 3 (K, , T ) u 1 -u 2 X , Tv 1 -Tv 2 C([0,t];X) ≤ t Γδd 2 ( ) 2 + δd 3 e Kt v 1 -v 2 C([0,t];X) provided u ∞ + v i verifies (4.21). Now we take K > 0 sufficiently large and T sufficiently small, so that K d 2 ≥ max δ e KT P v 0 X + d 2 (K, , T )T , d u ∞ W 1,∞ (Ω) and T Γδd 2 ( ) 2 + δd 3 e KT < 1 With this choice, we easily check employing (4.25) that u ∞ + v verifies condition (4.21), provided v C([0,T ];X ≤ K d 2 . We have thus showed that T is a contraction maping from the (closed) ball B(0; Kd/2) ⊂ C([0, T ], X) into itself. It therefore admits a unique fixed point v ∈ C([0, T ], X), and u = u ∞ + v solves problem (4.33). Denote now T = {T ∈ (0, T ) | problem (4. 33) admits a solution v ∈ C([0, T ]; X)}, and set T max = sup T. We have already proved that T is not empty. In what follows we prove that T max = T . In fact, if T max < T , then necessarily lim T →Tmax v C([0,T ];X) → ∞. (4.37) We shall show that (4.37) cannot happen. To this end we derive in the next section the uniform estimates. Uniform bounds independent of the Galerkin approximation We first integrate equation (4.1) ( N ,u N ) in order to obtain the conservation of total mass. 
Omitting subscript N in order to simplify notatio, we get Ω (τ ) dx + τ 0 Γout u B • ndS x dt = Ω 0 dx + τ 0 Γ in |u B • ∂ t H δ ( ) + εH δ ( )|∇ x | 2 -εdiv H δ ( )∇ x + div(H δ ( )u) + p δ ( )divu = 0 a.e. in Q τ , τ ∈ (0, T ), or, after using boundary conditions (4.18), ∂ t Ω H δ ( ) dx + ε Ω H δ ( )|∇ x | 2 dx + ∂Ω H δ ( )( B -)v + H δ ( )u B • n dS x = - Ω p δ ( )divu dx or further, after employing the definition (4.17) of v, ∂ t Ω H δ ( ) dx+ε Ω H δ ( )|∇ x | 2 dx+ Γ in H δ ( B )-H δ ( )( B -)-H δ ( ) |v|dS x + Γout H δ ( )u B •ndS x = - Ω p δ ( )divu dx - Γ in H δ ( B )u B • ndS x . (4.39) Next, we deduce from (4.32) τ 0 Ω ∂ t ( v) • v -u ⊗ u : ∇ x v dxdt + τ 0 Ω S(∇ x u) : ∇ x v dx +ε τ 0 Ω |∇ x v| 4 dxdt - τ 0 Ω p δ ( )divv dxdt + τ 0 Ω ε∇ x • ∇ x u • v dxdt = 0, where by virtue of (4.18) (after several integrations by parts and recalling that u = u ∞ + v), Ω ∂ t ( u) • v -u ⊗ u : ∇ x v dx = Ω ∂ t v 2 + 1 2 ∂ t v 2 + ∂ t u ∞ • v + 1 2 div( u)v 2 -u • ∇ x v • u ∞ dx = Ω 1 2 ∂ t ( v 2 ) + ε∆ u ∞ • v + ε 2 ∆ v 2 -div( u)u ∞ • v -u • ∇ x v • u ∞ dx = ∂ t Ω 1 2 v 2 dx + Ω u • ∇ x u ∞ • v -ε∇ x • ∇ x u • v -ε∇ x • ∇ x v • u ∞ dx. Consequently, Ω 1 2 v 2 + H δ ( ) (τ ) dx + τ 0 Γ in E δ ( B | )|u B • n|dS x dt + τ 0 Γout H δ ( )|u B • n|dS x dt (4.40) +ε τ 0 Ω |∇ x v| 4 dxdt + ε τ 0 Ω H δ ( )|∇ x | 2 dxdt + τ 0 Ω S(∇ x v) : ∇ x v dxdt ≤ Ω 1 2 0 v 2 0 + H δ ( 0 ) dx - τ 0 Γ in H δ ( B )u B • ndS x dt + τ 0 Ω -p δ ( )divu ∞ -S(∇ x u ∞ ) : ∇ x v -u • ∇ x u ∞ • v + ε∇ x • ∇ x v • u ∞ dxdt, where, recall, H δ , E δ are defined in (4.6), (4.7), respectively. Further, we test equation (4.1) ( N ,u N ) by N in order to get, after several integrations by parts, 1 2 Ω 2 (τ ) dx + 1 2 τ 0 ∂Ω 2 |u B • n|dS x dt + ε τ 0 Ω |∇ x | 2 dxdt (4.41) = 1 2 Ω 2 0 dx + τ 0 Γ in B |u B • n|dS x dt - 1 2 τ 0 Ω 2 divudxdt where as in (4.40) we have omitted indexes N . Now, our goal is to derive from (4.40) and (4.41) useful bounds for the sequence ( N , u N ). The coefficients of derived bounds depend tacitly on the parameters of the problem (as γ, β, µ, λ, a, b, T, Ω, functions p, p) and on "data", where "data" stands for Ω 1 2 0 v 2 0 + H δ ( 0 ) dx, u ∞ W 1,∞ (Ω) , B , B ≡ B C(∂Ω) , H, E. In particular, they are always independent N . If they depend on ε or δ this dependence is also always indicated in their argument as well as the dependence on other quantities (notably T ) if it is necessary for the understanding of proofs. The coefficients may take different values even in the same formulas. Before attacking estimates we shall list several consequences of structural assumptions (2.2), (4.5) and formulas (2.3), (2.4), (4.6), (4.7) needed for the derivation of those bounds. A brief excursion to (2.3) and (4.6) yields -∞ < - δ β -1 + H ≤ H δ . ( 4 E (β) (r| ) ≥ 1 2 β - 2 β-1 β -1 r β , where E (β) is defined in (4.7). Finally, using namely conditions p (0) = 0 (cf. last line in formula (2.2)), we get E := inf >0; B <r< B E(r| ) > -∞ Putting together these three observations we infer (see (4.6-4.7) for the notation), E δ ( B | ) ≥ δ 2 β + E -δ 2 β-1 β -1 β B . (4.43) 3. Recalling again definition (4.7) of E δ , E δ we find identity H δ ( ) = E δ ( | ) + D δ ( , ), (4.44) where D δ ( , ) = H δ ( )( -) + H δ ( ) + H( ) , = 1 |Ω| Ω 0 dx. Thanks to (4.38), sup t∈(0,T ) Ω |D( , )| dx ≤ c(data), (4.45) where we have also employed regularity of p near zero to show that the map → H( ) is bounded near 0. 4. 
We have E δ ( |r) = E( |r) + δE (β) ( |r), where, due to convexity of H and H (β) on (0, ∞), E δ enjoys the following coercivity property: There is c = c( ) > 0, such that E δ ( | ) ≥ c δ β 1 Ores ( ) + 1 Ores ( ) (4.46) +1 Oess ( )( -) 2 + γ 1 Ores ( ) + 1 Ores ( ) + 1 Oess ( )( -) 2 for all ≥ 0, where O ess = ( 1 2 , 2 ) while O res = [0, ∞) \ O ess , and where we have used the growth condition for p from (2.2). 5. We deduce from the Korn and Poincaré inequalities, v 2 W 1,2 (Ω) ≤ c S(∇ x v) : ∇ x v L 1 (Ω) . ( 4 .47) 6. There holds H δ = H + δH (β) + H, where H ≥ 0, [H (β) ] ( ) = β β-2 (4.48) T 0 Ω |H ( )||∇ x | 2 dxdt ≤ sup >0 p ( ) T 0 Ω |∇ x | 2 dxdt, (4.49) where sup >0 p ( ) < ∞ namely thanks to assumption p (0) = 0 (cf. again last line in formula (2.2)). 7. The absolute value of the right hand side of inequality (4.40) is bounded by 1 + α α c(data, δ) τ 0 Ω E δ ( | ) + 1 2 v 2 dxdt + αε ∇ x 2 L 2 (Qτ ) (4.50) + α + ε c(data, T ) α ∇ x v 2 L 2 (Qτ ) + c(data) α , with arbitrary α > 0, where we have used several times the Hölder and Young inequalities, and coercivity (4.46) of E δ ( | ). 8. By the same token, the absolute value of the right hand side of equality (4.41) is bounded by 1 δ 1 + α α c(data, T ) + c 1 + α α 1 δ τ 0 Ω E δ ( | ) dxdt (4.51) +αδ τ 0 Γ in β |u B • n|dS x dt + α ∇ x v 2 L 2 (Qτ ) with arbitrary α > 0. Next, we multiply equation (4.41) by a positive number Σ and add it to inequality (4.40). With notably (4.42-4.43) at hand, we deduce from this operation the following inequality which will be our departure point: Ω 1 2 v 2 + H δ ( ) + Σ 2 2 (τ ) dx (4.52) +δ β -1 2 τ 0 Γ in β |u B • n|dS x dt + δ 1 β -1 τ 0 Γout β |u B • n|dS x dt + Σ 2 τ 0 ∂Ω 2 |u B • n|dS x dt + τ 0 Ω S(∇ x v) : ∇ x v + εH δ ( )|∇ x | 2 + εΣ|∇ x ε | 2 + ε|∇ x v| 4 dxdt ≤ Ω 1 2 0 v 2 0 + H δ ( 0 ) + Σ 2 2 0 dx -E -δB τ 0 Γ in |u B • n|dS x dt -(H -δA) τ 0 Γout |u B • n|dS x dt - τ 0 Γ in H δ ( B )u B • ndS x dt + Σ τ 0 Γ in B |u B • n|dS x dt - τ 0 Ω p δ ( )divu ∞ dxdt - τ 0 Ω u • ∇ x u ∞ • v dxdt +ε τ 0 Ω ∇ x ε • ∇ x v • u ∞ dxdt - τ 0 Ω S(∇ x u ∞ ) : ∇ x v dxdt - Σ 2 t∈(0,T ) Ω v 2 (t) dx ≤ K(data, T, δ) (4.53) v L 2 (0,T ;W 1,2 (Ω)) ≤ K(data, T, δ), (4.54) sup t∈(0,T ) Ω E δ ( | )|(t) dx ≤ L(data, T, δ), (4.55) ε T 0 Ω H δ ( )|∇ x | 2 dx ≤ L(data, T, δ), (4.56) ε T 0 Ω |∇ x | 2 dx ≤ L(data, T, δ), (4.57) ≥ exp - T 0 ( u ∞ (s) W 1,∞ (Ω) + c v(s) W 1,2 (Ω) )ds ≥ K 1 ( , T, data). Coming back to (4.53), and using v L 2 (Ω) ≥ d v W 1,∞ (Ω) , we finally obtain v C([0,T ];W 1,∞ (Ω)) ≤ 1 d K K 1 for any T < T max . This contradicts (4.37). We have thus proved that T max = T . From ( N = S(u N ), u N = u ∞ + v N ) of Galerkin solutions to the problem (4.33): N |u N | 2 L ∞ (I,L 1 (Ω)) ≤ L(data, δ), (4.60) u N L 2 (I,W 1,2 (Ω)) ≤ L(data, δ), (4.61) N L ∞ (I,L β (Ω)) ≤ L(data, δ), (4.62) ε ∇ N 2 L 2 (Q T ) + ε ∇( β/2 N ) 2 L 2 (Q T ) ≤ L(data, δ), (4.63) |u B • n| 1/β L β ((0,T )×∂Ω) ≤ L(data, δ), (4.64) ε u N -u ∞ 4 L 4 (0,T ;W 1,4 (Ω)) ≤ L(data, δ). (4.65) From these bounds we find by Hölder inequalities, Sobolev embeddings and interpolation N u N L ∞ (I,L 2β β+1 (Ω)) + N u N L 2 (I,L 6β β+6 (Ω)) ≤ L(data, δ), (4.66) N |u N | 2 L 2 (I,L 6β 4β+3 (Ω)) ≤ L(data, δ), (4.67) N L 5 3 β (Q T ) ≤ L(data, δ, ε), (4.68) div( N u N ) L 4/3 (Q T ) ≤ L(data, δ, ε). ( 4 ∂ t N L 4/3 (Q T ) + N L 4/3 (0,T ;W 2,4/3 (Ω)) ≤ L(data, δ, ε). 
(4.70) The above bounds imply, via several classical convergence theorems, existence of a chosen subsequence (not relabeled) whose limits and way of convergence will be specified in the following text. We deduce from (4.70), N in L 4/3 (0, T ; W 2,4/3 (Ω)) and in L 2 (0, T ; W 1,2 (Ω)), ∂ t N ∂ t in L 4/3 (Q T ) (4.71) and also in addition with help of (4.62-4.63) by Lions-Aubin Lemma, N → in L 2 (Q T ), ∇ x N → ∇ x in L 4/3 (Q T ); whence, in particular, N → a.e. in Q T and in L p (Q T ), 1 ≤ p < 5 3 β, (4.72) ∇ x N → ∇ x a.e. in Q T and in L p (Q T ), 1 ≤ p < 2, (4.73) where we have used (4.68) and (4.63). Consequently, in particular p δ ( N ), H δ ( N ) → p δ ( ), H δ ( ) in L p/β (Q T ), β < p < 5 3 β. (4.74) Further, due to (4.63), trace theorem and (4.64) N in L 2 ((0, T ) × ∂Ω), N |u B • n| 1/β |u B • n| 1/β in L β ((0, T ) × ∂Ω). (4.75) Next, we derive from (4.18) written with ( N , u N ) that the sequences of functions t → Ω N (t)ϕ dx are for any ϕ ∈ C 1 c (Ω) uniformly bounded and equi-continuous in C[0, T ]; whence the Arzela-Ascoli theorem in combination with the separability of L β (Ω) furnishes, N → in C weak ([0, T ], L β (Ω)). (4.76) Estimate (4.65) yields u N u (weakly) in L 4 (0, T ; W 1,4 (Ω)), (4.77) and in combination with (4.72) N u N u e.g. in L 2 (0, T ; L 6β β+6 (Ω)) (4.78) and finally, together with (4.69), div( N u N ) div( u) in L 4/3 Q T ). (4.79) Estimate (4.65) furnishes further ε|∇ x (u N -u ∞ )| 2 ∇ x (u N -u ∞ ) Z ≡ Z ε weakly in L 4/3 (Q T ; R 9 ), (4.80) where Z ε L 4/3 (Q T ) → 0 as ε → 0. Second convergence in (4.73) and (4.77) yield ∇ x N • ∇ x u N ∇ x • ∇ x u in L 4/3 (Q T ). (4.81) Returning with estimates (4.60-4.67) and with (4.81) to (4.33), we infer that the sequences of functions t → N u N (t)Φ i are for any Φ i ∈ B uniformly bounded and equi-continuous in C[0, T ]. We may thus combine Arzela-Ascoli theorem with the fact that the linear hull of B is dense in L 2β β-1 (Ω) to deduce that N u N → u in C weak ([0, T ]; L 2β β+1 (Ω)), (4.82) where we have used the second convergence in (4.77) in order to identify the limit. Seeing the compact imbedding L 2β β+1 (Ω) → → W -1,2 (Ω), we deduce N u N (t) → u(t) (strongly) in W -1,2 (Ω) for all t ∈ [0, T ]. This implies, in particular, N u N → u in L 2 (0, T ; W -1,2 (Ω)). Combining weak convergence (4.77) with just obtained strong convergence of N u N , and with estimate (4.67) we get N u N ⊗ u N u ⊗ u in L 2 (I, L 6β 4β+3 (Ω)). (4.83) Relations (4.72-4.83) guarantee the belonging of ( , u) to class (4.8) while (4.63), (4.68) and (4.70) guarantee additional regularity (4.15). Relation (4.71) guarantees that equation (4.9) is satisfied in the strong sense (4.1). Equation (4.1) ε,uε tested by e yields inequality (4.41) by the same manipulations as presented during the derivation of inequality (4.41). Equation (4.16) is obtained by multiplying (4.1) by b ( ). The limits (4.72-4.78), (4.81-4.83) employed in (4.32) lead to equation (4.10). It remains to pass to the limit from the inequality (4.52) ( N ,u N ) to inequality (4.11). To this end, we use at the left hand side the lower weak semi-continuity of norms and convex functionals. The right hand side converges to its due limit (the same expression with ( , v)) due to (4.72), (4.77), (4.81), (4.83) and (4.75). We postpone the details of the limit passage in the energy inequality to the next Section, where similar reasoning will be employed. We have thus established Lemma 4.1. 
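For orientation, the compactness tool invoked above under the name Lions-Aubin Lemma is recalled here in its standard (Aubin-Lions-Simon) form; this is only a reminder, and the precise variant used in the proof is the one from the cited monographs.
$$
\left.
\begin{array}{l}
X \hookrightarrow\hookrightarrow B \hookrightarrow Y \ \text{Banach spaces (compact, resp. continuous, embeddings)},\\[2pt]
(f_n) \ \text{bounded in} \ L^{p}(0,T;X), \ 1 \le p < \infty,\\[2pt]
(\partial_t f_n) \ \text{bounded in} \ L^{1}(0,T;Y)
\end{array}
\right\}
\ \Longrightarrow \
(f_n) \ \text{relatively compact in} \ L^{p}(0,T;B).
$$
This is the mechanism behind the strong convergences (4.72)-(4.73) of the density sequence, obtained from the bounds (4.62)-(4.63) together with (4.70).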
Limit ε → 0 The aim in this section is to pass to the limit in the weak formulation (4.9-4.11) of the problem (4.1-4.4) ( ε,uε) in order to recover the weak formulation of problem (1.1-1.6) written with p δ , H δ instead of p, H (cf. (4.8-4.11)). We expect that there is a (weak) limit ( , u) of a conveniently chosen subsequence ( ε , u ε ), that represents a weak solution of problem (1.1-1.6) (p=p δ ,H=H δ ) (cf. (2.5-2.9)). More exactly, we want to prove the following lemma: Lemma 5.1. Under assumptions of Lemma 4.1, there exists a subsequence ( ε , u ε ) (not relabeled) and a couple ( , u) such that ε * (weakly - * ) in L ∞ (0, T ; L β (Ω)), u ε u ∈ L 2 (0, T ; W 1,2 (Ω; R 3 )) (5.1) 0 ≤ a.a. in (0, T ) × Ω, u| (0,T )×∂Ω = u B , satisfying: 1. Function ∈ C weak ([0, T ], L β (Ω) ) and the integral identity Ω (τ, •)ϕ(τ, •)dx - Ω 0 (•)ϕ(0, •)dx (5.2) = τ 0 Ω ∂ t ϕ + u • ∇ x ϕ dxdt - τ 0 Γ in B u B • nϕ dS x dt holds for any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )); 2. The renormalized continuity equations holds: Ω b( )ϕ(τ )dx - Ω b( 0 )ϕ(0)dx = (5.3) τ 0 Ω b( )∂ t ϕ + b( )u • ∇ x ϕ -ϕ (b ( ) -b( )) div x u dxdt - τ 0 Γ in b( B )u B • nϕ dS x dt for any ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )) , and any continuously differentiable b with b having a compact support in [0, ∞). Function u ∈ C weak ([0, T ], L 2β β+1 (Ω; R 3 )) , and the integral identity Ω u(τ, •) • ϕ(τ, •)dx - Ω 0 u 0 (•)ϕ(0, •)dx (5.4) = τ 0 Ω ( u • ∂ t ϕ + u ⊗ u : ∇ x ϕ + p δ ( )div x ϕ -S(∇ x u) : ∇ x ϕ) dxdt holds for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × Ω; R 3 ). The energy inequality Ω 1 2 |u -u ∞ | 2 + H δ ( ) (τ )dx + τ 0 Ω S(∇ x (u -u ∞ )) : ∇ x (u -u ∞ )dxdt (5.5) ≤ Ω 1 2 0 |u 0 -u ∞ | 2 + H( 0 ) dx - τ 0 Γ in H δ ( B )u B • ndS x dt -E -δB τ 0 Γ in |u B • n|dS x dt -(H -δA) τ 0 Γout |u B • n|dS x dt - τ 0 Ω p δ ( )divu ∞ dxdt - τ 0 Ω u • ∇ x u ∞ • (u -u ∞ )dxdt - τ 0 Ω S(∇ x u ∞ ) : ∇ x (u -u ∞ )dxdt holds for a.a. τ ∈ (0, T ). Numbers A, B are defined in (4.11) and H, E in (2.10) and (4.12), respectively. Vector field u ∞ is a given continuous extension of u B in class (2.11). The remaining part of Section 5 will be devoted to the proof of Lemma 5.1. The proof will be performed in the following subsections. We shall obtain estimate (5.11) by taking in the momentum equation (4.10) with ( ε , u ε ) test function ϕ = η(t)B ψ ε - 1 |Ω| Ω ψ ε dx , where η ∈ W 1,∞ 0 (0, T ) and ψ ∈ C 1 c (Ω) are convenient cut off functions. This testing provides an identity of the form T 0 Ω ηψp δ ( ε ) ε dxdt = T 0 Ω R( ε , u ε , η, ψ) dxdt, where the right hand side may be bounded from above via the uniform estimates (5.6-5.10) by virtue of Hölder, Sobolev and interpolation inequalities, and Lemma 5.3 by a positive number dependent of ∇ x ψ, but independent, in particular, of η, η (and, of course, independent of ε). In order to obtain this formula, one must perform several times integration by parts and employ conveniently continuity equation (4.9). We notice that the most disagreeable terms involving integration over the boundary vanish due to the fact that ϕ and ψ vanish at the boundary. This is nowadays a standard and well understood procedure. We refer the reader for more details to [11, Section 3.2], or to monographs [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF], [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]. 
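Since the pressure estimate just described hinges on the operator B (the Bogovskii operator whose properties are collected in Lemma 5.3), its standard properties on a bounded Lipschitz domain are recalled here as a reminder; the exact statement used in the proof is the one of Lemma 5.3.
$$
f \in L^{p}(\Omega), \quad \int_{\Omega} f \, dx = 0, \quad 1 < p < \infty
\ \Longrightarrow \
\mathcal{B}[f] \in W^{1,p}_{0}(\Omega;\mathbb{R}^{3}), \quad \operatorname{div} \mathcal{B}[f] = f, \quad \|\mathcal{B}[f]\|_{W^{1,p}(\Omega)} \le c(p,\Omega)\,\|f\|_{L^{p}(\Omega)}.
$$
In the test function above, the subtraction of the spatial mean is precisely what makes the argument of B admissible (zero mean), and these properties are what allow the right-hand side of the identity ∫∫ ηψ p_δ(ϱ_ε)ϱ_ε dxdt = ∫∫ R(ϱ_ε, u_ε, η, ψ) dxdt (ϱ_ε denoting the density) to be controlled by the uniform estimates (5.6)-(5.10).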
Seeing decomposition (2.2) of the pressure and seeing that p is bounded, the latter formula provides bound (5.11). Weak limits in continuity and momentum equations We shall first pass to the limit in the weak formulations of the continuity equation (4.9) and momentum equation (4.10). Estimates (5.6) and (5.8) yield convergence (5.1), and estimate (5.11) together with (2.2), (4.5) implies p δ ( ε ) p δ ( ) weakly in L β+1 β ((0, T )×K)) for any compact K ⊂ Ω. Here, and in the sequel, g( , u) denotes a weak limit in L 1 (Q T ) of the sequence g( ε , u ε ). By virtue of (5.10) (and (5.6)) the terms multiplied by ε will vanish in the limit. Sequence Z ε → 0 in L 4/3 (Q T ; R 9 ) by construction, see (4.80). Seeing that ε → in C weak ([0, T ]; L β (Ω)) (as one can show by means of the Arzela-Ascoli type argument from equation (4.9) and uniform bounds (5.6-5.10)), we deduce from the compact imbedding L β (Ω) → → W -1,2 (Ω) and from u ε u in L 2 (0, T ; W 1,2 (Ω)) the weak-* convergence ε u ε * u in L ∞ (0, T ; L 2β β+1 (Ω)) that may be consequently improved thanks to momentum equation (4.10) and estimates (5.6-5.10) to ε u ε → u in C weak (0, T ; L 2β β+1 (Ω)) again by the Arzela-Ascoli type argument. With this observation at hand, employing compact imbedding 2 (Ω; R 3 )) and consequently ε u ε ⊗u ε u⊗u weakly e.g. in L 1 (Q T ; R 9 ), at least for a chosen subsequence (not relabeled). Having the above, we get the following limits in equations (4.9-4.10): L 2β β+1 (Ω) → → W -1,2 (Ω) and u ε u in L 2 (0, T ; W 1,2 (Ω)) we infer that ε u ε → u in L 2 (0, T, W -1, Ω (τ, x)ϕ(τ, x)dx - Ω 0 (x)ϕ(0, x)dx = τ 0 Ω ∂ t ϕ + u • ∇ x ϕ dxdt (5.12) = - τ 0 Γ in B u B • nϕ dS x dt for any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )); Ω u(τ, •) • ϕ(τ, •)dx - Ω 0 u 0 (•)ϕ(0, •)dx (5.13) = τ 0 Ω u∂ t ϕ + u ⊗ u : ∇ x ϕ + p δ ( )div x ϕ -S(∇ x u) : ∇ x ϕ dxdt for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × Ω; R 3 ). It remains to show that p δ ( ) = p δ ( ). The rest of this section is devoted to the proof of this identity. This is equivalent to show that ε → a.e. in Q T . Effective viscous flux identity We denote by ∇ x ∆ -1 the pseudodifferential operator with Fourier symbol iξ |ξ| 2 and by R the Riesz transform with Fourier symbol ξ⊗ξ |ξ| 2 . Following Lions [START_REF] Lions | Mathematical topics in fluid dynamics[END_REF], we shall use in the approximating momentum equation (4.9) test function ϕ(t, x) = ψ(t)φ(x)∇ x ∆ -1 ( ε φ), ψ ∈ C 1 c (0, T ), φ ∈ C 1 c (Ω) and in the limiting momentum equation (5.13) test function ϕ(t, x) = ψ(t)φ(x)∇ x ∆ -1 ( φ), ψ ∈ C 1 c (0, T ), φ ∈ C 1 c (Ω) subtract both identities and perform the limit ε → 0. This is a laborious, but nowadays standard calculation (whose details can be found e.g. in [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF]Lemma 3.2], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF], [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF] or [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]Chapter 3]) leading to the identity T 0 Ω ψφ 2 p δ ( ) -(2µ + λ)divu dxdt - T 0 Ω ψφ 2 p δ ( ) -(2µ + λ) divu dxdt (5.14) = T 0 Ω ψφu • R • ( uφ) -u • R( φ) dxdt -lim ε→0 T 0 Ω ψφu ε • ε R • ( ε u ε φ) -ε u ε • R( ε φ) dxdt. This process involves several integrations by parts and exploits continuity equation in form (4.9) and (5.12). 
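The limit passage leading to identity (5.14) rests on a div-curl argument; for the reader's convenience, a standard form of the Div-Curl lemma is recalled here, the exact version employed being the one from the references cited below.
$$
\begin{aligned}
&U_n \rightharpoonup U \ \text{in} \ L^{p}(Q;\mathbb{R}^{d}), \quad V_n \rightharpoonup V \ \text{in} \ L^{q}(Q;\mathbb{R}^{d}), \quad \tfrac{1}{p}+\tfrac{1}{q}=\tfrac{1}{r}<1,\\
&(\operatorname{div} U_n) \ \text{precompact in} \ W^{-1,s}(Q), \quad (\operatorname{curl} V_n) \ \text{precompact in} \ W^{-1,s}(Q;\mathbb{R}^{d\times d}) \ \text{for some} \ s>1\\
&\Longrightarrow \quad U_n \cdot V_n \rightharpoonup U \cdot V \ \text{in} \ L^{r}(Q).
\end{aligned}
$$
Applied to suitable vector fields built from ϱ_ε, u_ε and the operators ∇_x Δ^{-1}, R introduced above, it is what shows that the right-hand side of (5.14) vanishes in the limit.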
We notice that the non homogenous data do not play any role due to the presence of compactly supported cut-off functions ψ and φ. The essential observation for getting (5.14) is the fact that the map → ϕ defined above is a linear and continuous from L p (Ω) to W 1,p (Ω), 1 < p < ∞ as a consequence of classical Hörmander-Michlin's multiplier theorem of harmonic analysis. The most non trivial moment are convex. Consequently, Λ ln -ln ≥ p( ) -p( ) (5.17) and Λ ln -ln ≥ p( ) -p( ). (5.18) Coming now back to (5.15), we obtain by using monotonicity of p, (2µ + λ) divu -divu ≤ p( ) -p( ) ≤ p( ) -p( ) + p( ) -p( ) . Employing (5.17) and (5.18) further yields divu -divu ≤ cΛ(1 + r) ln -ln , (5.19) provided supp p ⊂ [0, r]. This is the crucial inequality that plays in the case of non monotone pressure the same role as would be played by the inequality divu -divu ≥ 0 in the case of monotone pressure law. Strong convergence of density sequence Since verifies continuity equation (5.12) and since it belongs to to L 2 (Q T ) we may employ Lemma 3.1 in order to conclude that it verifies also renormalized continuity equation (2.13). In view of Remark 2.5, identity (2.13) is valid for any b belonging to class (2.14). In particular, for b( ) ≡ L( ) = log , it reads Ω L( (τ, x))ϕ(τ, x)dx - Ω L( 0 )ϕ(0, x)dx (5.20) = τ 0 Ω L( )∂ t ϕ + L( )u • ∇ x ϕ -ϕ div x u dxdt + τ 0 ∂Ω L( B )u B • nϕdS x dt. We continue with the renormalized version of the approximate equation of continuity (4.16). In particular, for b( ) = L( ) ≡ log( ), when passing to the weak formulation, we obtain Ω L( ε (τ, x))ϕ(τ, x) dx - Ω L( 0 (x))ϕ(0, x) dx (5.21) - τ 0 Ω L( ε )∂ t ϕ + L( ε )u ε • ∇ x ϕ dxdt + τ 0 Ω ε div x u ε ϕ dxdt + τ 0 Γ in ϕL( ε )u B • n dS x dt -ε τ 0 Γ in ϕL ( ε )∇ x ε • n dS x dt ≤ o(ε) for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )), ϕ ≥ 0, where the inequality sign appears due to the omission of the (non negative) term containing εL ( ε )|∇ x ε | 2 (recall that L is convex) and o(ε), lim ε→0 o(ε) = 0 corresponds to the terms of (4.16) containing ε as multiplier. Finally, we use the boundary conditions (4.10) obtaining Ω L( ε (τ, x))ϕ(τ, x) dx - Ω L( 0 (x))ϕ(0, x) dx (5.22) - τ 0 Ω L( ε )∂ t ϕ + L( ε )u ε • ∇ x ϕ dxdt + τ 0 Ω ε div x u ε ϕ dxdt + τ 0 Γ in L( ε )u B • n + L ( ε )( B -ε )u B • n dS x dt ≤ o(ε) for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )), ϕ ≥ 0. Subtracting (5.20) from (5.22) while taking ϕ independent of t ∈ [0, T ], we get Ω L( ε (τ, x))ϕ(x) dx - Ω L( (τ, x))ϕ(x) dx (5.23) - τ 0 Ω L( ε ) -L( ) u ε • ∇ x ϕ dxdt + τ 0 Ω ε div x u ε -div x u ϕ dxdt + τ 0 Γ in ϕ [L( B ) -L ( ε )( B -ε ) -L( ε )] |u B • n| dS x ≤ o(ε) for any τ ∈ [0, T ] and any ϕ ∈ C 1 c (Ω ∪ Γ in )), ϕ ≥ 0. As L is convex, we deduce, Ω L( ε (τ, x))ϕ(x) dx - Ω L( (τ, x))ϕ(x) dx - τ 0 Ω L( ε ) -L( ) u ε • ∇ x ϕ dxdt + τ 0 Ω ε div x u ε -div x u ϕ dxdt ≤ o(ε). Whence, letting ε → 0 yields Ω log -log )(τ, x)ϕ(x) dx + τ 0 Ω log -log u • ∇ x ϕ dxdt (5.24) ≤ cΛ(1 + r) τ 0 Ω log -log dxdt for any τ ∈ [0, T ] and any ϕ ∈ C 1 c (Ω ∪ Γ in ), ϕ ≥ 0, where we have used (5.19). Let now ũB ∈ W 1,∞ (Ω) be a Lipschitz extension of u B to Ω constructed in Lemma 2.2. Since Γ out is in class C 2 , function x → dist(x, Γ out ) belongs to C 2 (U - ε 0 (Γ out ) ∪ Γ out ) for some "'small"' ε 0 > 0, where U - ε 0 (Γ out ) ≡ {x = x 0 -zn(x 0 ) | x 0 ∈ Γ out , 0 < z < ε 0 } ∩ Ω, Energy inequality We shall pass to the limit in the energy inequality (4.11) with the goal to deduce from it energy inequality (5.5). 
To this end we first take identity (4.16) with b(z) = z 2 , ϕ = 1 in order to get identity (4.41) ( ε,uε) . This is justified by virtue of Remark 4.2. Due to this operation, all terms in (4.11) multiplied by Σ vanish. Once this is done, we integrate the resulting inequality over τ from 0 < τ 1 < τ 2 < T to get τ 2 τ 1 Ω 1 2 ε |u ε -u ∞ | 2 + H δ ( ε ) (τ ) dxdt + τ 2 τ 1 τ 0 Ω S(∇ x (u ε -u ∞ )) : ∇ x (u ε -u ∞ ) dxdt (5.32) ≤ τ 2 τ 1 Ω 1 2 0 |u 0 -u ∞ | 2 + H( 0 ) dxdτ - τ 2 τ 1 τ 0 Γ in H δ ( B )u B • ndS x dtdτ -E -δB τ 0 Γ in |u B • n|dS x dt -(H -δA) τ 0 Γout |u B • n|dS x dt - τ 2 τ 1 τ 0 Ω p δ ( ε )divu ∞ dxdt - τ 2 τ 1 τ 0 Ω ε u ε • ∇ x u ∞ • (u ε -u ∞ ) dxdtdτ - τ 2 τ 1 τ 0 Ω S(∇ x u ∞ ) : ∇ x (u ε -u ∞ ) -ε∇ x ε • ∇ x (u ε -u ∞ ) • u ∞ dxdtdτ, where, at the left hand side we have omitted the non negative terms multiplied by ε and the non negative terms involving integrals over the Γ in and Γ out portions of the boundary. We can now use the convergences established in Section 5.2 and in (5.31) in combination with the lower weak semi-continuity of convex functionals at the left hand side (see e.g. [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]Theorem 10.20]) -to this end we write H δ = H δ + H and realize that H δ is convex and H is bounded on (0, ∞)-to get τ 2 τ 1 Ω 1 2 |u -u ∞ | 2 + H δ ( ) (τ ) dxdt + τ 2 τ 1 τ 0 Ω S(∇ x (u -u ∞ )) : ∇ x (u -u ∞ ) dxdt ≤ τ 2 τ 1 Ω 1 2 0 |u 0 -u ∞ | 2 + H( 0 ) dxdτ - τ 2 τ 1 τ 0 Γ in H δ ( B )u B • ndS x dtdτ -E -δB τ 0 Γ in |u B • n|dS x dt -(H -δA) τ 0 Γout |u B • n|dS x dt -lim inf ε→0 τ 2 τ 1 τ 0 Ω p δ ( ε )divu ∞ dxdt - τ 2 τ 1 τ 0 Ω u • ∇ x u ∞ • (u -u ∞ ) dxdtdτ - τ 2 τ 1 τ 0 Ω S(∇ x u ∞ ) : ∇ x (u -u ∞ ) dxdtdτ. We observe that due to (2.11), τ 0 Û- h (∂Ω) p( ε ) + δ( β ε + ε ) divu ∞ dxdt ≥ 0 (if h > 0 is sufficiently small) and lim inf ε→0 τ 2 τ 1 τ 0 Ω\ Û- h (∂Ω) p( ε ) + δ( β ε + ε ) divu ∞ dtdτ = τ 2 τ 1 τ 0 Ω\ Û- h (∂Ω) p( ) + δ( β + ) divu ∞ dxdtdτ, while τ 0 Ω p( ε )divu ∞ dxdt → τ 0 Ω p( )divu ∞ dxdt by virtue of (5.31). Using these facts in (5.32), letting h → 0 and τ 1 → τ 2 while applying the Theorem on Lebesgue points yields the desired inequality (5.5). Lemma 5.1 is thus proved. 6 Limit δ → 0. Proof of Theorem 2.4 Our ultimate goal is to perform limit δ → 0. We will prove the following: Lemma 6.1. Let ( δ , u δ ) be a sequence of functions constructed in Lemma 5.1. Then there is a subsequence (not relabeled) such that δ weakly-* in L ∞ (0, T ; L γ (Ω)), (6.1) u δ u in L 2 (0, T ; W 1,2 (Ω; R 3 )), where the couple ( , u) is a weak solution of problem (1.1-1.6). The remaining part of this section is devoted to the proof of Lemma 6.1, which is nothing but Theorem 2.4. Uniform estimates We shall start with estimates for weak solutions ( δ , u δ ) constructed in Lemma 5.1. They are collected in the following lemma. Lemma 6.2. Let ( δ , u δ ) be a couple constructed in Lemma 5.1. Then, the following estimates hold: u δ L 2 (I,W 1,2 (Ω;R 3 )) ≤ L(data), (6.2) δ L ∞ (I,L γ (Ω)) ≤ L(data), (6.3) δ u 2 δ L ∞ (I,L 1 (Ω)) ≤ L(data), (6.4 ) δ 1/β δ L ∞ (I,L β (Ω)) ≤ L(data), (6.5) There is α(γ) > 0 such that δ L γ+α ((0,T )×K) ≤ L(data, K), with any compact K ⊂ Ω. (6.6) In the above, "data" stands for Ω 1 2 0 u 2 0 + H( 0 ) dx, u ∞ W 1,∞ (Ω) , B , B Proof of Lemma 5.2 Similarly as before, continuity equation (5.12) ( δ ,u δ ) yields L ∞ (0, T ; L 1 (Ω)) bound for the sequence δ . Now, uniform estimates (6.2-6.5) follow directly from energy inequality (5. 
ϕ = η(t)B ψ α δ - 1 |Ω| Ω ψ α δ dx , where α > 0 is sufficiently small, and where η ∈ W 1,∞ 0 (0, T ) and ψ ∈ C 1 c (Ω) are convenient cut off functions. After several integrations by parts, using renormalized equation (5.3) (with ( δ , u δ ) and b( ) = α , cf. Remark 2.3), we arrive finally at T 0 Ω ηψp δ ( δ ) α δ dxdt = T 0 Ω R( δ , u δ , η, ψ) dxdt, where the right hand side may be bounded from above due to estimates (6.2-6.5), in the same way as in the Section 5.1. We refer the reader for more details of this standard but laborious procedure again to [11, Section 4.1], or to monographs [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF], [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]. Weak limits in the field equations Estimates (6.2-6.3) yield immediately weak convergence announced in (6.1) and estimate (6.6) together with (2.2) imply p( δ ) p( ) weakly in L γ+α γ ((0, T )×K)) for any compact K ⊂ Ω. The terms multiplied by δ in the momentum equation will vanish due to estimate (6.5). Repeating carefully the (standard) reasoning of Section 5.2, we deduce that ∈ C weak ([0, T ]; L γ (Ω)), u ∈ C weak ([0, T ]; L 2γ γ+1 (Ω; R 3 )), and the the limit in equations (5.12) and (4.10) reads Ω (τ, •)ϕ(τ, •)dx - Ω 0 (•)ϕ(0, •)dx = τ 0 Ω ∂ t ϕ + u • ∇ x ϕ dxdt (6.7) = - τ 0 Γ in B u B • nϕ dS x dt for any τ ∈ [0, T ] and ϕ ∈ C 1 c ([0, T ] × (Ω ∪ Γ in )); Ω u(τ, •) • ϕ(τ, •)dx - Ω 0 u 0 (•)ϕ(0, •)dx (6.8) = τ 0 Ω u∂ t ϕ + u ⊗ u : ∇ x ϕ + p( )div x ϕ dxdt - τ 0 Ω S(∇ x u) : ∇ x ϕdxdt for any τ ∈ [0, T ] and any ϕ ∈ C 1 c ([0, T ] × Ω; R 3 ). We can perform the weak limit in the renormalized continuity equation (5.3) for ( δ , u δ ). We obtain, by the same token, Ω (b( )u)(τ )ϕ(τ )dx - Ω b( 0 )u 0 ϕ(0)dx = (6.9) τ 0 Ω b( )u • ∇ x ϕ -ϕ(b ( ) -b( ))div x u dxdt - τ 0 Γ in b( B )u B • nϕ dS x dt for any ϕ ∈ C 1 ([0, T ] × (Ω ∪ Γ in )), ( δ , u δ ) (in L 1 (Q T ).)) It remains to show that p( ) = p( ). The rest of this section is devoted to the proof of this identity. This is equivalent to show that δ → a.e. in Q T . Effective viscous flux identity We now perform similar reasoning as in Section 5.3. Since however functions and log do not possess enough summability, we shall replace them by convenient truncations T k ( ) and L k ( ), where T k ( ) is defined in (3.11) and L k ( ) = 1 T k (z) z dz. (6.10) We shall repeat the process described in Section 5.3 with T k ( δ ) resp. T k ( ) instead of δ , : Following [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF], we shall use in the approximating momentum equation (5.4) (where ( , u) = ( δ , u δ )) test function ϕ(t, x) = ψ(t)φ(x)∇ x ∆ -1 (T k ( δ )φ), ψ ∈ C 1 c (0, T ), φ ∈ C 1 c (Ω) and in the limiting momentum equation (6.8) test function ϕ(t, x) = ψ(t)φ(x)∇ x ∆ -1 (T k ( )φ), ψ ∈ C 1 c (0, T ), φ ∈ C 1 c (Ω) subtract both identities and perform the limit δ → 0. This leads to equation Finally, we employ formulas (5.17), (5.18), similarly as when deriving (5.19), in order to get (2µ + λ) τ 0 Ω T k ( )divu -T k ( )divu dxdt ≤ cΛ(1 + r) τ 0 Ω ln -ln dxdt. (6.13) Oscillations defect measure The main achievement of the present section is the following lemma. Lemma 6.3. Let ( δ , u δ ) be a sequence constructed in Lemma 5.1. Then osc γ+1 [ δ ](Q T ) < ∞. 
( 6 .14) The quantity osc γ+1 [ δ ](Q T ) is defined in (3.10) It is well known that Lemma 6.3 follows from the effective viscous flux identity, see [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF]Lemma 4.3], [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF], [START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF] for the detailed proof. To see this fact, we observe that there is non-decreasing p I i δ , (6.16) where I 1 δ = 2µ + λ T 0 Ω T k ( δ ) -T k ( ) div x u δ dxdt, I 2 δ = 2µ + λ T 0 Ω T k ( ) -T k ( ) div x u δ dxdt, I 3 δ = T 0 Ω p b ( )T k ( ) -p b ( ) T k ( ) dxdt We first observe that the second integral at the left hand side is non negative (indeed, p m is nondecreasing and we can use Theorem 10.19 in [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]). Second, we employ the Hölder inequality and interpolation together with the lower weak semi-continuity of norms and bounds (6.2 -6.3) to estimate integrals I 1 δ , I 2 δ in order to get |I 1 δ + I 2 δ | ≤ c osc γ+1 [ δ ](Q T ) 1 2γ (6.17 Inserting the last inequality into (6.16) yields (in combination with estimates of integrals I 1 δ -I 3 δ ) the statement of Lemma 6.3. Strong convergence of density Since couple ( , u) verifies continuity equation (2.6), it verifies also renormalized continuity equation (2.13) in view of Lemma 3.2. In view of Remark 2.5 we can take in the latter equation b = L k . We get, in particular, where the first term converges to 0 as k → ∞ by virtue of (6.14) and interpolation estimate (indeed, T k ( ) -T k ( ) L 1 (Q T ) → 0 as k → ∞ by virtue of definition of T k and lower weak semi-continuity of norms), while the second term is bounded from above by the expression at the right hand side of (6.13). Remark 2 . 3 . 23 and any continuously differentiable b with b having a compact support in [0, ∞). Weak solution to problem (1.1-1.6) satisfying in addition renormalized continuity equation (2.13) is called renormalized weak solution. It can be shown easily by using the Lebesgue dominated convergence theorem that the the family of test functions b in the previous definition can be extended to b . 15 )Remark 2 . 5 . 1 . 15251 Then for any Lipschitz extension u ∞ of u B verifying (2.8) problem (1.1-1.6) possesses at least one bounded energy renormalized weak solution ( , u). Theorem 2.4 holds regardless the (d -1)-Hausdorff measure of Γ in or Γ out is zero. If the Hausdorff measure |Γ in | d-1 = 0 then all conditions on B become irrelevant. The standard theory developed in [11, Theorem 1. where we have used algebraic relation zT k (z) ≤ 2T k (z) and the Minkowski inequality. Equation (3.16) implies (2.13) with any b ∈ C 1 [0, ∞), b with compact support by virtue of the Lebesgue dominated convergence theorem. Lemma 3.2 is proved. Lemma 4 . 1 . 41 Let Ω be a domain of class C 2 . Let ( B , u B ) verify assumptions (2.1) and let initial and boundary data verify ) in particular, e -Kτ ≤ (t, x) ≤ e Kτ for all τ ∈ [0, T ] provided u verifies condition (4.21). (Here and in the sequel |A| is a Lebesgue measure of set A ⊂ R 3 while |A| 2 denotes its 2 -D Hausdorf measure.) 21). Proof of Lemma 4.3 Proof of statement 1. 
The parabolic boundary value problem (with elliptic differential operator A = -ε∆ + u • ∇ x + divu on Ω and boundary operator B = -εn • ∇ x + v on the boundary ∂Ω) satisfies all assumptions of the maximal regularity theorem by Denk, Hieber, Prüss [4, Theorem 2.1] with p = 2 (the coefficients are sufficiently regular in order to verify conditions [4, conditions (SD), (SB)], principal part of operator A is (normally) elliptic, see [4, condition (E)], while the principal parts of operators A and B verify the Shapiro-Lopatinskii conditions [4, condition (LS)], and the data 0 and g verify conditions [4, condition (D)], and eventually Bergh and Löfström [3, Theorem 6.4.4] for the identification of the Sobolev space W 1,2 (Ω) with the Besov space B 1 2,2 (Ω)). Under these circumstances, Theorem 2.1 in [4] yields the statement in item 1. of Lemma 4.3, in particular (4.19 numbers A, B are define in (4.11). Now we take in (4.52) Σ = 2 sup >0 p ( ) and use estimates (4.44-4.45) when dealing with Ω H δ ( )(τ ) dx, (4.47) when dealing with τ 0 Ω S(∇ x v) : ∇ x v dxdt , (4.48-4.49) when treating ε τ 0 Ω H δ ( )|∇ x | 2 dxdt, and (4.50-4.51) to treat the right hand side (while taking first α > 0 sufficiently small and then ε > 0 also sufficiently small in order to let the terms αε ∇ x L 2 (Qτ ) , (α + ε c α ) ∇ x v 2 L 2 (Qτ ) and αδ τ 0 Γ in β |u • n|dS x "absorb" in the left hand side) with the goal to obtain with help of the Gronwall inequality, sup |u B • n|dS x dt ≤ L(data, T, δ). (4.58) ε v 4 L 4 (0,T ;W 1,4 (Ω)) ≤ L(data, T, δ). (4.59) At this immediate stage, we shall use the first two estimates. Employing (4.25), namely v W 1,∞ (Ω) ≤ c v W 1,2 (Ω) , v ∈ X, and (4.22), we get .69) Now we return to (4.18) -with ( N , u N )-and consider it as parabolic problem with operatot ∂ t -ε∆ in (0, T ) × Ω with right hand side -div( N u N ), and boundary operator -εn • ∇ x + v in (0, T ) × ∂Ω with right hand side B v. The maximal parabolic regularity theory, as e.g. [4, Theorem 2.1], yields that with any b satisfying conditions (2.14) p=γ . (Here again b( , u) denotes weak limit of the sequence b ψφ 2 2 p( )T k ( ) -(2µ + λ)T k ( )divu dxdt -T 0 Ω ψφ 2 p δ ( )T k ( ) -(2µ + λ)T k ( )divu dxdt = T 0 Ω ψφu • T k ( )R • ( uφ) -u • R(T k ( )φ) dxdt δ • T k ( δ )R • ( δ u δ φ)δ u δ • R(T k ( δ )φ) dxdt.Renormalized continuity equation (5.3) ( δ ,u δ ) and its weak limit (6.9) with b = T k play in this calculation an important role. Due to the compact support of φ, the non homogeneity of the boundary data is irrelevant. The right hand side of the last identity is zero by div-curl lemma. Consequently, we get the effective viscous flux identityp( )T k ( ) -p( ) T k ( ) = (2µ + λ) T k ( )divu -T k ( )divu . (6.11)The details of this calculus and reasoning can be found in [11, Lemma 3.2],[START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF],[START_REF] Novotný | Introduction to the mathematical theory of compressible flow[END_REF] or[START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF] Chapter 3]. If p would be non decreasing we would have (2µ + λ) T k ( )divu -T k ( )divu ≥ 0 (according to e.g.[START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF] Theorem 10.19]) and we could stop this part of argumentation at this place. In the general case, we must continue. Writing p = pp and recalling that p is non decreasing, we deduce from identity (6.11),(2µ + λ) T k ( )divu -T k ( )divu ≤ p( )T k ( ) -p( ) T k ( ). 
(6.12) Next, we realize (by employing essentially the lower-weak semi-continuity of norms) that lim sup k→∞ T k ( ) -L 1 (Q T ) = 0, lim k→∞ p( )T k ( ) -p( ) L 1 (Q T ) T k ( ) -p( ) T k ( ) dxdt ≤ τ 0 Ω p( ) -p( ) dxdt lim sup δ→0 3 i=1 3 m ∈ C[0, ∞) and bounded p b ∈ C[0, ∞) such that p( ) = a 2γ γ + p m ( ) -p b (6.15) Indeed, one may take p m = pa 2γ γ + bη( ) min{r, }, p b ( ) = p( ) -bη( ) min{r, }, where r solves equation as γ-1 -2b = 0 and η ∈ C 1 c [0, ∞), η(s) = 1 for s ∈ [0, R), 0 ≤ -η (s) ≤ 1 R with R sufficiently large. With this decomposition, effective viscous flux identity (6.11) can be rewritten as follows a 2γ T 0 Ω γ T k ( ) -γ T k ( ) dxdt + T 0 Ω p m ( )T k ( ) -p m ( ) T k ( ) dxdt = T ) with c > 0 independent of k. Finally, since p b is continuous with compact support, integral |I 3 δ | is bounded by an universal constant c = c(p b ) > 0.Next we write, as in[START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF] T0 Ω γ T k ( ) -γ T k ( γ T k ( δ ) -T k ( ) dxdt + T 0 Ω γγ T k ( ) -T k ( ) dxdt k ( δ ) -T k ( )γ+1 dxdt, where we have employed convexity of → γ and concavity of → T k ( ) on [0, ∞), and algebraic inequality |a -b| γ ≤ |a γ -b γ | and |a -b| ≥ |T k (a) -T k (b)|, (a, b) ∈ [0, ∞) 2 . ΩLLL L k ( (τ, x))ϕ(x)dx -Ω L k ( 0 )ϕ(x)dx (k ( )u • ∇ x ϕ -ϕT k ( )div x u dxdt + τ 0 ∂Ω L k ( B )u B • nϕdS x dt, with any ϕ ∈ C 1 c (Ω ∪ Γ in ) and τ ∈ [0, T ].On the other hand, equation (6.9) with b = L k reads,Ω L k ( )(τ, x)ϕ(x)dx -Ω L k ( 0 )ϕ(x)dx (6.19) = τ 0 Ω L k ( )u • ∇ x ϕ -ϕT k ( )div x u dxdt + τ 0 ∂Ω L k ( B )u B • nϕdS x dt, where ϕ ∈ C 1 c (Ω ∪ Γ in ) and τ ∈ [0, T ]. Subtracting (6.[START_REF] Piasecki | Strong solutions to the Navier-Stokes-Fourier system with slip-inflow boundary conditions[END_REF]) and (6.18) yieldsΩ L k ( ) -L k ( ) (τ, x)ϕ(x)dx -τ 0 Ω L k ( ) -L k ( ) (u -ũB ) • ∇ x ϕdxdt (k ( ) -L k ( ) ũB • ∇ x ϕ = τ 0 Ω ϕ T k ( )div x u -T k ( )div x u dxdtdxdt with any ϕ ∈ C 1 c (Ω ∪ Γ in ) and τ ∈ [0, T ],where ũB is defined in Lemma 2.2. Now we consider the family of test functions ϕ δ defined in(5.27). By the same reasoning as in (5.26-5.29) we deduceT 0 Ω L k ( ) -L k ( ) (u -ũB ) • ∇ x ϕ δ dxdt → 0 as δ → 0k ( ) -L k ( ) ũB • ∇ x ϕ δ dxdt ≥ 0δ T k ( )div x u -T k ( )div x u dxdt (δ T k ( ) -T k ( ) div x u dxdt + τ 0 Ω ϕ δ T k ( )div x u -T k ( )div x u dxdt, Theorem 2.4. Let Ω ⊂ R d , d = 2, 3 be a bounded domain of class C 2 . Let the boundary data u B , B satisfy (2.1), where min B ≡ B > 0. Assume that the pressure satisfies hypotheses (2.2) with γ > d/2 and the initial data are of finite energy ). (2.14) Our main result is the following theorem. provided both u i verify (4.21).(4.36) Coming back to T, we find with help of (4.27), (4.22), (4.35), Tv(t) X ≤ δ e Kt P v 0 X + d 2 (K, , T )t provided u ∞ + v verifies (4.21), and with help of (4.28), (4.22), (4.23) on one hand and (4.27), (3.5) on the other hand .42) 2. We shall now investigate the lower bound of E δ ( B | ). First, due to convexity of H, we have E( B | ) ≥ 0. 
Second, we verify by direct calculation that Galerkin approximation to solutions of approximate problem (4.1-4.4) Recalling structural assumptions (2.2) for p, definitions (2.3) of H and (4.6) of p δ , H δ (notably the coercivity relations (4.46), (4.48), we deduce from (4.53-4.59) the following bounds for the sequence [START_REF] Denk | Fourier multipliers and problems of elliptic and parabolic type[END_REF], structural assumptions on the pressure p, and definitions of p δ and H δ , see (2.2), (4.5), (4.6), and energy inequality (5.5) by the similar (in fact more simple) reasoning as that one performed in Sections 4.3.3, 4.3.4. The last estimate, as in the previous section, is based on the properties of the Bogovskii operator introduced in Lemma 5.3. We obtain it by testing the momentum equation (5.4) with ( δ , u δ ) with test function We say that f ∈ C weak ([0, T ], L p (Ω)) iff Ω f ϕ dx ∈ C[0, T ] for all ϕ ∈ L p (Ω) * The work of T.Ch. has been supported by the NRF grant 2015 R1A5A1009350. † The work of B.J.J. has been supported by NRF grant 2016R1D1A1B03934133 ‡ The work of A.N. has been supported by the NRF grant 2015 R1A5A1009350. 1 Uniform bounds independent on We have to start by deriving uniform bounds independent of ε. We collect them in the following lemma: Lemma 5.2. Let ( ε , u ε ) (and associated Z ε ) be a sequence of (genegalized) solutions of the approximate problem (4.1-4.2) constructed in Lemma 4.1. Then under assumptions of Lemma 4.1 there holds: u L 2 (I,W 1,2 (Ω)) ≤ L(data, δ), (5.6) ε 1/4 u L 4 (I,W 1,4 (Ω)) ≤ L(data, δ), (5.7) L β+1 ((0,T )×K) ≤ L(data, δ, K, δ), with any compacts K ⊂ Ω. (5.11) Here L is a positive constant, which is, in particular, independent of . Proof of Lemma 5.2 Continuity equation (4.9) provides bound With this bound at hand, uniform estimates ( ) for all g with the above properties. In the above L in this process is to show that the right hand side of identity (5.14) is 0. To see it (we repeat the reasoning [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF] for the sake of completeness) we first realize that the C weak ([0, T ], L β (Ω))-convergence of Since R is a continuous operator from L p (R 3 ) to L p (R 3 ), 1 < p < ∞, we also have At this stage we report a convenient version of the celebrated Div-Curl lemma, see [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF]Section 6] or [START_REF] Feireisl | Singular limits in thermodynamics of viscous fluids[END_REF]Theorem 10.27]. It reads where Applying this lemma to the above situation, we get In view of compact imbedding L 2β β+3 (Ω) → → W -1,2 (Ω), we have also We easily verify that the sequence Recalling the L 2 (0, T ; W 1,2 (Ω))-weak convergence of u ε we get the desired result. Identity (5.14) now reads p( ) -p( ) = (2µ + λ) divu -divu . (5.15) If the pressure were non decreasing (i.e. if p would be identically zero), we would have by Miniti's trick, p( ) -p( ) ≥ 0 a.e. in Q T , see [9, Theorem 10.19] and consequently divu -divu ≥ 0 a.e. in Q T . We however consider a non-monotone pressure and this simple conclusion is not true anymore. We have to further extend this argument. Following Feireisl [START_REF] Feireisl | Dynamics of viscous compressible fluids[END_REF], we realize that there is Λ > 0 (dependent on p) such that cf. Foote [START_REF] Foote | Regularity of the distance function[END_REF]. 
Moreover, Since Ω is Lipschitz, we have also that where Ûε (Γ out ) ≡ {x ∈ Ω | dist(x, Γ out ) < ε} and A∆B denotes the symmetric difference of sets A and B. Consider family of Lipschitz test functions in Ω, By Lebesgue theorem and Hardy's inequality (we notice that while, in accordance with (5.25), ( ). This will be done in the next section. Coming with all this information back to (6.20) with ϕ = ϕ δ , and performing first limit δ → 0 and then limit k → ∞ we conclude that This means a.e. in Q T convergence of δ to and consequently the identity p( ) = p( ). The passage δ → 0 from the energy inequality (5.5) (with ( δ , u δ )) to the final energy inequality (2.9) will be done in the same way as in Section 5.5. We have so far performed the whole proof with initial data satisfying (4.13). We notice that this is without loss of generality. Indeed finite energy initial data (2.15) can be easily approximated on the level δ by initial data (4.13) in the way suggested in [START_REF] Feireisl | On the existence of globally defined weak solutions to the Navier-Stokes equations of compressible isentropic fluids[END_REF]Section 4]. This concludes the proof of Theorem 2.4.
https://polytechnique.hal.science/hal-01756743/file/InflamJTB.pdf
Ouassim Bara email: obara@utk.edu Michel Fliess email: michel.fliess@polytechnique.edu Cédric Join email: cedric.join@univ-lorraine.fr Judy Day email: judyday@utk.edu Seddik M Djouadi email: mdjouadi@utk.edu
Toward a model-free feedback control synthesis for treating acute inflammation: A mathematical perspective
Keywords: Immune systems, Inflammatory response, Model-free control, Intelligent controllers
Introduction
Inflammation is a key biomedical subject (see, e.g., [START_REF] Nathan | Points of control in inflammation[END_REF][START_REF] Vodovotz | Solving immunology?[END_REF]) with fascinating connections to diseases like cancer (see, e.g., [START_REF] Balkwill | Inflammation and cancer: back to Virchow?[END_REF]), AIDS (see, e.g., [START_REF] Deeks | HIV infection, inflammation, immunosenescence, and aging[END_REF]) and psychiatry (see, e.g., [START_REF] Miller | The role of inflammation in depression: from evolutionary imperative to modern treatment target[END_REF]). Mathematical and computational models investigating these biological systems have provided a greater understanding of the dynamics and key mechanisms of these processes (see, e.g., [START_REF]Complex Systems and Computational Biology Approaches to Acute Inflammation[END_REF]). The very content of this particular study leads us to cite mathematical models in which differential equations play a prominent role (see, e.g., [START_REF] Arazi | Modeling immune complex-mediated autoimmune inflammation[END_REF][START_REF] Arciero | Using a mathematical model to analyze the role of probiotics and inflammation in necrotizing enterocolitis[END_REF][START_REF] Asachenkov | Disease Dynamics[END_REF][START_REF] Barber | A three-dimensional mathematical and computational model of necrotizing enterocolitis[END_REF][START_REF] Day | Modeling the immune rheostat of macrophages in the lung in response to infection[END_REF][START_REF] Day | Mathematical modeling of early cellular innate and adaptive immune responses to ischemia/reperfusion injury and solid organ allotransplantation[END_REF][START_REF] Day | A reduced mathematical model of the acute inflammatory response II. Capturing scenarios of repeated endotoxin administration[END_REF][START_REF] Russo | A mathematical model of inflammation during ischemic stroke[END_REF][START_REF] Dunster | The resolution of inflammation: A mathematical model of neutrophil and macrophage interactions[END_REF][START_REF] Eftimie | Mathematical models for immunology: Current state of the art and future research directions[END_REF][START_REF] Ho | A model of neutrophil dynamics in response to inflammatory and cancer chemotherapy challenges[END_REF][START_REF] Kumar | The dynamics of acute inflammation[END_REF][START_REF] Mathew | Global sensitivity analysis of a mathematical model of acute inflammation identifies nonlinear dependence of cumulative tissue damage on host interleukin-6 responses[END_REF][START_REF] Perelson | Immunology for physicists[END_REF][START_REF] Prodanov | A model of space-fractional-order diffusion in the glial scar[END_REF][START_REF] Reynolds | Mathematical Models of Acute Inflammation and Full Lung Model of Gas exchange under inflammatory stress[END_REF][START_REF] Reynolds | A mathematical model of pulmonary gas exchange under inflammatory stress[END_REF][START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I.
Derivation of model and analysis of antiinflammation[END_REF][START_REF] Song | Ensemble models of neutrophil trafficking in severe sepsis[END_REF][START_REF] Torres | Mathematical modelling of posthemorrhage inflammation in mice: Studies using a novel, computer-controlled, closed-loop hemorrhage apparatus[END_REF][START_REF] Yiu | Dynamics of a cytokine storm[END_REF]). The usefulness of those equations for simulation, prediction purposes, and, more generally, for understanding the intimate mechanisms is indisputable. In addition, some models have also been used in order to provide a real-time feedback control synthesis (see, e.g., [START_REF] Åström | Feedback Systems: An Introduction for Scientists and Engineers[END_REF] for an excellent introduction to this important engineering topic) for treating acute inflammation due to severe infection. Insightful results were obtained via two main model-based approaches: optimal control [START_REF] Bara | Optimal control of an inflammatory immune response model[END_REF][START_REF] Bara | Immune Therapy using optimal control with L 1 type objective[END_REF][START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF][START_REF] Kirschner | Optimal control of the chemotherapy of HIV[END_REF][START_REF] Stengel | Stochastic optimal therapy for enhanced immune response[END_REF][START_REF] Stengel | Optimal enhancement of immune response[END_REF][START_REF] Stengel | Optimal control of innate immune response[END_REF][START_REF] Tan | Optimal control strategy for abnormal innate immune response[END_REF], model predictive control [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF][START_REF] Hogg | Acute inflammation treatment via particle filter state estimation and MPC[END_REF][START_REF] Radosavljevic | A data-driven acute inflammation therapy[END_REF][START_REF] Zitelli | Combining robust state estimation with nonlinear model predictive control to regulate the acute inflammatory response to pathogen[END_REF]. Our work in [START_REF] Bara | Optimal control of an inflammatory immune response model[END_REF][START_REF] Bara | Immune Therapy using optimal control with L 1 type objective[END_REF][START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF][START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF][START_REF] Zitelli | Combining robust state estimation with nonlinear model predictive control to regulate the acute inflammatory response to pathogen[END_REF] made use of the low dimensional system of ordinary differential equations (ODE) derived in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF] (see also [START_REF] Day | A reduced mathematical model of the acute inflammatory response II. Capturing scenarios of repeated endotoxin administration[END_REF]). This four variable model possesses the following characteristics: -The model is based on biological first principles, the non-specific mechanisms of the innate immune response to a generic gram-negative bacterial pathogen. -A variable representing anti-inflammatory mediators e.g. 
Interleukin-10, Transforming Growth Factor-β ) is included and plays an important role in mitigating the negative effects of inflammation to avoid excessive tissue damage. -Though a qualitative model of acute inflammation, it reproduces several clinically relevant outcomes: a healthy resolution and two death outcomes. The calibration of a system of differential equations can be quite difficult since the identification of various rate parameters requires specific data in sufficient quantities, which may not be feasible. Additionally, there is much heterogeneity to account for between patient responses such as the initiating circumstances, patient co-morbidities and personal characteristics, like genetics, age, gender, . . . . In spite of promising preliminary results in [START_REF] Bara | Nonlinear state estimation for complex immune responses[END_REF][START_REF] Bara | Parameter estimation for nonlinear immune response model using EM[END_REF][START_REF] Zitelli | Combining robust state estimation with nonlinear model predictive control to regulate the acute inflammatory response to pathogen[END_REF], state estimation and parameter identification of highly nonlinear models may still require more data than can be reasonably collected. These roadblocks hamper the use of model-based control strategies in clinical practice in spite of recent mathematical advances. Here, another route, i.e, model-free control (MFC) and the corresponding "intelligent" feedback controllers [START_REF] Fliess | Model-free control[END_REF], are therefore explored. 1 We briefly introduce the method before discussing its application to the scenario of controlling the inflammatory response to pathogen. We begin by replacing the poorly known global description by the ultra-local model given by: ẏ = F + αu, (1) where the control and output variables are u and y, respectively; the derivation order of y is 1 like in most concrete situations; α ∈ R is chosen by the practitioner such that αu and ẏ are of the same magnitude; -F is estimated via the measurements of u and y; -F subsumes not only the unknown system structure but also any perturbation. Remark 1 The following comparison with computer graphics is borrowed from [START_REF] Fliess | Model-free control[END_REF]. To produce an image of a complex curve in space, the equations defining that curve are not actually used but, instead an approximation of the curve is made with short straight line segments. Equation ( 1), which might be viewed as an analogue of such a segment, should hence not be considered as a global description but instead as a rough linear approximation. Remark 2 The estimation of the fundamental quantity F in Equation ( 1) via the control and output variables u and y will be detailed in Section 2.2. 
It connects our approach to the data-driven viewpoint which has been adopted in control engineering (see, e.g., [START_REF] Formentin | A comparison of model-based and data-driven controller tuning[END_REF][START_REF] Hou | From model-based control to data-driven control: Survey, classification and perspective[END_REF][START_REF] Roman | Multi-input multi-output system experimental validation of model-free control and virtual reference feedback tuning techniques[END_REF][START_REF] Roman | Data-driven model-free adaptive control tuned by virtual reference feedback tuning[END_REF]) and in studies about inflammation (see, e.g., [START_REF] Azhar | Integrating data-driven and mechanistic models of the inflammatory response in sepsis and trauma[END_REF][START_REF] Brause | Data driven automatic model selection and parameter adaptation -a case study for septic shock[END_REF][START_REF] Radosavljevic | A data-driven acute inflammation therapy[END_REF][START_REF] Vodovotz | Computational modelling of the inflammatory response in trauma, sepsis and wound healing: implications for modelling resilience[END_REF]). Ideally, data associated with the time courses of the inflammatory response variables would be generated by measurements from real patients. In our case, it would be the pro-inflammatory and anti-inflammatory variables of the model (patient) which we would want to track; and therefore, define these as the reference trajectories (available data) for the model-free setup. Once the quantity F est is obtained, the loop is closed by an intelligent proportional controller, or iP: u = - F est -ẏ * + K P e α , (2) where -F est is an estimate of F; y is the reference trajectory; e = yy is the tracking error; and -K P is an usual tuning gain. With a "good" estimate F est of F, i.e., F -F est 0, Equations ( 1)-(2) yield ė + K P e = F -F est 0 1 This new viewpoint in control engineering has been successfully illustrated in many concrete casestudies (see, e.g., the references in [START_REF] Fliess | Model-free control[END_REF], and [START_REF] Abouaïssa | Energy saving for building heating via a simple and efficient model-free control design: First steps with computer simulations[END_REF][START_REF] Abouaïssa | On ramp metering: Towards a better understanding of ALINEA via model-free control[END_REF][START_REF] Agee | Tip trajectory control of a flexible-link manipulator using an intelligent proportional integral (iPI) controller[END_REF][START_REF] Agee | Intelligent proportional-integral (iPI) control of a single link flexible joint manipulator[END_REF][START_REF] Bara | Model-free load control for high penetration of solar photovoltaic generation[END_REF][START_REF] Chand | Non-linear model-free control of flapping wing flying robot using iPID[END_REF][START_REF] Join | A simple and efficient feedback control strategy for wastewater denitrification[END_REF][START_REF] Lafont | A model-free control strategy for an experimental greenhouse with an application to fault accommodation[END_REF][START_REF] Li | Direct power control of DFIG wind turbine systems based on an intelligent proportional-integral sliding mode control[END_REF][START_REF] Madoński | Model-free control of a two-dimensional system based on uncertainty reconstruction and attenuation[END_REF][START_REF] Menhour | An efficient modelfree setting for longitudinal and lateral vehicle control. 
Validation through the interconnected pro-SiVIC/RTMaps prototyping platform[END_REF][START_REF] Michel | Commande "sans modèle" pour l'asservissement numérique d'un banc de caractérisation magnétique[END_REF][START_REF] Michel | Model-free based digital control for magnetic measurements[END_REF][START_REF] Mohammadridha | Model free iPID control for glycemia regulation of type-1 diabetes[END_REF][START_REF] De Miras | Active magnetic bearing: A new step for model-free control[END_REF][START_REF] Rodriguez-Fortun | Model-free control of a 3-DOF piezoelectric nanopositioning platform[END_REF][START_REF] Schwalb Moraes | Model-free control of magnetic levitation systems through algebraic derivative estimation[END_REF][START_REF] Tebbani | Model-based versus model-free control designs for improving microalgae growth in a closed photobioreactor: Some preliminary comparisons[END_REF][START_REF] Ticherfatine | Model-free approach based intelligent PD controller for vertical motion reduction in fast ferries[END_REF][START_REF] Wang | ZMP theory-based gait planning and model-free trajectory tracking control of lower limb carrying exoskeleton system[END_REF][START_REF] Wang | Event-driven model-free control in motion control with comparisons[END_REF][START_REF] Wang | Model-free based terminal SMC of quadrotor attitude and position[END_REF][START_REF] Xu | Robustness study on the model-free control and the control with restricted model of a high performance electro-hydraulic system[END_REF][START_REF] Yaseen | Attack-tolerant networked control system: an approach for detection the controller stealthy hijacking attack[END_REF][START_REF] Al-Younes | Robust model-free control applied to a quadrotor UAV[END_REF][START_REF] Zhou | Model-free deadbeat predictive current control of a surfacemounted permanent magnet synchronous motor drive systems[END_REF]). Some of the methods have been patented and some have been applied to life sciences [START_REF] Fliess | Dynamic compensation and homeostasis: a feedback control perspective[END_REF][START_REF] Join | A simple and efficient feedback control strategy for wastewater denitrification[END_REF][START_REF] Lafont | A model-free control strategy for an experimental greenhouse with an application to fault accommodation[END_REF][START_REF] Mohammadridha | Model free iPID control for glycemia regulation of type-1 diabetes[END_REF][START_REF] Tebbani | Model-based versus model-free control designs for improving microalgae growth in a closed photobioreactor: Some preliminary comparisons[END_REF]. Thus e(t) e(0) exp(-K P t), which implies that lim t→+∞ y(t) y (t) if and only if, K P > 0. In other words, the scheme ensures an excellent tracking of the reference trajectory. This tracking is moreover quite robust with respect to uncertainties and disturbances which can be numerous in a medical setting such as considered here. This robustness feature is explained by the fact that F in Equation (1) encompasses "everything," without trying to distinguish between its different components. In our application, sensorless outputs must be driven in order to correct dysfunctional immune responses of the patient. Here, this difficult problem is solved by assigning suitable reference trajectories to those systems variables which can be measured. 
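Since the closed loop built from Equations (1)-(2) gives ė + K_P e = F - F_est ≈ 0, the tracking error decays roughly as e(t) ≈ e(0) exp(-K_P t) whenever K_P > 0. As a concrete illustration of how such a loop is implemented in discrete time, the Python sketch below shows one possible realization; the function name, the backward-difference estimate of ẏ, and the crude estimate of F are simplifications introduced here for illustration only (the algebraic estimators actually advocated are given in Section 2.2, formulas (7)-(8)).

```python
def ip_controller_step(y, y_prev, y_star, y_star_prev, u_prev, alpha, Kp, dt):
    """One discrete-time step of the intelligent proportional (iP) controller
    built on the ultra-local model dy/dt = F + alpha*u of Equation (1).

    F is estimated from past measurements of y and u only (no plant model).
    A simple backward difference of y is used here; in practice a low-pass
    filter would attenuate measurement noise."""
    y_dot = (y - y_prev) / dt                   # crude derivative of the output
    F_est = y_dot - alpha * u_prev              # Equation (1): F ~ dy/dt - alpha*u
    y_star_dot = (y_star - y_star_prev) / dt    # derivative of the reference y*
    e = y - y_star                              # tracking error
    u = -(F_est - y_star_dot + Kp * e) / alpha  # iP control law, Equation (2)
    return u, F_est
```

Here alpha, Kp and the sampling time dt are tuning choices left to the practitioner, exactly as in the continuous-time setting above.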
This feedforward viewpoint is borrowed from the flatness-based control setting [START_REF] Fliess | Flatness and defect of non-linear systems: introductory theory and examples[END_REF] (see also [START_REF] Åström | Feedback Systems: An Introduction for Scientists and Engineers[END_REF][START_REF] Lévine | Analysis and Control of Nonlinear Systems -A flatness-based approach[END_REF][START_REF] Sira-Ramírez | Differentially Flat Systems[END_REF]). After justifying model-free control in Section 2, Section 3 presents results from applying the method to a heterogeneous in silico virtual patient population generated in [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF]. The cohort of virtual patients are summarized in Section 3.1. The computer simulations demonstrate the great robustness of the model-free control strategy with respect to noise corruption, as demonstrated in Section 3.4. Concluding remarks in Section 4 discuss some of the potential as well as the remaining challenges of the approach in the setting of controlling complex immune responses. A first draft has already been presented in [START_REF] Bara | Model-free immune therapy: A control approach to acute inflammation[END_REF]. 2 Justification of the model-free approach: A brief sketch Justification of the ultra-local model We first justify the ultra-local model given in [START_REF] Abouaïssa | Energy saving for building heating via a simple and efficient model-free control design: First steps with computer simulations[END_REF]. For notational simplicity, we restrict to a system with a single control variable u and a single output variable y. Assume that the system is a causal, or non-anticipative functional; In other words, for any time instant t > 0, let y(t) = F (u(τ) | 0 ≤ τ ≤ t) , (3) where F depends on the past and present but not the future, various perturbations, and initial conditions at t = 0. Example 1 A representation of rather general nonlinear functionals, also popular in the biological sciences, is provided by a Volterra series (see, e.g., [START_REF] Korenberg | The identification of nonlinear biological systems: Volterra kernel approaches[END_REF]): y(t) =h 0 (t) + t 0 h 1 (t, τ)u(τ)dτ+ t 0 t 0 h 2 (t, τ 2 , τ 1 )u(τ 2 )u(τ 1 )dτ 2 dτ 1 + . . . t 0 . . . t 0 h ν (t, τ ν , . . . τ 1 )u(τ ν ) . . . u(τ 1 )dτ ν . . . dτ 1 + . . . Solutions of quite arbitrary ordinary differential equations, related to input-output behaviors, may be expressed as a Volterra series (see, e.g., [START_REF] Fliess | An algebraic approach to nonlinear functional expansions[END_REF]). Let -I ⊂ [0, +∞[ be a compact subset and -C ⊂ C 0 (I ) be a compact subset, where C 0 (I ) is the space of continuous functions I → R, which is equipped with the topology of uniform convergence. Consider the Banach R-algebra S of continuous causal functionals (3) I × C → R. If a subalgebra contains a non-zero constant element and separates points in I × C , then it is dense in S according to the classic Stone-Weierstraß theorem (see, e.g., [START_REF] Rudin | Functional Analysis[END_REF]). Let A ⊂ S be the set of functionals which satisfy an algebraic differential equation of the form E(y, ẏ, . . . , y (a) , u, u, . . . , u (b) ) = 0, ( 4 ) where E is a polynomial function of its arguments with real coefficients. 
It has been proven in [START_REF] Fliess | Model-free control[END_REF] that with this, the conditions of the Stone-Weierstraß theorem are satisfied and, therefore, A is dense in S. Assume therefore that our system is "well" approximated by a system defined by Equation (4). Let ν be an integer, 1 ≤ ν ≤ a, such that ∂E/∂y^(ν) ≢ 0. The implicit function theorem yields locally y^(ν) = E(y, ẏ, . . . , y^(ν-1), y^(ν+1), . . . , y^(a), u, u̇, . . . , u^(b)). This may be rewritten as y^(ν) = F + αu. (5) In most concrete situations such as the one here, the derivation order ν = 1, as in Equation (1), is enough. See [START_REF] Fliess | Model-free control[END_REF] for an explanation and for some examples where ν = 2.
Closing the loop
If ν = 1 in Equation (5), we are back to Equation (1). The loop is closed with the intelligent proportional controller (2).
Estimation of F
Any rather general function [a, b] → R, a, b ∈ R, a < b, may be approximated by a step function F_approx, i.e., a piecewise constant function (see, e.g., [START_REF] Rudin | Real and Complex Analysis[END_REF]). Therefore, for estimating a suitable approximation of F in Equation (5), the question reduces to the identification of the constant parameter Φ in ẏ = Φ + αu. (6) Here recent real-time algebraic estimation/identification techniques are employed ([START_REF] Fliess | Closed-loop parametric identification for continuous-time linear systems via new algebraic techniques[END_REF][START_REF] Sira-Ramírez | Algebraic Identification and Estimation Methods in Feedback Control Systems[END_REF]). With respect to the well-known notations of operational calculus (see, e.g., [START_REF] Erdélyi | Operational Calculus and Generalized Functions[END_REF][START_REF] Yosida | Operational Calculus -A Theory of Hyperfunctions[END_REF]), which are identical to those of the classic Laplace transform taught everywhere in engineering (e.g., [START_REF] Doetsch | Introduction to the Theory and Application of the Laplace Transformation (translated from the German)[END_REF], [START_REF] Åström | Feedback Systems: An Introduction for Scientists and Engineers[END_REF]), Equation (6) yields: sY = Φ/s + αU + y(0), where U and Y are the operational analogues of u and y. In the literature, U and Y are often called the Laplace transforms of u and y, and s is the Laplace variable (see, e.g., [START_REF] Åström | Feedback Systems: An Introduction for Scientists and Engineers[END_REF]). We eliminate the initial condition y(0) by left-multiplying both sides by d/ds or, in other words, by differentiating both sides with respect to s: Y + s dY/ds = -Φ/s² + α dU/ds. The product by s corresponds in the time domain to the derivation with respect to time. Such a derivation is known to be most sensitive to noise corruptions. Therefore, multiply both sides on the left by s^(-2) in order to replace derivations by integrations with respect to time, which are quite robust with respect to noise (see [START_REF] Fliess | Analyse non standard du bruit[END_REF] for more explanations). Recall that d^ι/ds^ι, where ι ≥ 1 is an integer, corresponds in the time domain to the multiplication by (-t)^ι. Then F_est(t) = -(6/τ³) ∫_{t-τ}^{t} [(τ - 2σ)y(σ) + ασ(τ - σ)u(σ)] dσ, (7) where τ > 0 might be quite small. This integral may of course be replaced in practice by a classic digital filter. There are other formulas one can use for obtaining an estimate of F. For instance, closing the loop with the iP (2) yields: F_est(t) = (1/τ) ∫_{t-τ}^{t} (ẏ* - αu - K_P e) dσ. (8)
Remark 3 Measurement devices are always corrupted by various noise sources (see, e.g., [START_REF] Tagawa | Biomedical Sensors and Instruments[END_REF]). The noise is usually described via probabilistic/statistical laws that are difficult to write down in most concrete situations. Following [START_REF] Fliess | Analyse non standard du bruit[END_REF], where nonstandard analysis is used, the noise is related to quick fluctuations around zero [START_REF] Cartier | Integration over finite sets[END_REF]. Such a fluctuation is a Lebesgue-integrable real-valued time function F which is characterized by the following property: the integral of F over any finite time interval, \int_{τ_i}^{τ_f} F(τ)\,dτ, is infinitesimal. Noise is therefore attenuated by the integrals in formulas (7)-(8).

3 Computer Simulation

Virtual patients

In [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF], a cohort of virtual patients was defined by using the ODE model of [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF] for the underlying immune response dynamics of each patient, with the patients differing in the values of six of the rate parameters and two of the initial conditions. This same cohort was used in this study as well. The ODE model is an abstract dynamical representation of an acute inflammatory response to pathogenic infection:

\dot P(t) = k_{pg} P(t)\Bigl(1 - \frac{P(t)}{p_\infty}\Bigr) - \frac{k_{pm}\, s_m\, P(t)}{\mu_m + k_{mp} P(t)} - k_{pn}\, f(N(t))\, P(t) (9)

\dot N(t) = \frac{s_{nr}\, R(P(t),N(t),D(t))}{\mu_{nr} + R(P(t),N(t),D(t))} - \mu_n N(t) + u_p(t) (10)

\dot D(t) = \frac{k_{dn}\, f(N(t))^6}{x_{dn}^6 + f(N(t))^6} - \mu_d D(t) (11)

\dot C_a(t) = s_c + \frac{k_{cn}\, f(N(t) + k_{cnd} D(t))}{1 + f(N(t) + k_{cnd} D(t))} - \mu_c C_a(t) + u_a(t), (12)

where R(P,N,D) = f(k_{np} P(t) + k_{nn} N(t) + k_{nd} D(t)) and f(x) = \frac{x}{1 + (C_a(t)/c_\infty)^2}.

- Equation (9) represents the evolution of the bacterial pathogen population P that causes the inflammation.
- Equation (10) governs the dynamics of the concentration of a collection of early pro-inflammatory mediators N, such as activated phagocytes and the pro-inflammatory cytokines produced by N.
- Equation (11) corresponds to tissue damage (D), which helps to determine response outcomes.
- Equation (12) describes the evolution of the concentration of a collection of anti-inflammatory mediators C_a.

As explained in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF], f(x) represents a Hill function that models the impact of activated phagocytes and their by-products (N) on the creation of damaged tissue. With this modeling construct, tissue damage (D) increases in a switch-like sigmoidal fashion as N increases, so that sufficiently high levels of N are needed to incite a moderate increase in damage, and the increase in damage saturates for sufficiently elevated and sustained N levels. The Hill coefficient (exponent) 6 was chosen to model this aspect; it also ensures that the healthy equilibrium has a reasonable basin of attraction for the N/D subsystem. For the reference set of parameter values, which is given in Table I of [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I.
Derivation of model and analysis of antiinflammation[END_REF], the above model possesses three (positive) stable equilibria, which can be qualitatively interpreted as the following clinical outcomes:
- Healthy outcome: equilibrium in which P = N = D = 0 and C_a is at a background level.
- Aseptic death outcome: equilibrium in which the mediators N, C_a, and D are all at elevated levels, while the pathogen P has been eliminated.
- Septic death outcome: equilibrium in which the mediators N, C_a, and D together with the pathogen P are at elevated levels (higher than in the aseptic death equilibrium).

Fig. 1 Diagram of the mediators of the acute inflammatory response to pathogen as abstractly modeled in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF]. Solid lines with arrow heads and dashed lines with nodes/circular heads represent upregulation and inhibition, respectively. P: replicating pathogen, N: early pro-inflammatory immune mediators, D: marker of tissue damage/dysfunction caused by inflammatory response, C_a: inhibitory anti-inflammatory mediators, u_a and u_p: time-varying input controls for the anti- and pro-inflammatory therapy, respectively.

Note that the model was formulated to represent a highly abstract form of the complex processes involved in the acute inflammatory response. Hence, as explained in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF], the variables N and C_a represent multiple mediators with similar inflammatory characteristics, and D is an abstract representation of collateral tissue damage caused by inflammatory by-products. This abstraction reduces the description to four essential variables, which also allows for tractable mathematical analysis. The units of these variables are therefore arbitrary N-units, C_a-units, and D-units, since they represent various types of cells and thus qualitatively, rather than quantitatively, describe the response of the inflammatory mediators and their by-products. Pathogen units are more closely related to numbers of pathogens or colony forming units (CFU), but abstract P-units are simply used as well, and this population is scaled by 10^6/cc. More details about the model development can be found in [START_REF] Reynolds | A reduced mathematical model of the acute inflammatory response I. Derivation of model and analysis of antiinflammation[END_REF]. The diagram in Figure 1 characterizes the different interactions between the states of the inflammatory model. A solid line with an arrow head indicates up-regulation, whereas a dashed line with a circular head indicates inhibition or down-regulation of a process. For instance, the early pro-inflammatory mediators N respond to the presence of pathogen P by initiating self-recruitment of additional inflammatory mediators; N is therefore up-regulated by the interaction with P so as to eliminate the pathogen efficiently. The self up-regulation of P is due to replication. Furthermore, N inhibits P by eliminating it at some rate. The inflammation caused by N, however, results in tissue damage D, which can provide a positive feedback into the early inflammatory mediators depending on its intensity. To balance this, anti-inflammatory mediators such as cortisol, IL-10, and TGF-β can mitigate the inflammation and its harmful effects by suppressing the response by N and the effects of D in various ways. The C_a variable maintains a small positive background level at equilibrium in the absence of pathogen.
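For readers who want to reproduce the open-loop ("placebo") dynamics, the sketch below encodes the right-hand side of Equations (9)-(12) and integrates it with SciPy. The numerical parameter values and the initial condition are illustrative placeholders standing in for the reference set of the Reynolds et al. model; they are not the values used to generate the virtual patients of this study.

```python
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters; the actual reference values are those
# of Table I in the Reynolds et al. model.
par = dict(k_pg=0.3, p_inf=20.0, k_pm=0.6, s_m=0.005, mu_m=0.002, k_mp=0.01,
           k_pn=1.8, s_nr=0.08, mu_nr=0.12, mu_n=0.05, k_np=0.1, k_nn=0.01,
           k_nd=0.02, k_dn=0.35, x_dn=0.06, mu_d=0.02, s_c=0.0125, k_cn=0.04,
           k_cnd=48.0, mu_c=0.1, c_inf=0.28)

def f_inh(x, ca, c_inf):
    """The function f(x): inhibition by the anti-inflammatory mediator C_a."""
    return x / (1.0 + (ca / c_inf) ** 2)

def rhs(t, x, u_p, u_a, p):
    """Right-hand side of Equations (9)-(12); u_p and u_a are therapy inputs."""
    P, N, D, Ca = x
    fN = f_inh(N, Ca, p['c_inf'])
    R = f_inh(p['k_np'] * P + p['k_nn'] * N + p['k_nd'] * D, Ca, p['c_inf'])
    fND = f_inh(N + p['k_cnd'] * D, Ca, p['c_inf'])
    dP = (p['k_pg'] * P * (1.0 - P / p['p_inf'])
          - p['k_pm'] * p['s_m'] * P / (p['mu_m'] + p['k_mp'] * P)
          - p['k_pn'] * fN * P)
    dN = p['s_nr'] * R / (p['mu_nr'] + R) - p['mu_n'] * N + u_p(t)
    dD = p['k_dn'] * fN ** 6 / (p['x_dn'] ** 6 + fN ** 6) - p['mu_d'] * D
    dCa = p['s_c'] + p['k_cn'] * fND / (1.0 + fND) - p['mu_c'] * Ca + u_a(t)
    return [dP, dN, dD, dCa]

# Open-loop ("placebo") run of one illustrative virtual patient over 250 hours.
sol = solve_ivp(rhs, (0.0, 250.0), [1.0, 0.0, 0.0, 0.125],
                args=(lambda t: 0.0, lambda t: 0.0, par), max_step=0.1)
```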
Following the setup used in [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF], the reference parameter value for C_a(0) is set to 0.125 and virtual patients have a value within ±25% of this reference. In addition, the values of six other parameters as well as the initial condition for P are set to differing (positive) values from the reference set. In particular, these parameter values and initial conditions were generated from a uniform distribution on defined parameter ranges, or on a range of ±25% around the (mean) reference value. The remaining parameters retained the same values as those in the reference set. These differences distinguish one virtual patient from another. We use the set of 1000 virtual patients generated by [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF] in the way described above to evaluate the performance of the proposed control strategy. The set of patients was classified with respect to their outcome after an open-loop simulation that was long enough to determine the outcome numerically without ambiguity. Of the 1000 virtual patients, 369 did not resolve the infection and/or inflammatory response on their own and succumbed to a septic (141) or aseptic (228) death outcome. On the other hand, 631 exhibited a healthy outcome, within which there were two distinct subsets:
1. 379 of the 631 healthy virtual patients did not necessitate treatment intervention because their inflammatory levels did not exceed a specified threshold (defined as N(t) ≤ 0.05, set in [START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF]). These virtual patients were excluded from receiving treatment and from our in silico study.
2. The remaining 252 of these virtual patients did surpass the specified threshold, N(t) ≥ 0.05, and are included in the cohort that receives treatment. These virtual patients, however, would be able to resolve to health on their own, in the absence of treatment intervention. An important issue for these particular virtual patients is not to harm them with treatment.
Thus, 621 of the 1000 generated virtual patients receive treatment via our control design.
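A minimal sketch of how such a heterogeneous cohort can be drawn is given below. The names of the varied parameters and the sampling ranges are assumptions chosen for illustration only; the actual six rate parameters, their admissible ranges, and the selection thresholds are those defined in the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference values of the varied quantities (illustrative placeholders):
# six rate parameters plus the two varied initial conditions C_a(0) and P(0).
reference = dict(k_pg=0.3, k_cn=0.04, k_nd=0.02, k_np=0.1, k_dn=0.35,
                 mu_c=0.1, Ca0=0.125, P0=1.0)

def sample_virtual_patient(ref, spread=0.25):
    """Draw one virtual patient: each varied quantity is sampled uniformly
    within +/- `spread` (here 25%) of its reference value."""
    return {name: rng.uniform((1.0 - spread) * val, (1.0 + spread) * val)
            for name, val in ref.items()}

cohort = [sample_virtual_patient(reference) for _ in range(1000)]
```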
Once a suitable reference trajectory is provided for the states with sensors, the derivation of the control part is straightforward, as we now discuss.

Control design

As in previous control studies using this model, we assume that the state components P and D in Equations (9) and (11) are not measurable, whereas the states N and C_a in Equations (10) and (12), respectively, are easily measured and influenced by the control variables u_p and u_a, respectively. We then introduce two equations of type (1):

\dot N = F_1 + α_p u_p(t) (13)

\dot C_a = F_2 + α_a u_a(t). (14)

We emphasize that, like in [START_REF] Lafont | A model-free control strategy for an experimental greenhouse with an application to fault accommodation[END_REF], the above two ultra-local systems may be "decoupled" so that they can be considered as monovariable systems. It should nevertheless be clear, from a purely mathematical standpoint, that F_1 (resp. F_2) is not necessarily independent of u_a (resp. u_p). The two corresponding iPs (2) then read

u_p = -\frac{F_{1,est} - \dot N^* + K_{P1}\, e_p}{α_p} (15)

u_a = -\frac{F_{2,est} - \dot C_a^* + K_{P2}\, e_a}{α_a}. (16)

The tracking errors are defined by e_p = N - N^* and e_a = C_a - C_a^*, where N^* and C_a^* are the reference trajectories corresponding to the pro- and anti-inflammatory measurements N and C_a, respectively. Since F encapsulates all the model uncertainties and disturbances, as already explained in the introduction, a good estimate F_est provides local exponential stability of the closed-loop system. Algorithm 1 summarizes the functioning of the proposed methodology for immune regulation:

Algorithm 1 Model-free control
Step 1 (initialization, k = 0): set u_p(0) = 0, define the reference trajectory N^*, initialize K_P and α, and fix the sampling time T_e.
For 1 ≤ k ≤ T_f:
Step 2: get the measurements of N and u_p;
Step 3 (estimation of F): estimate F according to a discrete implementation of Equation (7);
Step 4: close the loop according to Equation (15) and return to Step 2.

Note that the same design procedure leads to the derivation of the control u_a; this time the measurement C_a is associated with the control u_a (see Equation (14)). The interesting fact about this approach is that we do not need to control the state variables P and D, which are not measurable. Solving the tracking problem, i.e., following closely the reference trajectories of N and C_a, is enough to drive the pathogen and the damage into the basin of attraction of the healthy equilibrium, to which they then converge as time progresses, thereby 'curing' the patient.
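As an illustration of Algorithm 1 and of the iP (15), the following sketch simulates the closed loop for the pro-inflammatory channel of one virtual patient, reusing the rhs, par, and f_est_algebraic functions sketched above. The reference trajectory N_ref, the estimation window, and the explicit Euler stepping are assumptions made to keep the example short; they are not the settings used to produce the results reported next.

```python
import numpy as np

dt = 1.0 / 60.0               # sampling time of 1 minute, expressed in hours
T_f = 250.0                   # simulation horizon (hours)
alpha_p, K_P1 = 1.0, 0.5      # iP tuning for the N-channel
win = 10                      # estimation window length, in samples (assumption)

def N_ref(t):
    """Illustrative reference trajectory for N (the one actually used is
    inspired from the cited optimal-control study and is not reproduced here)."""
    return 0.4 * np.exp(-((t - 15.0) / 10.0) ** 2)

def dN_ref(t, h=1e-3):
    return (N_ref(t + h) - N_ref(t - h)) / (2.0 * h)

x = np.array([1.0, 0.0, 0.0, 0.125])     # [P, N, D, Ca], illustrative patient
u_hist, N_hist = [0.0], [x[1]]
for k in range(1, int(T_f / dt)):
    t = k * dt
    # Step 3: estimate F_1 over the last `win` samples with formula (7).
    if k >= win:
        F1_est = f_est_algebraic(np.array(N_hist[-win:]), np.array(u_hist[-win:]),
                                 alpha_p, win * dt, dt)
    else:
        F1_est = 0.0
    # Step 4: intelligent proportional controller of Equation (15).
    e_p = x[1] - N_ref(t)
    u_p = -(F1_est - dN_ref(t) + K_P1 * e_p) / alpha_p
    # Advance the patient model by one step (u_a kept at zero in this sketch).
    x = x + dt * np.array(rhs(t, x, lambda s: u_p, lambda s: 0.0, par))
    u_hist.append(u_p)
    N_hist.append(x[1])
```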
Results without noise corruption

We first examine the performance of the control approach with respect to the set of virtual patients and their individual initial conditions. The robustness of the control law under corrupting measurement noise is discussed afterward. In what follows, the reference trajectories, which are inspired from [START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF], correspond to the measurable states N and C_a; they are highlighted with dashed lines. The simulations for all patients were performed under the following conditions:
- a sampling time of 1 minute,
- α_p = 1, α_a = 10 in Equations (13)-(14),
- K_P1 = K_P2 = 0.5 in Equations (15)-(16), and
- 250 hours of simulation time, so as to determine outcomes numerically without ambiguity; we stress that the control objectives were reached in far less than 250 hours.

The use of the same reference trajectory for all simulations emphasizes the robustness of the proposed control approach with respect to the variability among virtual-patient parameter values and initial conditions. Figure 2 shows the successful outcomes for 92 out of the 141 septic patients, who were cured when applying the control shown in Figure 3.

Fig. 3 Time evolution of the controls u_p and u_a for the set of septic patients on which the strategy was implemented. The zoomed-in plot for u_p provides more detail on the duration of the control dose; its x-axis covers only two hours since the dose is zero for the remaining time.

The patients that converged to the septic death equilibrium (as explained in Section 3.1) are the ones who were not cured with the approach. The criterion used to classify a therapeutic control as successful is that the levels of pathogen (P) and damage (D) are reduced to very low values (< 0.2). All virtual patients not meeting this criterion were classified as septic death outcomes if, in addition, the pathogen state did not approach zero, and as aseptic death outcomes otherwise. These two latter cases correspond to virtual patients not saved by the applied dosage. A closer look at the first hours in Figure 3 shows that the amplitude of the control variables is the main difference between the dosing profiles. Similar remarks apply to u_a. Analyzing u_a shows that it is applied for a longer period of time than u_p, but with a smaller amplitude. This was not observed in the optimal-control setting of [START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF], where the dosing strategy ended at 30 hours. It is thus interesting to ask whether purposefully restraining the dose quantity would have a sizable impact on the result. Surprisingly, however, we observe that forcing the anti-inflammatory control input u_a to be zero after 28 hours does not affect the number of cured patients in the current study. This is an important insight for preventing unnecessary and lengthy dosing protocols. Whereas the maximum duration of the derived optimal-control doses in [START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF] is 30 hours, it is much longer in the model-free control simulations. This extended duration is the price to pay in the model-free setting. On the other hand, the model-free control tracks a single given reference trajectory for all the virtual patients, whereas the optimal-control strategy of [START_REF] Bara | Immune therapeutic strategies using optimal controls with L 1 and L 2 type objectives[END_REF] strives to infer the trajectories from a mathematical model that is required to be a 'good' model for all the virtual patients. Table 1 displays the results from our study for the 621 patients that qualified for therapy because of sufficiently elevated inflammation. The first column displays the outcomes in the absence of intervention, labeled the placebo outcome. Without intervention, 40% (252) resolve to a healthy outcome, while the remaining 60% (369) fall into one of the two unhealthy outcome categories. We use the total of 369 unhealthy placebo outcomes to determine the percentage of those rescued by the treatment. Likewise, we use the total of 252 healthy placebo outcomes to determine the percentage of those harmed (i.e., patients who would have resolved to health without treatment but instead converged to one of the death states after receiving treatment). Figures 2 and 3 also display the time courses of the sensorless states P and D, which were guided via the reference trajectories for the states with sensors, N and C_a, along with the corresponding control input. The results are reminiscent of [START_REF] Bara | Optimal control of an inflammatory immune response model[END_REF][START_REF] Bara | Immune Therapy using optimal control with L 1 type objective[END_REF][START_REF] Day | Using nonlinear model predictive control to find optimal therapeutic strategies to modulate inflammation[END_REF]: first apply a large dose of pro-inflammatory therapy, u_p, followed by an anti-inflammatory dose, u_a.
The latter attempts to prevent excessive tissue damage resulting from the additional pro-inflammatory signals from the first dose. The information we can derive from Table 1 is that the control strategy clearly improves the percentage of cured patients when compared to the placebo case. Our therapy rescued 85.66% of the total patient population (621) and 75.88% of the combined septic and aseptic population (369). Additionally, 0% of the healthy patients are harmed. Figure 4 shows the evolution of the unobservable states P and D together with the measured states N and C_a for the set of 228 aseptic patients. Of these, 188 were able to recover from an aseptic placebo outcome when the generated controls in Figure 5 are applied, driving the pathogen P and the level of damage D to zero. Again, one can observe from Figure 4 that some trajectories diverge to the unhealthy aseptic region, where the pathogen has a zero value but the other state variables remain elevated. Overall, the simulation results with respect to successful control of both the septic and the aseptic placebo outcomes are very encouraging when one considers that only a single reference trajectory was used for the heterogeneous population. The absence of perfect tracking should not be seen as a weakness of the model-free control approach, since the control objective has been attained in most scenarios. One of the important features of the presented data-driven control approach is the necessity of a suitable choice for the reference trajectories. To be more explicit, consider a naive choice of the reference trajectories: a trajectory exponentially decaying to zero for N_ref and another trajectory exponentially decaying to the C_a steady-state value 0.125. This choice would not satisfy the control objective, since the generated control doses are negative and the level of pathogen would converge to its maximum allowable value. The reason for this behavior is that the iP controller is only concerned with reducing the tracking error, without imposing any constraints on the control inputs. Constraints on the control are not implemented simply because the model-free approach is not formulated as an optimization problem. 2 However, choosing a reference trajectory that accounts for the correct time-varying dynamics of the inflammatory response will generate appropriate doses. That is, if we chose, for example, a reference trajectory with a smaller amplitude or with slower rising dynamics than the one used in this work, it is highly probable that the patient would not converge to the healthy state under the generated control doses. Similar remarks can be made about Figure 5 as were made above for the set of placebo septic patients. It is not surprising to notice a very similar pattern in the generated control doses for the septic and the aseptic sets of patients. This can be explained in part by the common control objective of tracking the same reference trajectory, and also by what has been discussed before regarding how the inflammatory immune system needs to react in order to eliminate the pathogen without incurring significant damage.

Results with noise corruption

Consider now the effects of corrupting measurement noise on our control problem. Here, white Gaussian noise is taken into account, as in many academic studies (see, e.g., [START_REF] Blanc-Lapierre | Théorie des fonctions aléatoires -Applications à divers phénomènes de fluctuation[END_REF][START_REF] Rabiner | Theory and Application of Digital Signal Processing[END_REF]). Otherwise, the same setting as in the previous section is kept.
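A minimal sketch of how this corruption can be introduced is shown below: zero-mean Gaussian noise is added to the sampled values of N and C_a before they are passed to the estimators and the iPs. The function name is an assumption made for the sketch; the standard deviation of 10^-3 is the value reported in the next paragraph.

```python
import numpy as np

rng = np.random.default_rng(1)
NOISE_STD = 1e-3      # standard deviation of the measurement noise

def measure(x, std=NOISE_STD):
    """Noisy measurements of the two sensed states; x = [P, N, D, Ca].
    Only N and C_a are measured, so only they are corrupted."""
    n_meas = x[1] + rng.normal(0.0, std)
    ca_meas = x[3] + rng.normal(0.0, std)
    return n_meas, ca_meas
```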
Figures 6 and 7 display the states and the corresponding controls for the set of 141 septic placebo patients, of which 90 were cured. The addition of measurement noise with a standard deviation equal to 10^-3 only changes the outcome for two of the septic patients when compared to the initial simulations without noise. For the aseptic set of patients, however, 16 additional patients did not survive when measurement noise is considered.

Remark 4 For the model-free simulations with measurement noise, two remarks are important with regard to the discussion of the previous section. First, for the case of septic patients, restraining the controls u_p and u_a to be zero after 2 hours and 28 hours, respectively, does not considerably affect the number of cured patients, since 90 patients were cured. One would fail to obtain a similar result when altering the control in the same way for the aseptic case: although not shown here, a decrease of around 45 patients was observed when compared to the 172 who were cured without restraining the control inputs.

Concluding remarks

In this study we propose a new data-driven control approach in order to appropriately regulate the state of an inflammatory immune response in the presence of pathogenic infection. The performance of the proposed control strategy is investigated on a set of 621 heterogeneous model-based virtual patients exhibiting rate-parameter variability. The results of the model-free strategy in the presence of measurement noise are also explored and discussed. The robustness of the approach to parameter variability and noise disturbances is seen in the fact that a single reference trajectory was used to inform the approach about desirable inflammatory dynamics, and from this the individual dosing strategies found largely produced healthy outcomes. The downside of the proposed control approach for this specific application is the necessity of applying the control for a longer period of time, although with small doses. However, we have seen that artificially restricting this small dose from being provided does not affect the outcome of the states when no measurement noise is used, though it did in the scenarios with measurement noise. We want to emphasize the importance of a suitable choice for the reference trajectory; further studies may provide better insights in this direction. Past successes of the model-free control feedback approach in other realistic case studies should certainly be viewed as encouraging for the future development of our approach to the treatment of inflammation. Additionally, the model-free control approach seems to be both theoretically and practically simpler when compared to model-based control designs. This newer viewpoint for control problems in biomedicine needs to be further analyzed in order to confirm its applicability in these complex dynamic systems, where the ability to realistically obtain frequent measurement information is limited.
Fig. 2 Dashed (--) curves in the panels for the variables N and C_a denote the reference trajectories used in the simulation. The various colored curves display the closed-loop state responses for the set of 141 septic patients, of which 92 resolved to the healthy outcome.

Fig. 4 Dashed (--) curves in the panels for the variables N and C_a denote the reference trajectories used in the simulation. The various colored curves display the closed-loop state responses for the set of 228 aseptic placebo patients, of which 188 were cured.

Fig. 5 Time evolution of the control inputs u_p and u_a for the set of aseptic patients shown in Figure 4. The zoomed-in plot for u_p provides a better perspective on the duration of the control dose; its x-axis covers two hours only since the dose is zero afterward.

Fig. 6 Dashed (--) curves in the panels for the variables N and C_a denote the reference trajectories used in the simulation. The various colored curves display the closed-loop state responses for the set of 141 septic placebo patients, of which 90 were cured. Note that the measurements N and C_a were corrupted with Gaussian noise.

Fig. 7 Time evolution of the controls u_p and u_a for the set of septic placebo patients when the measurements N and C_a were corrupted with Gaussian noise. The zoomed-in plot for u_p provides more detail on the duration of the control dose, with an x-axis shown only for two hours since the doses are zero afterward.

Fig. 8 Dashed (--) curves in the panels showing the time courses of the variables N and C_a denote the reference trajectories used in the simulation. The various colored curves display the closed-loop state responses for the set of 228 aseptic placebo patients, of which 172 were cured. Note that the measurements N and C_a were corrupted with Gaussian noise.

Fig. 9 Time evolution of the controls u_p and u_a for the set of 228 aseptic placebo patients when the measurements N and C_a were corrupted with Gaussian noise. The zoomed-in plot for u_p provides more detail on the duration of the control dose, with an x-axis shown only for two hours since the doses are zero afterward.

Table 1 Results of the model-free immune therapy strategy without measurement noise compared to the placebo outcomes.
Therapy type:                       Placebo       Model-free control therapy
Percentage Healthy:                 40% (252)     85.66% (518)
Percentage Aseptic:                 37% (228)     6.4% (40)
Percentage Septic:                  23% (141)     7.8% (49)
Percentage Harmed (out of 252):     n/a           0% (0/252)
Percentage Rescued (out of 369):    n/a           75.88% (280/369)

Table 2 Results of the model-free immune therapy strategy with measurement noise compared to the placebo outcomes.
Therapy type:                       Placebo       Model-free control therapy
Percentage Healthy:                 40% (252)     82.76% (514)
Percentage Aseptic:                 37% (228)     9.02% (56)
Percentage Septic:                  23% (141)     8.21% (51)
Percentage Harmed (out of 252):     n/a           0% (0/252)
Percentage Rescued (out of 369):    n/a           71% (262/369)

2 Allowing the control to be only positive semidefinite will result in a zero control all the time.

Work partially supported by the NSF-DMS Award 1122462 and a Fulbright Scholarship.
Brenda Laca Introduction This paper is mainly concerned with the uses of the subjunctive in Modern Spanish (section 3). 1 Section 2 gives a brief sketch of those aspects of the temporal-aspectual system of Spanish that constitute a necessary background for the interpretation of subjunctive forms. Section 4 briefly describes the conditional, which exhibits very close links to some subjunctive forms. The imperative mood is discussed in the subsection devoted to the subjunctive in root contexts (3.3.4). The temporal-aspectual system of Spanish The neo-Reichenbachian system proposed by Demirdache & Uribe-Etxeberria (2007) proves particularly useful for representing the tense-aspect system of Spanish. In this system, tense is modelled as a relation between the time of evaluation (Ast-T, a direct descendant of the Reichenbachian R understood as an interval) and a highest anchor, which is normally the time of speech (Utt-T) in matrix contexts. Possible relations are anteriority (anchor after anchored), inclusion or coincidence, and posteriority (anchor before anchored). These relations are replicated for aspect, which expresses a relation between the time of the event (Ev-T) and Ast-T. The analysis I propose for Spanish is summarized in Table 1. Tenses are illustrated with the 1 st Pers. Sing. of the verb cantar 'sing', aspects with aspectualized infinitival forms. Table 1 : Tense and grammatical aspect in Spanish I assume that aspect is not expressed by simple tenses in Spanish, with the notable exception of the preterite, which is a perfective tense requiring that Ast-T includes Ev-T [START_REF] Laca | Périphrases aspectuelles et temps grammatical dans les langues romanes[END_REF]. All other simple forms leave the relation between Ast-T and Ev-T unspecified and are in this sense "aspectually neutral" [START_REF] Smith | The parameter of aspect[END_REF][START_REF] Reyle | Ups and downs in the theory of temporal reference[END_REF], Schaden 2007: Chap.3). "Aspectually neutral" forms are not totally unconstrained, but whatever preferences they exhibit result (a) from polarisation effects due to the existence of an aspectually marked competing form -thus, an imperfect will strongly prefer imperfective interpretations in the contexts in which it contrasts with the preterite, a simple future will prefer perfective interpretations by contrast with a progressive future [START_REF] Laca | Périphrases aspectuelles et temps grammatical dans les langues romanes[END_REF]; (b) from the temporal structure of the eventuality description, according to a very general pattern which essentially excludes imperfective interpretations of bona-fide telic descriptions (Demirdache & Uribe-Etxeberria 2007). Deictic and anaphoric (or zero) tenses are distinguished by supposing that the latter do not have Utt-T, but an interval not identical with Utt-T, dubbed here Tx, as anchor. The notion of anaphoric tense used here is very restrictive, in that this anchor can only be provided by an embedding predicate of propositional attitude: in the sense used here, an anaphoric tense can only appear in reported speech or reported thought contexts. The introduction of Tx in the system 2 is designed to provide a unified interpretation of forms exhibiting imperfect morphology (the imperfect itself, as well as the pluperfect and the conditional) and to solve the well known problem posed by perfect conditional forms (habría cantado 'would have sung') without assuming a third layer of temporal relations next to Tense and Aspect. 
It can also prove useful in accounting for the fact that such forms consistently develop counterfactual uses. The splitting of the imperfect into two uses, a bona fide past tense and an anaphoric "present of the past", is justified by the behavior of the imperfect with modal verbs -a point that cannot be developed here. Aspect is explicitly expressed in Spanish by a set of periphrastic combinations exhibiting a characteristic behavior [START_REF] Laca | Périphrases aspectuelles et temps grammatical dans les langues romanes[END_REF]. Periphrastic combinations formed with haber + PP are uniformly treated as compound or perfect tenses (perfect, pluperfect, future and conditional perfect) in the Spanish grammatical tradition, and carry the main bulk of the expression of secondary anteriority relations. The Spanish progressive shows a very similar distribution to that of the English progressive, and the prospective closely parallels the English be-going-to-construction. The subjunctive Subjunctive morphology In Modern Spanish, the subjunctive exhibits two simple forms, the present and the imperfect, as well as two compound forms, the perfect and the pluperfect. The present is built on the present stem of the verb by a change of the thematic vowel, a > e for verbs of the first conjugation class and e/i > a for verbs of the second and third classes. To the exception of a handful of irregular cases, the stem normally appears in the form it takes in the 1 st pers. sing. present indicative. The imperfect is built on the preterite/perfect stem (the one of preterite indicative), and exhibits the peculiarity of having two distinct markers, -ra-and -se-, which are traditionally held to be in allomorphic variation. Person marking corresponds to the general pattern of the language, with -∅ for 1 st and 3 rd pers. sing., -s for 2 nd pers. sing., -mos for 1 st pers. pl., -is for 2 nd pers. pl., and -n for 3 rd pers. pl [START_REF] Boyé | The structure of allomorphy in Spanish verbal inflection[END_REF]. Compound forms are built with the past participle and the auxiliary haber, which appears in the present subjunctive in the formation of the perfect (haya cantado), and in the imperfect subjunctive in the pluperfect (hubiera/ hubiese cantado). The set of forms of the subjunctive is radically reduced in comparison to that of the indicative on account of the lack of a perfective/imperfective (neutral) contrast in the past forms, on the one hand, and of the lack of a present/future contrast on the other. Medieval and Classical Spanish had a form for the future subjunctive. Built on the preterite/perfect stem with the marker -re-(cantare/ quisiere/ saliere), this simple form was flanked by a corresponding compound form with haber in the future subjunctive (hubiere cantado). Although surviving in some set expressions (sea como fuere 'be as it may'), in juridical language, and possibly in some reduced dialectal areas, future subjunctive forms seem to have disappeared from general usage as far back as the 18 th century [START_REF] Ridruejo | ¿Cambios iterados en el subjuntivo español?[END_REF][START_REF] Camus Bergareche | El futuro de subjuntivo en español[END_REF][START_REF] Eberenz | Sea como fuere. En torno a la historia del futuro de subjuntivo español[END_REF]/1990). 
A comparison with the original Latin subjunctive paradigm shows that the main differences are directly or indirectly related to the loss of a conjugation system based on the contrast between infectum and perfectum and to the concomitant generalization of compound forms for perfects. The main reinterpretation processes are the following: (i) the Latin pluperfect subjunctive is reinterpreted as a general past (imperfect) subjunctive (canta(vi)sse > cantase). (ii) the Latin perfect subjunctive conflates with the Latin future perfect indicative to give the form of the future subjunctive (canta(ve)rim/ canta(ve)ro > cantare). The resulting form is indistinguishable from the Latin imperfect subjunctive for all verbs lacking a perfect stem form distinct from the present stem (cantare(m)), so that the latter can be held either to have been entirely given up or to have concurred in the formation of the future subjunctive [START_REF] Ridruejo | ¿Cambios iterados en el subjuntivo español?[END_REF]). (iii) the Latin pluperfect indicative is reinterpreted as a subjunctive form (canta(ve)ram > cantara), and ends up being largely equivalent to the imperfect subjunctive arisen from the Latin perfect subjunctive. This process sets on in Medieval Spanish and stretches well into the contemporary language. The details of this semantic development are extremely complex, though clearly linked to a cross-linguistically widespread phenomenon which consists in exploiting past morphology for the expression of counterfactuality [START_REF] Ridruejo | ¿Cambios iterados en el subjuntivo español?[END_REF][START_REF] Iatridou | The grammatical ingredients of counterfactuality[END_REF]. In contemporary language, the -ra-form preserves some of its etymological uses in contexts from which the -se-form is excluded, most notably as a pluperfect or preterite indicative in subordinate clauses, and with the modals deber 'must', poder 'can', and querer 'wish' in independent clauses. It tends to fully replace the -se-form for the expression of the imperfect subjunctive in a large number of regional, specially American varieties. Although lacking any direct impact on the stock of subjunctive forms, the emergence of a conditional in Romance -together with changes in the uses of the imperfect indicativehas profoundly affected the distribution and interpretation of the subjunctive. Temporal and aspectual relations The comparatively poor stock of subjunctive forms and the fact that the subjunctive is a dependent mood appearing mainly in sequence-of-tense contexts have given rise to a debate as to the temporal interpretation of subjunctive forms. This debate concentrates on the contrast between the present and the imperfect subjunctive and is formulated in the generative tradition as a question concerning the existence of an independent Tense feature in subjunctive clauses. The issue carries over to the contrast between the compound forms of the subjunctive: although both convey a secondary anteriority relation, they contrast as to the possible highest anchors for this relation. 
On the basis of the distribution in (1a-b), [START_REF] Picallo | El nudo FLEX y el parámetro de sujeto nulo[END_REF][START_REF] Picallo | El nudo FLEX y el parámetro de sujeto nulo[END_REF]) has argued that subjunctive clauses lack independent Tense, and that subjunctive forms are selected via a necessary anaphoric link with the temporal features of the matrix sentence, in such a way that a PAST in the matrix determines an imperfect/pluperfect subjunctive, whereas a NON-PAST in the matrix determines a present/perfect subjunctive: This claim has been challenged on a number of grounds. Strict temporal selection holds only in a restricted type of contexts, particularly those involving subjunctive selection by a forward-shifting predicate 3 or in causative constructions, and subjunctive licensing in subject clauses of copular sentences. Even in these contexts, it often takes the form of a constraint banning certain crossed combinations, but not others. Thus, forward-shifting predicates exclude an imperfect subjunctive under a matrix NON-PAST (2c), but allow a present subjunctive under a matrix PAST (2b), whereas copular sentences allow an imperfect subjunctive under a matrix NON-PAST (3b), but exclude a present subjunctive under a matrix PAST (3c) (for further details, see [START_REF] Suñer | Concordancia temporal y subjuntivo[END_REF]/1990[START_REF] Kempchinsky | Más sobre el efecto de referencia disjunta del subjuntivo[END_REF][START_REF] Quer | Mood at the interface[END_REF] The conclusion seems thus inescapable that subjunctive forms make a temporal contribution of their own: what appears as strict temporal selection is a result of the interaction between the semantic properties of the context and this temporal contribution. 1 a. The interpretation of the PAST / NON-PAST combinations in (2b) and (3b) offers an immediate clue as to what this contribution is. ( 2b) is an instance of a double access configuration, in which the time of the subordinate clause is calculated with Utt-Time as anchor [START_REF] Kempchinsky | Más sobre el efecto de referencia disjunta del subjuntivo[END_REF]): the requested arrival must follow Utt-Time. On the other hand, (3b) contrasts with (3c): the arrival must precede the time of epistemic evaluation in (3b), which reports present epistemic uncertainty about an already settled matter, whereas it follows the time of evaluation in (3c), which reports past metaphysical uncertainty about a matter not yet settled at that time. I would like to suggest that the contrast between the present and the imperfect subjunctive parallels that between the corresponding indicative tenses. The present is a deictic tense, always anchored with regard to Utt-Time. The imperfect can be an anaphoric tense, taking Tx as anchor ("present of the past"), but it can also have a deictic interpretation, in which case it signals anteriority with regard to Utt-Time. This latter interpretation becomes prominent whenever the matrix context does not provide a past temporal anchor, i.e. a suitable Tx. The temporal contrast between the present and the imperfect subjunctive is somewhat obscured by the fact that the latter gives rise to interpretations in which the event time is simultaneous or forward-shifted with regard to Utt-T. The imperfect subjunctive cannot be understood either as a deictic past or as an anaphoric "present of the past" in main clauses expressing wishes (with ojalá as a licensing adverb), nor in the antecedent of conditionals. 
It does not contrast in temporal location with the present subjunctive or with the present indicative, respectively:

5 a. Ojalá estuvieran/ estén en casa.
hopefully be.IMPF.SBJ.3PL / be.PRS.SBJ.3PL in house
'I wish they were / I hope they are at home'
b. Si estuvieran/ están en casa...
if be.IMPF.SBJ.3PL / be.PRS.IND.3PL in house
'If they were / are at home...'

Such cases can be assimilated to the numerous instances of past tenses being used for signaling counterfactuality or non-realistic modal bases (see [START_REF] Iatridou | The grammatical ingredients of counterfactuality[END_REF]). 4 By contrast with the present subjunctive resp. indicative versions, which only indicate epistemic uncertainty, in the imperfect subjunctive versions the world of evaluation w0 is assumed not to be a world in which they are at home in (5a-b). In fact, counterfactual uses of the imperfect subjunctive rather reinforce the analogy with the imperfect indicative, which in some so-called modal uses locates event-time simultaneously or subsequently to Utt-T, but does not locate it in the world of evaluation:

6 Yo que tú no se lo contaba.
I that you not him/her it tell.IMPF.IND.1SG
'If I were you, I wouldn't tell him/her'

The simple forms of the subjunctive are aspectually neutral. The compound forms convey an anteriority relationship whose highest anchor can be Utt-T in the case of the perfect subjunctive, and is normally a Tx preceding Utt-T in the case of the pluperfect. The subjunctive is compatible with the periphrastic expression of prospective aspect, but prospective subjunctives are excluded in forward-shifting matrix contexts such as volitionals and directives. To sum up, the temporal-aspectual organization of the subjunctive does not differ radically from that of the indicative. It has a deictic form indicating coincidence with Utt-T, the present, and a form that can function anaphorically, indicating coincidence with Tx, or deictically, indicating precedence with regard to Utt-T, the imperfect. This latter form is exploited for counterfactual uses, signaling coincidence with Utt-T in a "world history" different from w0. Forms indicating coincidence regularly give rise to forward-shifted readings, sometimes as a function of the forward-shifting properties of the matrix context, but often simply as a result of the type of eventuality described in the clause. Compound forms indicate a secondary anteriority relation. When the highest anchor for this secondary anteriority relationship is Utt-T, the perfect subjunctive is very close to a deictically functioning imperfect subjunctive.

The meaning and uses of the subjunctive

General semantic characterizations of mood are notoriously difficult. The subjunctive is clearly an expression of modality, in as far as all its uses involve consideration of sets of alternative possible worlds, i.e. non-totally realistic modal bases. However, this characterization captures a necessary, but not a sufficient, condition for subjunctive use. Whereas the indicative corresponds to the default mood, appearing in main assertions, but also in questions and in a number of dependent clauses, the subjunctive is a dependent mood, which is subject to specific licensing conditions. This does not mean that the subjunctive is restricted to subordinate clauses, although the widest array of its uses does involve syntactic subordination. We will first discuss the subjunctive in dependent clauses and then the subjunctive in root contexts.
Argument clauses Two distinctions have proven particularly useful when describing uses of the subjunctive. The first opposes intensional contexts to polarity contexts [START_REF] Quer | Mood at the interface[END_REF]) as subjunctive licensors. In intensional contexts, the subjunctive is triggered by the lexical properties of a predicate, which can be a verb, but also an adjective or a noun: 9 Quiere que hablen de él. want.PRS.IND.3SG that talk.PRS.SBJ.3PL of him 'He wants people to talk about him' In polarity contexts, it is essentially a negation in the matrix context that licenses a subjunctive which would be otherwise excluded. 10 Nunca dijo que estuviera enfermo. never say PRT.IND.3SG that be IMPF.SBJ.3SG ill 'S/he never said that he was ill' The second, more traditional distinction, opposes contexts of rigid subjunctive selection to contexts in which mood alternation is possible. Thus, the subjunctive is the only possible option in ( 9) whereas (10) also admits the indicative. However, the two distinctions do not overlap: (11a-c) show cases of mood alternation for "intensional" subjunctives. 'S/he consented not to be paid/ admitted that s/he was not being paid' The clear meaning differences between the subjunctive and the indicative versions in (11a-c) give precious clues as to the semantic contribution of the subjunctive. In (11a), the subjunctive version reports a directive speech act, whereas the indicative version reports a statement of fact. This sort of contrast extends to a large class of verbs of communication. In (11b), the indicative version asserts that the subject of the main verb checked a fact. The subjunctive version signals that the subject has a vested interest in this fact, and has possibly contributed to its coming about, for instance by closing the door herself. Finally, in (11c), the indicative version conveys acknowledgement of the truth of the propositional content of the object clause, whereas the subjunctive version indicates acquiescense or agreement with a suggestion. What is common to all the subjunctive versions is (a) an "element of will" on the side of the subject of the propositional attitude verb as to the coming about of the state of affairs described in the subordinate clause, and (b) the fact that the subject is involved as a causal factor that can possibly favor or prevent this coming about. Bouletic modality and causation are involved in most cases of rigid subjunctive selection, 5 namely with volitionals, as in (9), and with directives, implicatives, and causatives (12). Note that the latter two cases assert the truth of the propositional content of the subjunctive clause, thus infirming the widely held view that the subjunctive signals lack of assertion: However, some emotive-factive predicates exhibit uses as verbs of communication. They report speech acts which convey at the same time the assertion of a fact and an evaluation of this fact by the subject of the propositional attitude. In such uses, they lose their factive status, in as far as they do not presuppose the truth of their complement, and they occasionally give rise to mood alternation: 15 Se lamenta -injustificadamente-de que nadie REFL complain PRS.IND.3SG unjustifiedly of that nobody lo comprende/ comprenda. 
him understand.PRS.IND.3SG/ understand.PRS.SBJ.3SG 'He unjustifiedly complains about not being understood by anybody' Mood alternation is sensitive, in such contexts, to the foregrounding of the propositional content of the subjunctive clause (indicative) or of the emotive-factive predicate (subjunctive), as shown by the fact that the focus of pseudo-cleft structures allows the indicative even in the absence of reported-speech readings [START_REF] Quer | Mood at the interface[END_REF][START_REF] Grae | /to appear. Nueva gramática de la lengua española[END_REF] As argued by [START_REF] Quer | Mood at the interface[END_REF], the causation component in the semantics of emotivefactive predicates is a decisive factor in mood selection. At the same time, these predicates convey the (positive or negative) evaluation of a fact on the side of the Experiencer. Evaluative predicates constitute another major class of subjunctive selectors. They include a couple of verbs such as bastar'suffice', convenir'be advisable', urgir'be urgent', and a large class of adjectives and nouns, as well as the adverbs bien 'well, right, proper' and mal 'bad, unfair, inappropriate' [START_REF] Grae | /to appear. Nueva gramática de la lengua española[END_REF] The subjunctive is triggered whenever the propositional content of the argument clause is not merely asserted, but located in a space of possibilities. This is the case with modal predicates expressing epistemic or metaphysical possibility or necessity, but also with predicates expressing frequency and with those expressing falsity: 18 Es probable/ usual/ erróneo. be.PRS.IND.3SG likely/ usual/ mistaken que surjan conflictos. that arise.PRS.SBJ.3PL conflicts 'It is likely/ usual/ false that conflicts (should) ensue' Among predicates of propositions, only those that are equivalent to the assertion of the proposition, as for example es verdad/ cierto/ exacto/ seguro 'it is true/ correct/ exact/ sure', consistently select the indicative mood. Note that with modal predicates, the truth of the subjunctive proposition may be entailed in some cases. Together with the implicative subjunctive triggers mentioned above (12), this fact casts some doubt on the role of nonveridicality [START_REF] Giannakidou | The Landscape of Polarity Items[END_REF] in the distribution of the subjunctive. To sum up, subjunctive triggering in intensional contexts is intimately related to the notions of causation and evaluation. Mood selection is usually rigid in such contexts, which is probably an indication of the fact that argument clauses in such configurations cannot escape the scope of the selecting predicate. Note that the more complex scope configurations involved in pseudo-clefts, possibly disrupting subordination, permit the indicative, as in ( 16) above, and that when causal relations do not involve the embedding of an argument clause, no subjunctive is licensed ( 19a-b Possible subjunctive licensors include first and foremost sentential negation, but also nonupward entailing environments, such as contexts containing downward-entailing elements, questions and conditional antecedents [START_REF] Ridruejo | Modo y modalidad. El modo en las subordinadas sustantivas[END_REF][START_REF] Grae | /to appear. Nueva gramática de la lengua española[END_REF] This can be taken to mean that indicative clauses in polarity contexts convey the speaker's endorsement of the truth of the complement. 
The indicative version of ( 21), in which subject of belief and speaker coincide, seems to report contradictory beliefs. By contrast, the subjunctive in polarity contexts does not convey any attitude of the speaker as to the truth of the complement clause: it indicates that the complement clause is under the scope of the propositional attitude verb and the operator affecting it. This scopal dependency of the subjunctive -contrasting with the outscoping effects of the indicative-is further confirmed by the fact that polarity contexts license negative polarity items in subjunctive, but not in indicative complement clauses [START_REF] Bosque | Las bases gramaticales de la alternancia modal[END_REF] Relative clauses As stated above, in polarity contexts the subjunctive indicates that the clause containing it is in the scope of the licensing context. Mood alternation in relative clauses follows an analogous interpretive pattern. The descriptive content of an indicative relative is evaluated in w0 (the world in which non-modalized assertions are evaluated). By contrast, the descriptive content of a subjunctive relative is evaluated in a non-totally realistic modal base contributed by an intensional environment. This explains the well known fact that noun phrases containing subjunctive relatives are typically interpreted non-specifically (23a) or attributively (23b): a. Pidieron un libro que fuera fácil de leer. ask. PRT.IND.3PL a book that be.IMPF.SBJ.3SG easy of read.INF 'They asked for a book that was easy to read' b. Le dieron un libro a cada cliente que him give.PRT.IND.3PL a book to every customer that hubiera gastado más de 10 euros. have. IMPF.SBJ.3SG spend.PP more of 10 euros 'They gave a book to any customer having spent over 10 euros' Non-specific relatives do not entail the existence in w0 of an object verifying the description. Attributive relatives are characterized by the fact that the link between the content of the nominal description and the property denoted by the rest of the sentence is a law-like one, grounded in generalizations that extend to counterfactual cases and usually involve causality. Mood alternation is excluded in appositive relatives. Since these constitute independent subsidiary assertions, they only take the indicative [START_REF] Ridruejo | Modo y modalidad. El modo en las subordinadas sustantivas[END_REF]). 6 The licensing environments for subjunctive relatives share, to a certain extent, the properties of the environments licensing subjunctive argument clauses. As a matter of fact, restrictive relatives contained in subjunctive argument clauses admit themselves the subjunctive [START_REF] Grae | /to appear. Nueva gramática de la lengua española[END_REF] As for subjunctive relatives not contained in subjunctive clausal environments, they are excluded in contexts involving totally realistic modal bases (25a), and they are licensed in modal environments such as those involving bouletic modality (25b), but also in those containing modal verbs (25c), future tense or prospective aspect, or exhibiting a habitual/generic interpretation [START_REF] Quer | Mood at the interface[END_REF][START_REF] Grae | /to appear. Nueva gramática de la lengua española[END_REF] The problem is that, in relative clauses, the subjunctive itself can be the only overt element triggering a non-totally realistic interpretation of the environment. 
Usually, unexpected subjunctives are linked to the possibility of establishing an intentional link between the will of an agent and the descriptive content of the noun phrase, and are thus assimilable to bouletic modality. This is particularly clear in the case of so-called "purpose relatives" exemplified in (26) [START_REF] Ridruejo | Modo y modalidad. El modo en las subordinadas sustantivas[END_REF]), but also extends to subtler cases: 26 Hicieron un cobertizo build PRT.IND.3PL a shed que los protegiera de la lluvia. that them protect IMPF.SBJ.3SG of the rain 'They built a shed as a protection against the rain' Note that such cases are analogous to subjunctive-triggering with implicative verbs, in as far as entailments of existence are not suspended by the subjunctive, which only adds a forwardshifting element of will. Although we have exemplified subjunctive relatives mainly in indefinite nounphrases, all determiners, to the notable exception of demonstratives, are compatible with subjunctive relatives [START_REF] Quer | Mood at the interface[END_REF]. Occasional difficulties with the definite article should probably be attributed to a mismatch between the presuppositions of the article and the descriptive content of the noun phrase. Semantic definites -those in which the unicity presupposition is guaranteed by the descriptive content of the noun phrase, such as superlatives or descriptions containing ordinals-pose no problem for the subjunctive [START_REF] Ridruejo | Modo y modalidad. El modo en las subordinadas sustantivas[END_REF]): 27 Iban a comprar el libro go.IMPF.IND.3PL to buy.INF the book que contuviera #(más) ilustraciones. that contain IMPF.SBJ.3SG more illustrations ' They were going to buy the book with the greatest number of illustrations' Bare plurals (28), but also count singular algún 'some', free choice items, and negative indefinites strongly favor subjunctive relatives [START_REF] Quer | Mood at the interface[END_REF]). In the first case, this is a consequence of the scopal dependency of bare plurals; in the other cases, scopal dependency is reinforced by the fact that the items in question require licensors roughly corresponding to those required by the subjunctive: 28 Buscan libros que search.PRS.IND.3PL books that ??contienen/ contengan ilustraciones. contain.PRS.IND.3PL /contain PRS.SBJ.3PL illustrations 'They are looking for books containing illustrations' Free relatives also clearly favor the subjunctive, possibly as a consequence of the tendency to interpret them attributively and of their proximity to free-choice items [START_REF] Giannakidou | The Landscape of Polarity Items[END_REF][START_REF] Quer | Mood at the interface[END_REF] To sum up, relatives clauses exhibit mood alternation. The subjunctive requires that the descriptive content of the clause be evaluated in a non-totally realistic modal base, which is more often than not guaranteed by its dependence from an intensional context and gives rise to non-specific or attributive readings for the NP containing it. Adverbial and/or adjunct clauses Due to space limitations, only information concerning some prominent types of subjunctive adverbial/adjunct clauses and some limited types of mood alternation will be given in this section. Subjunctive use in these contexts is sensitive to roughly the same type of semantic factors we have been discussing. 
Thus, for instance, purpose clauses (30a), which involve bouletic modality, and clauses negating concomitance (30b), in which the proposition expressed is necessarily under the scope of the negative sin 'without', take the subjunctive. Both types of interclausal relations are expressed by a preposition governing a complement clause (GRAE 2008): 7 30 a. Lo hice para que se enterara. it do.PRT.IND.1SG for that REFL inform.IMPF.SBJ.3SG 'I did it so that he would notice it' b. Lo hice sin que se enterara. it do.PRT.IND.1SG without that REFL inform.IMPF.SBJ.3SG 'I did it without his noticing it' Modern Spanish exhibits the peculiarity that all forward shifted temporal clauseswhose time of evaluation is ordered after the highest anchor Utt-T or Tx -take the subjunctive. This holds of temporal clauses introduced by any syntactic type of subordinating expression, and expressing simultaneity, posteriority or anteriority: 31 Cuando llegue, se lo decimos. when arrive. PRS.SBJ.3SG him/her it tell.PRS.IND.1PL 'When s/he arrives, we'll tell him/her' Some authors classify these uses of the subjunctive as "suppletive" future tenses, but the assumption of a "different" subjunctive seems unwarranted. Furthermore, before-temporal clauses always take the subjunctive (i.e., not only when they are forwardshifted), whereas after-temporal clauses only take it in European Spanish [START_REF] Grae | /to appear. Nueva gramática de la lengua española[END_REF]. Conditional antecedents and subjunctive concessive clauses figure prominently among the contexts in which the temporal contrast between present and imperfect subjunctive forms is reinterpreted, with imperfect subjunctive forms being used for the expression of nonrealistic modal bases. Thus, both (32a-b) and (33a-b) locate the time of the subordinate after resp. before Utt-T. But (32b) The factors linked to the presence of the subjunctive in main clauses parallel those we find in dependent clauses, in as far as they involve evaluation with regard to non-totally realistic modal bases. The conditional The verbal form built on the infinitive/future stem by adding to it the desinences of the imperfect (cantar-ía/ habr-ía cantado) is predominantly classified as a temporal form of the indicative mood. It is not surprising that it should have modal uses: the future and imperfect indicative are known to exhibit a number of modal uses, which clearly predominate over temporal uses in the case of the former, so that it is only to be expected that a form combining the morphology of both tenses will have a still more pronounced modal profile. I would like to suggest, however, that there are good reasons for assuming a split in uses of this form, with some of them corresponding to a tense ("future of the past" and, more interestingly, "past of the future"), and others constituting the mood of choice when non-realistic modal bases are involved, i.e. when w0 is excluded from the domain of quantification in a modal environment. 
In "future of the past" uses, the conditional behaves as a strictly anaphoric tense: it requires a past anchor contributed by a verb of thinking or speaking (41a), which may be implicit in free indirect speech contexts and in so called quotative or evidential uses of the conditional [START_REF] Squartini | The internal structure of evidentiality in Romance[END_REF] What I'd like to label "past of the future" readings are practically equivalent to future perfects in contexts expressing a conjecture [START_REF] Squartini | The internal structure of evidentiality in Romance[END_REF]. Spanish makes abundant use of future morphology for indicating that the propositional content is advanced as a possibility, and not as an unqualified assertion. If the propositional content concerns a time preceding Utt-T, anteriority can be expressed by the future perfect, but also by the conditional: 42 No vino a la fiesta. What the semantics of conditionals, want-verbs, and modals have in common is the fact that they require consideration of non-totally realistic modal bases. It is thus natural to assume that "conditional mood" requires sets of alternative worlds to operate on. To judge from its effects in conditional sentences, what it does in such contexts is to exclude the world of evaluation from the domain of quantification, signaling that w0 does not belong to the modal base. The non-totally realistic modal base contributed by the modal element on which the conditional is grafted becomes a non-realistic modal base. When talking about the past -by means of perfect morphology on the conditional or on an embedded infinitive-non-realistic modal bases result in clearly counterfactual interpretations involving non-realized possibilities or unfulfilled wishes: "Conditional mood" -by contrast with the temporal conditional-is a counterfactual form. As such, it interferes in a number of contexts with imperfect and pluperfect subjunctives, which have been shown to exhibit counterfactual interpretations. As stated above, pluperfect subjunctives compete with perfect conditionals in the consequent of past counterfactuals. The same competition exists with modals and with verbs of wish. This connection is reinforced by the use of the imperfect subjunctive in root clauses containing the modals poder, querer and deber. Notes 1 I am greatly indebted to Ignacio Bosque (GRAE, Madrid). The materials and analyses proposed in GRAE (2008/ to appear) have profoundly influenced my views on the Spanish subjunctive. I gratefully acknowledge the support by the Fédération Typologie et Universaux CNRS for the programm Temporalité: typologie et acquisition. 2 A temporal anchor different from Utt-T, labelled Tx, is introduced, albeit with different characterizations, by [START_REF] Giorgi | Tense and aspect. From semantics to morphosyntax[END_REF] and by [START_REF] Iatridou | The grammatical ingredients of counterfactuality[END_REF]. Adoption of Tx could lead to a more precise formulation of the intuition regarding "inactual" tenses on which [START_REF] Coseriu | Das romanische Verbalsystem[END_REF] based his analysis of the Romance verbal system. 3 Forward-shifting predicates are characterized by the fact that the clauses they introduce are evaluated at a time that cannot precede the matrix time. Volitionals, directives, and verbs of planning belong to his class. 
For a discussion, see [START_REF] Abusch | On the temporal composition of infinitives[END_REF], for an analysis of modal verbs as forward-shifting, see [START_REF] Condoravdi | Temporal interpretations of modals. Modals for the present and for the past[END_REF]. 4 Non-realistic modal bases are domains excluding the world of evaluation (w0). They are contrasted in the text to non-totally realistic modal bases, which contain w0 but are nonsingleton sets of worlds, and to totally realistic modal bases, which are singleton sets whose only member is w0. The latter form the background for factual, non-modalized statements. For a discussion, see [START_REF] Kaufmann | Formal approaches to modality[END_REF], as well as Giorgi & Pianesi (1997: 205-217). 5 Assertions as to rigid subjunctive selection or exclusion should be taken with a pinch of salt whenever the verb involved is a modal [START_REF] Grae | /to appear. Nueva gramática de la lengua española[END_REF], since modals can appear in the indicative in subjunctive-selecting contexts, and in the subjunctive in indicative-selecting contexts. 6 This means that the -ra-forms appearing in appositive relatives in certain registers should be analysed as indicative forms. As for their role in restrictive relatives, it is subject to debate (see [START_REF] Rivero | Especificidad y existencia[END_REF][START_REF] Rivero | Especificidad y existencia[END_REF]). 7 Mood alternation distinguishes purpose (subjunctive) from result clauses (indicative) with prepositional expressions such as de manera/modo/forma tal (que) 'so as/ so that'. 8 A possible exception is that of counterfactual suggestions or wishes, for instance: (i) Me lo hubieras dicho antes. me it have IMPF.SBJ.2SG say.PP before 'You should have told me before' 9 3 rd person imperatives are not usually acknowledged as such in the Spanish descriptive tradition, which assimilates the sentences containing them to desideratives [START_REF] Grae | /to appear. Nueva gramática de la lengua española[END_REF]. However, some of their uses cannot be semantically assimilated to desideratives: (i) Que hagan el menor error, y los denuncio. that make.PRS.SBJ.3SG the least mistake and them report..PRS.IND.1SG 'Let them commit the slightest mistake, and I'll report them' 11 Habrías podido/ tenido que prestar atención. have COND..2SG can.PP/ have PP that lend.INF attention 'You could/ should paid attention' b. Preferiría haberme enterado inmediatamente. prefer. COND.1SG have.INF-me inform immediately 'I'd have rather learnt about it right away' Table 2 : The morphology of the simple forms of the subjunctive 2 IMPF.IND.3SG likely that arrive.PRS.SBJ.3PL / arrive.IMPF.SBJ.3PL to time 'It was likely that they would arrive on time' ): 2 a. Les pidió que llegaran a tiempo. them ask.PRT.IND.3SG that arrive.IMPF.SBJ.3PL to time 'S/he asked them to arrive on time' b. Les pidió que lleguen a tiempo. them ask.PRT.IND.3SG that arrive.PRS.SBJ.3PL to time 'S/he asked them to arrive on time' c. Les pide que lleguen/ *llegaran a tiempo. them ask.PRS.IND.3SG that arrive.PRS.SBJ.3PL /arrive.IMPF.SBJ.3PL to time 'S/he asks them to arrive on time' 3 a. Es probable que lleguen a tiempo. be. PRS.IND.3SG likely that arrive PRS.SBJ.3PL to time 'It's likely that they will arrive on time' b. Es probable que llegaran a tiempo. be. PRS.IND.3SG likely that arrive.IMPF.SBJ.3PL to time 'It is likely that they arrived on time' c. Era probable que *lleguen/ llegaran a tiempo. be. 
me surprise PRS.IND.1SG that it have PRS.SBJ.3PL/ have IMPF.SBJ.3PL seePP 'I'm surprised that they (should) have seen it' b. Me sorprendió que lo hayan/ hubieran visto. me surprise PRT.IND.1SG that it have PRS.SBJ.3PL/ have IMPF.SBJ.3PL seePP 'I was surprised that they had/ should have seen it'However, just like the imperfect subjunctive can locate event-time simultaneously with Utt-T in counterfactually interpreted contexts, the pluperfect subjunctive can express a single anteriority relation, locating the eventuality before Utt-T in such contexts. Pluperfect and perfect do not contrast in temporal location in (8a-b), but the pluperfect versions indicate that w0 is not assumed to be a world in which they arrived on time, whereas the perfect versions merely express epistemic uncertainty as to w0 being or not such a world IMPF.SBJ.3PL / have.PRS.SBJ.3PL arrive.PP to time 'I wish they had / I hope they have arrived on time' b. Si hubieran / han llegado a tiempo... if have.IMPF.SBJ.3PL / have.PRS.IND.3PL arrive.PP to time 'If they had / have arrived on time... a. Me sorprende que lo hayan/ *hubieran visto. 8 a. Ojalá hubieran / hayan llegado a tiempo. hopefully have. insist PRS.IND.3SG in that arrive.PRS.SBJ.3PL/ arrive.PRS.IND.3PL at three 'S/he insists on their arriving/ that they arrive at 3 o'clock' b. Se aseguró de que la puerta estuviera/ REFL make-sure. PRT.IND.3SG of that the door be.IMPF.SBJ.3SG/ estaba cerrada. be. IMPF.IND.3SG closed 'S/he saw to it/ checked that the door was closed' c. Admitió que no le pagaran/ pagaban. admit. PRT.IND.3SG that not him/her pay IMPF.SBJ.3PL/ pay IMPF.IND.3PL a. Insiste en que lleguen/ llegan a las tres. PRT.IND.3SG/ obtain. PRT.IND.3SG/ make PRT.IND.3SG que le pagaran. that him/her pay IMPF.SBJ.3PL 'S/he demanded/ managed to be paid'/ 'S/he made them pay him/her' In some cases, causation alone triggers rigid subjunctive selection. This is the case when a causal relation between two eventualities is established by means of a verbal predicate (13a-b), but also in the complement clauses of nouns and adjectives denoting causal relations: That REFL refuse IMPF.SBJ.3PL to pay--him/her gave place to a quarrel Their refusal to pay him/her caused a quarrel' Emotive-factive predicates express a relationship between an Experiencer and a Stimulus, such that the Stimulus causes a psychological reaction in the Experiencer. They consistently select the subjunctive in their argument clauses: Him/her surprise PRS.IND.3SG that have-PRS.SBJ.3SG arrive.PP late 'S/he is surprised that s/he should have arrived late' Exigió/ Consiguió/ Hizo. demand. 13 a. El mal tiempo explica que llegara tarde. The bad weather explains that arrive.IMPF.SBJ.3SG late 'The bad weather explains his/her late arrival' b. Que se negaran a pagarle dio lugar a una disputa. 14 Le sorprende que haya llegado tarde. PRS.SBJ.3SG arrive.PP/ arrive.PRT.IND.3SG late 'What surpriseshim/her is that s/he (should have) arrived late' ): 16 Lo que le sorprende es que That.N.SG that him/her surprise PRS.IND.3SG be.PRS.IND.3SG that haya llegado/ llegó tarde. have- Subjunctive selection in argument clauses is much less rigid in polarity contexts. ): 19 a. ¿Le molesta si fumo? you/him/her bother.PRS.IND.3SG if smoke PRS.IND.1SG 'Do you/ Does s/he mind if I smoke?' b. Se aburrió porque siempre lo criticaban. REFL annoy.PRT.IND.3SG because always him criticize.IMPF.IND.3PL 'S/he got fed up because he was always being criticized' . 
Thus, the indicative is the only possible choice in (20a), but the subjunctive is allowed in (20b):Mood alternation in polarity contexts produces extremely subtle effects which involve the attitude of the speaker towards the propositional content of the argument clause. Note that first person present negated belief reports select the subjunctive[START_REF] Quer | Mood at the interface[END_REF]): 20 a. Creían/ Afirmaban que Juan believe. IMPF.IND.3PL/ claim. IMPF.IND.3PL that Juan *estuviera/ estaba enfermo. be. IMPF.SBJ.3SG/ be.IMPF.IND.3SG ill 'They believed/ claimed that Juan was ill' b. No creían/ afirmaban que Juan not believe. IMPF.IND.3PL/ claim. IMPF.IND.3PL that Juan estuviera /estaba enfermo. be. IMPF.SBJ.3SG/ be.IMPF.IND.3SG ill They didn't believe/ claim that Juan was ill' 21 No creo que not believe. PRS.IND.1SG that estuviera/ *estaba enfermo. be. IMPF.SBJ.3SG/ be.IMPF.IND.3SG ill 'I don't believe s/he was ill' and (33b) signal that the speaker views Pedro's confession as improbable resp. as contrary to fact: Conditionals and subjunctive concessives show parallel patterns in tense-mood distribution[START_REF] Quer | Mood at the interface[END_REF], with one important exception: conditionals introduced by the conjunction si 'if' never accept present/perfect subjunctive forms (34a). This restriction does not hold of other expressions, as shown by (34b): They hold my hand// We hold his/her hand' Thus, clitic position is held to discriminate between subjunctive and imperative in cases such as le tenga/ téngale, etc.[START_REF] Grae | /to appear. Nueva gramática de la lengua española[END_REF].Apart from certain set expressions and set patterns expressing wishes (39), desiderative sentences require a licensing element preceding the subjunctive. The most usual are the complementizer que and the particle ojalá 'hopefully' illustrated above.Adverbs expressing uncertainty license the subjunctive in main clauses when they precede the verb, but never when they follow it, as shown by the following contrast: a. Aunque Pedro confiese, even-that Pedro confess. PRS.SBJ.3SG a. tenme / tenedme / téngame yo seguiré hold-IMP.2SG-me/ hold-IMP.2PL-me/ hold-IMP.3SG-me negando. I follow.FUT.IND.1SG deny.GER ténganme/ tengámosle la mano 'Even if Pedro confesses, I'll go on denying it. hold-IMP.3PL-me/ hold-IMP.1PL-him/her the hand b. Aunque Pedro confesara, 'Hold my hand / Let's hold his/her hand' even-that Pedro confess.IMPF.SBJ.3SG b. me tienes/ me tenéis / me tenga yo seguiría.COND.1SG negando. me hold-IND.2SG-me/ me hold-IND.2PL-me/ me hold-SBJ.1/3SG I follow deny.GER me tengan/ le tengamos la mano 'Even if Pedro confessed, I would go on denying it' me hold-SBJ.3PL/ him/her hold.SBJ.1PL the hand 33 'You/ I/ S/he/ 39 a. Aunque Pedro haya even-that Pedro have.PRS.SBJ.3SG confess.PP confesado, yo seguiré FUT.IND.1SG negando. I follow deny.GER 'Even if Pedro has confessed, I'll go on denying it' b. Aunque Pedro hubiera confesado, even-that Pedro have.IMPF.SBJ.3SG confess.PP yo seguiría negando. a. Dios te ayude. I follow COND.1SG deny.GER God you help PRS.SBJ.3SG 'Even if Pedro had confessed, I would go on denying it' '(May) God help you' 40 a. Quizás/ Probablemente esté/ está enfermo. perhaps/ probably be.PRS.SBJ.3SG be.PRS.IND.3SG ill 34 a. Si Pedro confiesa/ 'Maybe/ Probably s/he is ill' /*confiese, if Pedro confess.PRS.IND.3SG / confess.PRS.SBJ.3SG b. *Esté/ Está enfermo, quizás/ probablemente. yo también confesaré. 
be.PRS.SBJ.3SG be.PRS.IND.3SG ill maybe/ probably I also 'S/he is ill, maybe/probably' confess. FUT.IND.1SG 'If Pedro confesses, I will confess too' b. En caso de que Pedro *confiesa/ confiese, in case of that Pedro confess.PRS.IND.3SG / confess.PRS.SBJ.3SG yo también confesaré. I also confess. FUT.IND.1SG 'If Pedro confesses, I will confess too' Counterfactual conditionals and subjunctive concessives with an imperfect or a pluperfect subjunctive normally have conditional forms in the main clause. However, there is a marked tendency to replicate a pluperfect subjunctive in the main clause: 35 Si/Aunque hubiera confesado, if even-that have IMPF.SBJ.3SG confess.PP lo habrían/ %hubieran condenado. him have.COND.3PL/ have.IMPF.SBJ.3PL condemn.PP 'If / Even if he had confessed, he would have gotten a sentence' Causal subordinates do not of themselves license the subjunctive (36a). However, under negation, as well as under emotive-factive or evaluative predicates, the subjunctive is . It thus contrasts with prospective aspect, whose past anchor can be contributed by an adverbial (41b) or by the tense of an independent previous sentence not go-out PRT.IND.1PL because rain COND.3SG / go IMPF.IND.3SG to rain 'We didn't go out because it would rain/ was going to rain' (41c): 41 a. Pensó/ Afirmó que llovería / iba a llover. think.PRT.IND.3SG/claim PRT..3SG that rain.COND.3SG/ go.IMPF.IND.3SG to rain 'S/he thought/claimed that it would rain/ it was going to rain' b. Ayer *llovería / iba a llover. yesterday rain. COND.3SG / go IMPF.IND.3SG to rain 'Yesterday it would rain/ was going to rain' c. No salimos porque *llovería / iba a llover. not come PRT.IND.3SG to the party Estaría/ Habrá estado enfermo. be.COND.3SG/ have.FUT.IND.3SG be.PP ill 'He didn't come to the party. He might have been ill' Modal uses of the conditional, on the other hand, are only licensed in a particular subset of modal environments, comprising (a) modal verbs; (b) verbs expressing wishes or preferences; (c) the consequent of counterfactual or hypothetical conditional sentences (Laca 2006). 43 a. Podrías/ Tendrías que prestar atención. can.COND.2SG/ have. COND.2SG that lend.INF attention 'You could/ should pay attention' b. Querría/ Preferiría/ Me gustaría want. COND.1SG/ prefer.COND.1SG/ me like.COND.3SG que prestaras atención. that lend.IMPF.SBJ.2SG attention 'I wish/ I'd prefer you would pay attention/ I'd like it for you to pay attention' c. Si te importara, prestarías atención. if you.DAT mind. IMPF.SBJ.3SG lend.COND..2SG attention 'If you minded, you would pay attention' Root contexts In main clauses, the subjunctive invariably signals that the propositional content is not being asserted. This is the case in directive (37a) and desiderative sentences (37b), but also in sentences expressing some forms of epistemic modality (37c Note that in all cases, the subjunctive requires a licensor that precedes it: negation in (37a), the complementizer que or the particle ojalá (37b), and an adverb in ((37c). 8 The subjunctive alternates with the imperative in directives. Negative directives cannot be expressed in the imperative, and 3 rd pers. imperatives are indistinguishable from subjunctive forms. 9 Since the politeness form of address is a 3 rd pers. form, and the only form for plural addressees in American Spanish is the politeness form, this leads to considerable overlap between imperative and subjunctive. 
Table 3 shows that there are only two distinct forms for the imperative in European Spanish, and only one in American Spanish : Although the wisdom of maintaining a separate mood for two, resp. one distinct inflection may be questioned, imperative sentences not introduced by negation or by a complementizer share with infinitives and gerunds the peculiarity of not allowing proclitics:
51,445
[ "9859" ]
[ "204862" ]
01756773
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756773/file/ICRA2018_FinalVersion.pdf
Banglei Guan email: banglei.guan@hotmail.com Pascal Vasseur email: pascal.vasseur@univ-rouen.fr Cédric Demonceaux email: cedric.demonceaux@u-bourgogne.fr Friedrich Fraundorfer email: fraundorfer@icg.tugraz.at Visual odometry using a homography formulation with decoupled rotation and translation estimation using minimal solutions In this paper we present minimal solutions for two-view relative motion estimation based on a homography formulation. By assuming a known vertical direction (e.g. from an IMU) and assuming a dominant ground plane we demonstrate that rotation and translation estimation can be decoupled. This result allows us to reduce the number of point matches needed to compute a motion hypothesis. We then derive different algorithms based on this decoupling that allow an efficient estimation. We also demonstrate how these algorithms can be used efficiently to compute an optimal inlier set using exhaustive search or histogram voting instead of a traditional RANSAC step. Our methods are evaluated on synthetic data and on the KITTI data set, demonstrating that our methods are well suited for visual odometry in road driving scenarios. I. INTRODUCTION Visual odometry and visual SLAM [START_REF] Scaramuzza | Visual odometry [tutorial] part1: The first 30 years and fundamentals[END_REF] play an immensely important role for mobile robotics. Many different approaches for visual odometry have been proposed already, and for a wide variety of applications visual odometry has been used successfully. However, reliability, long-term stability and accuracy of visual odometry algorithms are still a topic of research as can be seen by the many contributions to the KITTI visual odometry benchmark [START_REF] Geiger | Are we ready for autonomous driving? the kitti vision benchmark suite[END_REF]. Most approaches for visual odometry follow the scheme where first feature correspondences between subsequent views are established, then they are screened for outliers and then egomotion estimation is done on inliers only [START_REF] Scaramuzza | Visual odometry [tutorial] part1: The first 30 years and fundamentals[END_REF]. The reliability and robustness of such a scheme is heavily dependent on the outlier screening step. In addition the outlier screening process has to be fast and efficient. The use of RANSAC [START_REF] Fischler | RANSAC random sampling concensus: A paradigm for model fitting with applications to image analysis and automated cartography[END_REF] is widely accepted for this step. However, the complexity of the RANSAC process being exponentially related to the minimal number of points necessary for the solution estimation, reducing this number is very interesting. For instance, a standard solution for twoviews egomotion estimation is to use the essential matrix with 5 matching points [START_REF] Nistér | An efficient solution to the five-point relative pose problem[END_REF] in a RANSAC process to increase the robustness. Nevertheless, the number of points needed for estimating the parameters is really crucial for the RANSAC algorithm. Indeed, the runtime of the RANSAC increases exponentially according to the number of points we need. Thus, before estimating the parameters, we have to be sure that we use the minimal number of points for that. One such idea is to take motion constraints into account, e.g. 
a planar motion (2pt algorithm) or the Ackermann steering motion for self-driving cars (1pt algorithm [START_REF] Scaramuzza | Real-time monocular visual odometry for on-road vehicles with 1point ransac[END_REF]). Another idea is to utilize an additional sensor like an inertial measurement unit to improve this step. Traditional sensor fusion methods [START_REF] Weiss | Real-time metric state estimation for modular vision-inertial systems[END_REF] perform a late fusion of the individual vision and IMU measurements. However, it is possible to utilize the IMU measurements much earlier to aid the visual odometry algorithm for outlier screening. This idea has already been utilized in [START_REF] Fraundorfer | A minimal case solution to the calibrated relative pose problem for the case of two known orientation angles[END_REF], [START_REF] Gim | Relative pose estimation for a multi-camera system with known vertical direction[END_REF], [START_REF] Naroditsky | Two efficient solutions for visual odometry using directional correspondence[END_REF], [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF] in which partial IMU measurements have been used to design more efficient motion estimation algorithms for outlier screening. In this paper we follow this idea by proposing a low complexity algorithm for unconstrained two-view motion estimation that can be used for efficient outlier screening and initial motion estimation. Our method assumes a known gravity vector (measured by an IMU) and is based on a homography relation between two views. In [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF] a 2pt algorithm has been proposed exactly for this case. In this work we will improve on [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF] and show that actually an algorithm can be found that needs fewer than 2 data points for a motion hypothesis. To achieve this, the first step is to separate the rotation and translation estimation. This is possible if the scene contains features that are far away. Such features are only influenced by rotation and only the x-coordinate of a single feature point is sufficient to find the remaining rotational degree of freedom (DOF), so we call this the 0.5pt method. After this the remaining 3 DOFs for the translation t x , t y , t z are computed. We present a linear solution that needs 1.5pt correspondences. However, more important is our proposal of using a discrete sampling for determining one of the remaining parameters and then use a 1pt algorithm for the remaining 2 parameters. This makes it possible to completely determine a motion hypothesis from a single point correspondence. Thus, we obtain an extremely fast algorithm even within a RANSAC loop. The actual motion hypotheses can be computed exhaustively for each point correspondence and the best solution can be found by a voting scheme. The proposed methods are evaluated experimentally on synthetic and real data sets. We test the algorithms under different image noise and IMU measurement noise. We demonstrate the proposed algorithms on KITTI [START_REF] Geiger | Are we ready for autonomous driving? the kitti vision benchmark suite[END_REF] data set and evaluate the accuracy compared to the ground truth. 
These experiments also demonstrate that the assumptions taken hold very well in practice and the results on the KITTI data set show that the proposed methods are useful within the self-driving car context. II. RELATED WORK With known intrinsic parameters, a minimum of 5 point correspondences is sufficient to estimate the essential matrix [START_REF] Nistér | An efficient solution to the five-point relative pose problem[END_REF], and a minimum of 4 point correspondences is required to estimate the homography if all the 3D points lie on a plane [START_REF] Hartley | Multiple View Geometry in Computer Vision[END_REF]. Then the essential matrix or the homography can be decomposed into the motion of the camera between two views, i.e. a relative rotation and translation direction. A reduction of the number of needed point correspondences between views is important in terms of computational efficiency and of robustness and reliability. Such a reduction is possible if some additional information is available or assumptions about the scene and camera motion are taken. If for instance the motion is constrained to be on a plane, which is typical for ground based robots or self-driving cars, 2 point correspondences are only needed for computing the 3-DOFs motion [START_REF] Ortin | Indoor robot motion based on monocular images[END_REF]. If further the motion is constrained by Ackermann steering typical for cars only 1 point correspondence is necessary [START_REF] Scaramuzza | Real-time monocular visual odometry for on-road vehicles with 1point ransac[END_REF]. In contrast if additional information e.g. from an IMU is available and the complete rotation between the two views is provided by the IMU, the remaining translation can be recovered up to scale using only 2 points [START_REF] Kneip | Robust real-time visual odometry with a single camera and an imu[END_REF]. Using this concept a variety of algorithms have recently been proposed for egomotion estimation when knowing a common direction [START_REF] Fraundorfer | A minimal case solution to the calibrated relative pose problem for the case of two known orientation angles[END_REF], [START_REF] Kalantari | A new solution to the relative orientation problem using only 3 points and the vertical direction[END_REF], [START_REF] Naroditsky | Two efficient solutions for visual odometry using directional correspondence[END_REF]. The common direction between the two views can be given by an IMU (measuring the gravity direction) or by vanishing points extraction in the images. All these works propose different algorithms for solving the essential matrix with 3 point correspondences. For this they start with a simplified essential matrix (due to the known common direction) and then derive a polynomial equation system for the solution. To further reduce the number of point correspondences, the homography relation between two views can be used instead of the epipolar constraint expressed by the essential matrix. Under the assumption that the scene contains a large enough plane that is normal or parallel to the gravity vector measured by an IMU (a typical case for indoor or road driving scenarios) the egomotion can be computed from 2 point correspondences [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF]. This idea however can be extended even further which is what we propose in this work. 
We start with the formulation of [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF] where the cameras are aligned to the gravity vector and the remaining DOFs are one rotation parameter and three translation parameters. We solve for rotation and translation separately. This uses the fact that for far scene points, the parallax shift (induced by translation) between two views is hardly noticeable. The motion of these far points is close enough to a pure rotation such that the rotation between two views can be estimated first (and independently of translation) using these far points. Every single point correspondence can produce a hypothesis for the remaining rotation parameter, which can be used in a 1pt RANSAC algorithm for rotation estimation or for histogram voting by computing the hypothesis from all the point matches. This step also allows us to separate the correspondences into two sets, a far set and a near set. The further processing for the translation estimation can then be continued on the smaller near set only, as the effect of translation is not noticeable in the far set. Such a configuration is typical for road driving imagery. For estimating the remaining translation parameters we propose a linear 1.5pt algorithm. However, practically this solution does not give a direct computational advantage over the 2pt algorithm of Saurer et al. [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF]. Instead we propose to use a combination of discrete sampling and parameter estimation. The idea is that 1 parameter of the remaining 3 is sampled in discrete steps. For each sampled value it is possible to estimate the remaining parameters from a single point correspondence. This can then be done efficiently using a 1pt RANSAC step or an exhaustive search to find the globally optimal value. The great benefit of this approach is that instead of performing 2pt RANSAC, a sequence of 1pt RANSAC steps with a constant overhead for bounded discrete sampling is used. This exhaustive search gives us an efficient way to find the globally optimal solution.

III. BASICS AND NOTATIONS

With known intrinsic camera parameters, a general homography relation between two different views is represented as follows [START_REF] Hartley | Multiple View Geometry in Computer Vision[END_REF]:

λ x_j = H x_i ,    (1)

where x_i = [x_i, y_i, 1]^T and x_j = [x_j, y_j, 1]^T are the normalized homogeneous image coordinates of the points in views i and j, and λ is a scale factor. The homography matrix H is given by:

H = R − (1/d) t N^T ,    (2)

where R = R_y R_x R_z and t = [t_x, t_y, t_z] are respectively the rotation and the translation from view i to view j. R_y, R_x and R_z are the rotation matrices about the y-, x- and z-axis, respectively. With knowledge of the vertical direction, the rotation matrix R can be simplified to R = R_y by pre-rotating the feature points with R_x R_z, which can be measured by the IMU (or alternatively obtained from vanishing points [START_REF] Bazin | Motion estimation by decoupling rotation and translation in catadioptric vision[END_REF]). After this rotation, the cameras are in a configuration such that the camera plane is perpendicular to the ground plane. d is the distance between the view i frame and the 3D plane. N = [n_1, n_2, n_3]^T is the unit normal vector of the 3D plane with respect to the view i frame. For this gravity-aligned camera configuration the plane normal N of the ground plane is [0, 1, 0]^T.
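As a rough illustration of this alignment step (a sketch added here, not taken from the authors' implementation; the helper name align_with_gravity and the roll/pitch sign conventions are assumptions), the normalization with the inverse calibration matrix and the pre-rotation by R_x R_z could look as follows in Python/numpy:

```python
import numpy as np

def rot_x(a):
    """Rotation about the x-axis (pitch), angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    """Rotation about the z-axis (roll), angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def align_with_gravity(pts_px, K, roll, pitch):
    """Normalize pixel coordinates with K^-1 and pre-rotate them with R_x R_z
    (roll/pitch measured by the IMU for this view), so that only the yaw
    rotation R_y and the translation remain and the ground-plane normal
    becomes N = [0, 1, 0]^T.  The exact sign/order of the two rotations
    depends on the IMU and camera conventions and is an assumption here."""
    pts_h = np.column_stack([pts_px, np.ones(len(pts_px))])  # (N, 3) homogeneous pixels
    rays = np.linalg.inv(K) @ pts_h.T                        # normalized image coordinates
    rays = rot_x(pitch) @ rot_z(roll) @ rays                 # gravity alignment
    return (rays[:2] / rays[2]).T                            # back to inhomogeneous (N, 2)
```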
Consequently, Equation 2, which only considers points on the ground plane, can be written as:

H = [cos θ, 0, sin θ; 0, 1, 0; −sin θ, 0, cos θ] − (1/d) t N^T ,  N = [0, 1, 0]^T    (3)

The homography relation is defined up to scale, so there are 4 DOFs remaining, namely the rotation around the y-axis and the translation parameters t_x, t_y, t_z.

IV. 0.5PT ROTATION ESTIMATION METHOD

The rotation angle can be computed from a single point correspondence. In fact, it can be computed from only the x-coordinate of a single feature point. Rotation estimation can be done independently from the translation estimation if the scene contains far points. These points, which can be considered at infinity, are no longer affected by a translation. Consequently, the translation component is zero for these points and Equation 3 simplifies to a rotation matrix:

H = [cos θ, 0, sin θ; 0, 1, 0; −sin θ, 0, cos θ]    (4)

In order to further eliminate the unknown scale factor λ, multiplying both sides of Equation 1 by the skew-symmetric matrix [x_j]_× yields the equation:

[x_j]_× H x_i = 0.    (5)

Substituting Equation 4 into the above equation and expanding it:

[0, −1, y_j; 1, 0, −x_j; −y_j, x_j, 0] [cos θ, 0, sin θ; 0, 1, 0; −sin θ, 0, cos θ] [x_i, y_i, 1]^T = 0    (6)

By rewriting the equation, we obtain:

−x_i y_j sin θ + y_j cos θ − y_i = 0    (7)
(x_i x_j + 1) sin θ + (x_i − x_j) cos θ = 0    (8)
−y_j sin θ − x_i y_j cos θ + x_j y_i = 0    (9)

These equations are now derived for the case of points that lie on a plane normal to the y-axis of the cameras and which are infinitely far away. In this case, all the points have to lie on the horizon, which means that the y-coordinate of such a point in normalized coordinates is 0 (normalized coordinates are image coordinates in pixels after multiplication with the inverse calibration matrix). This makes Equation 7 and Equation 9 trivially zero. Only Equation 8 remains to be used. Considering the trigonometric constraint sin²(θ) + cos²(θ) = 1, the rotation parameter sin(θ) can be obtained:

sin(θ) = ± (x_i − x_j) / sqrt(x_i² x_j² + x_i² + x_j² + 1)    (10)

Due to the sign ambiguity of sin(θ), we obtain two possible solutions for the rotation angle. For every point correspondence a rotation hypothesis can be calculated. A 1pt RANSAC loop can be utilized to find a consistent hypothesis with only a few samples. Alternatively, the globally optimal solution can be computed by performing an exhaustive search or histogram voting. The exhaustive search is linear in the number of point correspondences and a hypothesis can be computed for every point correspondence. The hypothesis with the maximum number of inliers is the globally optimal solution. To avoid computing the inliers and outliers for every hypothesis, a histogram voting method can be used. For this, all the hypotheses are collected in a histogram with discrete bins (e.g. a bin size of 0.1 degree) and the bin with the maximum count is selected as the best solution. Alternatively, the mean of a window around the peak can be computed for a more accurate result. The inliers of the pure rotation formulation belong to scene points that are very far away and do not influence the translation. For further translation estimation these point correspondences can be removed to reduce the number of data points to process. Translation estimation only needs to consider the outlier set of the rotation estimation.
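As a small, hedged sketch of this step (not the authors' code; the function names are illustrative and the inputs are assumed to be normalized, gravity-aligned coordinates of far points with y ≈ 0), each correspondence yields one yaw hypothesis via Equation 8 — using atan2 instead of Equation 10 resolves the sign ambiguity — and histogram voting picks the dominant value:

```python
import numpy as np

def yaw_hypotheses_05pt(x_i, x_j):
    """One yaw hypothesis per far-point correspondence, from x-coordinates only.
    From Eq. (8): tan(theta) = (x_j - x_i) / (1 + x_i * x_j)."""
    return np.arctan2(x_j - x_i, 1.0 + x_i * x_j)

def vote_rotation(thetas, bin_deg=0.1):
    """Histogram voting over all hypotheses; returns the centre of the winning bin."""
    thetas = np.asarray(thetas)
    span = thetas.max() - thetas.min()
    n_bins = max(1, int(np.ceil(span / np.radians(bin_deg))))
    hist, edges = np.histogram(thetas, bins=n_bins)
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])
```

V.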
TRANSLATION ESTIMATION METHOD After estimation of the rotation parameter as described in the previous section, the feature points in views j can be rotated by the rotation matrix around the yaw axis: xj = R T y x j , (11) This aligns both views such that they only differ in translation t = [ tx , ty , tz ] for x i ↔ xj . Equation 3 therefore is written as: H =   1 0 0 0 1 0 0 0 1   - 1 d   tx ty tz     0 1 0   T (12) In the following subsections we describe 4 different methods of how to estimate the translation parameters. A. 1.5pt linear solution This subsection describes a linear method to compute the remaining translation parameters. Three equations from two point correspondences are used to set up a linear equation system to solve for the translation. The camera-plane distance d is unknown but the translation can be known only up to scale. Therefore d can be absorbed by t. We then obtain: H =   1 -tx 0 0 1 -ty 0 0 -tz 1   (13) Substituting Equation 13into the Equation 5, the homography constraints between x i and xj can be expressed:   0 -1 ỹj 1 0 -x j -ỹ j xj 0     1 -tx 0 0 1 -ty 0 0 -tz 1     x i y i 1   = 0 (14) By rewriting the equation, we obtain:      y i ty -y i ỹj tz = y i -ỹj -y i tx + xj y i tz = xj -x i y i ỹj tx -xj y i ty = x i ỹj -xj y i (15) Even though Equation 15has three rows, it only imposes two independent constraints on t, because the skewsymmetric matrix [x j ] × has only rank 2. To solve for the 3 unknowns of t = [ tx , ty , tz ], one more equation is required which has to be taken from a second point correspondence. In principle, an arbitrary equation can be chosen from Equation 15, for example, the second and third rows of the first point x i ↔ xj , and the second row of the second point x i ↔ x j are stacked into 3 equations in 3 unknowns:      -y i tx + xj y i tz = xj -x i y i ỹj tx -xj y i ty = x i ỹj -xj y i -y i tx + x j y i tz = x j -x i (16) The linear solution for t = [ tx , ty , tz ] can be obtained by:                    tx = x i x j y i + xj y i x j -xj x j y i -xj y i x i y i x j y i -xj y i y i ty = y i x j y i + y i ỹj x j + x i ỹj y i -y i ỹj x i -xj y i y i -ỹj x j y i y i x j y i -xj y i y i tz = x i y i + y i x j -xj y i -y i x i y i x j y i -xj y i y i (17) The translation t from views i to j can be obtained: t = R T y t. (18) For finding the best fitting translation t a RANSAC step should be used. Here it is however possible to evaluate the full point set. From the estimated rotation and translation parameters an essential matrix can be constructed (E = [t] × R y ) and the inliers can be tested against the epipolar geometry. This test is not limited to points on the ground plane and the final inlier set contains all scene points. For a most accurate result a non-linear optimization of the Sampson distance on all the inliers is advised. The techniques of constructing the epipolar geometry and performing the nonlinear optimization are also applicable to other translation estimation methods. B. 1pt method by discrete sampling of the relative height change Translation estimation as explained in the previous section needs 1.5pt correspondences. However, if one of the remaining parameters is known only a single point correspondence is needed for computing the remaining two parameters. This leads to 1pt algorithm for the translation. It is possible to perform a discrete sampling of a suitable parameter within a suitable bounded range. 
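Before turning to the sampling-based variants, the linear solution of Sec. V-A can be sketched as follows (a hedged illustration, not the authors' implementation; rather than the closed-form expressions of Equation 17, it solves the stacked 3×3 system of Equation 16 numerically, and the helper name is an assumption):

```python
import numpy as np

def translation_15pt(p1_i, p1_j, p2_i, p2_j):
    """Recover the translation direction from 1.5 ground-plane correspondences.
    Inputs are normalized, yaw-compensated coordinates (x_i, y_i) <-> (x~_j, y~_j),
    with y_i != 0 (near points on the ground plane).  Rows follow Eq. (15):
    rows 2 and 3 of the first point and row 2 of the second point (Eq. (16))."""
    xi, yi = p1_i
    xj, yj = p1_j
    ui, vi = p2_i
    uj, _ = p2_j
    A = np.array([
        [-yi,      0.0,      xj * yi],   # row 2 of Eq. (15), point 1
        [yi * yj, -xj * yi,  0.0    ],   # row 3 of Eq. (15), point 1
        [-vi,      0.0,      uj * vi],   # row 2 of Eq. (15), point 2
    ])
    b = np.array([xj - xi,
                  xi * yj - xj * yi,
                  uj - ui])
    t_tilde = np.linalg.solve(A, b)          # (tx~, ty~, tz~), plane distance d absorbed
    return t_tilde / np.linalg.norm(t_tilde) # direction only; rotate back with R_y^T (Eq. 18)
```

The sampling-based variants below avoid the second correspondence altogether.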
This allows to perform an exhaustive search to find the global optimal solution which produces the highest number of inliers. The time complexity of this exhaustive search is linear in the number of point correspondences if the number of discrete samples is significantly smaller than the number of point correspondences. In order to use the sampling method, Equation 12 can be written as: H =   1 0 0 0 1 0 0 0 1   - ty d   tx / ty 1 tz / ty     0 1 0   T (19) In this variant the relative height change over ground a = ty /d is sampled in discrete steps which leads to an equation system with only 2 unknowns, b = tx / ty , c = tz / ty . Only 1pt is needed to compute a solution b and c for a given a. In the same way, we can choose the second and third row from Equation 15to compute b and c:        b = ax j y i + x i ỹj -xj y i ay i ỹj c = aby i + xj -x i ax j y i (20) Based on the value of b and c, we can recover t = [b, 1, c] T up to scale. Then the estimated translation t between views i and j is also recovered up to scale by Equation 18. C. 1pt method by discrete sampling for x-z translation direction The sampling method in the previous section worked by discretizing the relative height change between two views. As this can be up to scale there is no obvious value for the step sizes and bounds. However, if one is to sample the direction vector of the translation in the x-z plane this represents the discretization of an angle between 0...360 degrees. In this case a meaningful step size can easily be defined. For this variant Equation 12 can be written as: H =   1 0 0 0 1 0 0 0 1   - t2 x + t2 z d   cos(δ) ty / t2 x + t2 z sin(δ)     0 1 0   T (21) The translation direction can be represented as an angle δ and e.g. can be sampled in steps of 1 degree from 0 • to 360 • , which leads to an equation system with only 2 unknowns, a = t2 x + t2 z /d, b = ty / t2 x + t2 z . Only 1pt is needed to compute a solution a and b for a given angle δ. In the same way, we can choose the second and third row from Equation 15to compute a and b:        a = xj -x i xj y i sin(δ) -y i cos(δ) b = ay i ỹi cos(δ) + xj y i -x i ỹj ax j y i (22) Based on the vector [cos(δ), b, sin(δ)] T , we can recover t up to scale. Then we obtain t by Equation 18. D. 1pt method by discrete sampling of the in-plane scale change The method described in this section is another variant of choosing a meaningful parameter for discrete sampling. For easy explanation of this idea one should imagine a camera setup with downward looking cameras with a camera plane parallel to the ground plane. The previously aligned camera setup with the camera plane normal to the ground plane can easily be transformed into such a setup by rotating the feature points about 90 • around the x-axis of camera by multiplication with a rotation matrix R d . Moving a camera looking downwards at a plane (e.g. the street) up and down results in an in-plane scale change of the image, i.e. the points will move inwards to or outwards from the center. The scale change directly corresponds to the effects of a translation in z-direction. This makes the scale change a good parameter for discrete sampling, as the in-plane scale change can be expressed in pixel distances. In this approach discrete values for the scale change are sampled and the remaining translation direction in the x-y plane can be computed for every point correspondence from a single point. 
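The two samplers just derived (Secs. V-B and V-C) reduce to a few lines once the sampled parameter is fixed. The following sketch (again an illustration with assumed helper names, not the authors' code; it presumes yaw-compensated, normalized ground-plane points with y_i ≠ 0 and a non-degenerate sampled value) implements Equations 20 and 22; note that the second expression uses ỹ_j, which is what the derivation from Equation 15 yields:

```python
import numpy as np

def translation_1pt_height(a, p_i, p_j):
    """Sec. V-B: for a sampled relative height change a = ty~/d, recover
    b = tx~/ty~ and c = tz~/ty~ from one correspondence (Eq. (20));
    the translation direction is [b, 1, c] up to scale."""
    xi, yi = p_i
    xj, yj = p_j
    b = (a * xj * yi + xi * yj - xj * yi) / (a * yi * yj)
    c = (a * b * yi + xj - xi) / (a * xj * yi)
    t = np.array([b, 1.0, c])
    return t / np.linalg.norm(t)

def translation_1pt_direction(delta, p_i, p_j):
    """Sec. V-C: for a sampled x-z direction angle delta, recover a and b
    (Eq. (22)); the translation direction is [cos(delta), b, sin(delta)] up to scale."""
    xi, yi = p_i
    xj, yj = p_j
    a = (xj - xi) / (xj * yi * np.sin(delta) - yi * np.cos(delta))
    b = (a * yi * yj * np.cos(delta) + xj * yi - xi * yj) / (a * xj * yi)
    t = np.array([np.cos(delta), b, np.sin(delta)])
    return t / np.linalg.norm(t)
```

In a full pipeline, delta (or a) is swept over its bounded range, each hypothesis is scored by its epipolar inlier count via E = [t]_× R_y, and the best-scoring hypothesis is kept. The in-plane scale variant described next follows the same pattern with a third choice of sampled parameter.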
Points lying on the same plane, have exactly the same translation shift for all the feature matches. Now, we derive the formula in detail. We assume that (x i , y i , 1) ↔ (x j , y j , 1) are the normalized homogeneous image coordinates of the points in downward looking views i and j. The heights of the downward looking views i and j are h i and h j , respectively. The ground points can be represented in the camera coordinate system of views i and j: X i = h i * [x i , y i , 1] T , X j = h j * [x j , y j , 1] T . The translation between views i and j can be computed directly: td = h j   x j y j 1   -h i   x i y i 1   =   (x j h j -x i h i ) (y j h j -y i h i ) h j -h i   (23) We set the in-plane scale κ = f * (h i /h j ), f is the focal length. We substitute κ into the above equation. We can obtain the translation vector directly as the difference of the image coordinates: td =   f x j -κx i f y j -κy i f -κ   (24) By sampling the in-plane scale κ, we can compute the translation vector td using one point, and choose the solution which has the maximum number of inliers. This sampling interval is defined in pixels and allows setting a meaningful step size (e.g. 1 pixel). The final translation t between two views can be obtained: t = R T y R T d td (25) VI. EXPERIMENTS We validate the performance of the proposed methods using both synthetic and real scene data. The experiments with synthetic scenes will demonstrate the behavior of our derivations in the presence of image noise and IMU noise. The experiments using the KITTI data set [START_REF] Geiger | Are we ready for autonomous driving? the kitti vision benchmark suite[END_REF] will demonstrate the suitability of the methods for use in road driving scenarios. This experiments will also demonstrate that the assumptions taken will hold for real scenarios. A. Experiments with synthetic data To evaluate the algorithms on synthetic data we choose the following setup. The distance of the ground to the first camera center is set to 1. The baseline between two cameras is set to be 0.2 and the direction is either along the x-axis of the first camera (sideways) or along the z-axis of the first camera (forward). Additionally, the second camera is rotated around every axis, three rotation angles varies from -90 • to 90 • . The Roll angle (around z-axis) and Pitch angle (around x-axis) are known. The generated scene points can be set to lie on the ground plane or be distributed freely in space. We evaluate the accuracy of the presented algorithms on synthetic data under different image noise and IMU noise. The focal length is set to 1000 pixels. The solutions for relative rotation and translation are obtained by RANSAC or histogram voting. We assess the rotation and translation error by the root mean square error of the errors. We report the results on the data points within the first two intervals of a 5quantile partitioning 1 (Quintile) of 1000 trials. The proposed methods are also compared against the 2pt method [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF]. In all of the experiments, we compare the relative rotation and translation between views i and j separately. The error measure compares the angle difference between the true rotation and estimated rotation. Since the estimated translation between views i and j is only known up to scale, we compare the angle difference between the true translation and estimated translation. 
The errors are computed as follows:
• Rotation error: ξ_R = arccos((Tr(R_gt R^T) − 1)/2)
• Translation error: ξ_t = arccos(t_gt^T t / (‖t_gt‖ ‖t‖))
R_gt, t_gt denote the ground-truth transformation and R, t are the corresponding estimated transformations.
1) Pure planar scene setting ("PLANAR"): In this setting the generated scene points are constrained to lie on the ground plane. The scene points consist of two parts: near points (0 to 5 meters) and far points (5 to 500 meters). Both parts have 200 randomly generated points. Figure 1(a) and (b) show the results of the 0.5pt method and histogram voting for rotation estimation for gradually increased image noise levels with perfect IMU data. It is interesting to see that our method performs better for forward motion than for sideways motion. The histogram voting has a higher error because of the binning, which has a stronger effect than the image noise. Figure 1(b) does not show a clear trend with increased image noise levels; it seems that the influence of the sideways motion is stronger than the influence of the image noise. Figure 1(c)-(f) show the influence of increasing noise on the IMU data while assuming image noise with 0.5 pixel standard deviation. Figure 2 shows the results of the 1.5pt linear solution method, the 1pt method by sampling for x-z translation direction and the 2pt method, for gradually increased image noise levels with perfect IMU data or increasing noise on the IMU data while assuming image noise with 0.5 pixel standard deviation. Note that we use the histogram voting method to compute the rotation first, in order to compare the accuracy of the different translation estimation methods. The 1.5pt linear solution method and the 1pt method by sampling for x-z translation direction are robust to the increased image noise and IMU data noise.
2) Mixed scene setting ("MIXED"): In this experiment not all the scene points were constrained to lie on the ground plane. Far points (from 5 to 500 meters distance) do not lie on the ground only; they are generated at heights that vary from 0 to 10 meters. The near points, however (from 0 to 5 meters distance), are constrained to lie on the ground plane. Both sets, near points and far points, consist of 200 randomly generated points. Figure 3 shows the results of the 0.5pt method and histogram voting for rotation estimation for gradually increased image noise levels with perfect IMU data, or increasing noise on the IMU data while assuming image noise with 0.5 pixel standard deviation. Figure 4 shows the results of the 1.5pt linear solution method, the 1pt method by sampling for x-z translation direction and the 2pt method, for gradually increased image noise levels with perfect IMU data or increasing noise on the IMU data while assuming image noise with 0.5 pixel standard deviation. In Figure 4(f) a sensitivity of the 1.5pt linear solution method to noise in the Pitch angle can be seen for sideways motion. The experiments on synthetic data validate the derivation of the minimal solution solvers and quantify the stability of the solvers with respect to image noise and IMU noise. The synthetic data did not contain outliers; the use of the methods within a RANSAC loop for outlier detection is part of the experiments using real data.

B. Experiments on real data

Experiments on real data were performed on the KITTI data set [START_REF] Geiger | Are we ready for autonomous driving? the kitti vision benchmark suite[END_REF].
For the evaluation we utilized all the available 11 sequences which have ground truth data (labeled from 0 to 10 on the KITTI webpage) and together consist of around 23000 images. The KITTI data set provides a challenging environment for our experiments; however, such a road driving scenario fits our method very well. In all the images a large scene plane is visible (the road) and features at far distances are present as well. For our experiments we performed SIFT feature matching [START_REF] Lowe | Distinctive image features from scale-invariant keypoints[END_REF] between consecutive frames. The ground truth data of the sequences is used to pre-rotate the feature points by R_x R_z, basically simulating IMU measurements. Then the remaining relative rotation and translation are estimated with our methods. We perform 3 sets of experiments with the KITTI data set. In a first experiment we test the effectiveness of our proposed quick test for rotation inliers. In the second experiment we compute rotation and translation using all our proposed methods and compare them to the ground truth. We also compare the results to the 5pt method [START_REF] Nistér | An efficient solution to the five-point relative pose problem[END_REF] and 2pt method [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF]. In a third experiment we test the quality of the inlier detection by using the different methods.
1) Rotation estimation inlier selection using y-coordinate test: The 0.5pt method for rotation estimation works under the assumption that a scene point is far away. Using RANSAC or histogram voting the inliers of this assumption can be found. However, even before computing rotation hypotheses with the 0.5pt method, the point correspondences can already be checked for whether they stem from far points, in order to remove all the near points. For a far point in this setting the y-coordinate of a point feature does not change. So any point correspondence where the y-coordinate changes is a near feature that can be discarded for the rotation estimation. This allows a large part of the feature points to be removed, making the rotation estimation more efficient. Table I shows how many of the feature points can be removed based on this simple criterion. For this test a feature was classified as a far feature if its y-coordinate does not change by more than 1 pixel. NumSIFT is the average number of point correspondences within a sequence, NumRemove is the average number of outliers removed, and Ratio = NumRemove/NumSIFT is the average percentage of removed outliers. It can be seen that more than 52% of the feature matches can be removed using this criterion.
2) Comparison of rotation and translation estimation to ground truth: In this experiment we compare the rotation and translation estimates of our methods to ground truth and also to the results of the 5pt method [START_REF] Nistér | An efficient solution to the five-point relative pose problem[END_REF] and the 2pt method [START_REF] Saurer | Homography based egomotion estimation with a common direction[END_REF]. Table II lists the results of the rotation estimation and Table III lists the results for the translation estimation. The relative rotations and translations between two consecutive images are compared to the ground-truth relative poses, and the tables show the median error for each individual sequence. For rotation estimation both the RANSAC variant and the histogram voting scheme were tested. For the RANSAC variant a fixed number of 100 iterations with an inlier threshold of 2 pixels has been used. For the histogram voting a rotation hypothesis is computed exhaustively for every point correspondence and entered into a histogram; the rotation value at the peak in the histogram is then selected. Both the 0.5pt method and the histogram voting method provide better results than the 5pt method and the 2pt method. The histogram voting method is slightly more accurate than the 0.5pt method. In subsequent experiments, we use the histogram voting method to estimate the rotation first, then estimate the translation using the different methods.
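For reference, the simple y-coordinate pre-test used to produce Table I can be sketched as follows (a hedged illustration with an assumed helper name, not the authors' code; the 1-pixel threshold is the one reported above):

```python
import numpy as np

def split_far_near(pts_i, pts_j, thresh_px=1.0):
    """Correspondences whose y-coordinate moves by at most thresh_px pixels are
    treated as far (rotation-only) features; the rest are near features kept
    for the translation estimation.  pts_i, pts_j are (N, 2) pixel coordinates
    of matched, gravity-aligned features."""
    dy = np.abs(np.asarray(pts_j)[:, 1] - np.asarray(pts_i)[:, 1])
    far = dy <= thresh_px
    return far, ~far
```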
The translation error for all sequences is shown in Table III. All four methods for translation estimation, the 1.5pt linear solution method (1.5pt lin), the 1pt method by discrete sampling of the relative height change (1pt h), the 1pt method by discrete sampling for x-z translation direction (1pt d) and the 1pt method by discrete sampling of the in-plane scale change (1pt s), are compared to ground truth. The 1.5pt method is used within a RANSAC loop with a fixed number of 100 iterations and an inlier threshold of 2 pixels. For the 1pt methods an exhaustive search is performed and the solution with the highest number of inliers is used. The table shows that all of our methods provide better results than the 2pt method. The 1pt methods for translation estimation and the 5pt method are more accurate than the linear solution using 1.5pt. The 1pt d method offers the best overall performance among all the translation estimation methods.
3) Inlier recovery rate: The main usage for our proposed algorithms should be to efficiently find a correct inlier set which can then be used for accurate motion estimation, e.g. using non-linear optimization (possibly also using our motion estimates as initial values). We therefore perform an experiment that tests how many of the real inliers (calculated from the ground truth) can be found by our methods. This inlier recovery rate is shown in Table IV as an average over all sequences (with an inlier threshold of 2 pixels). All four of our methods can be used to find a correct inlier set, and they provide a more complete inlier set than the 2pt method. The inlier recovery rate of the 1pt d method is slightly better than that of the 5pt method. Inlier detection using the 1pt d method is shown in Figure 5.

VII. CONCLUSION

The presented algorithms allow motion estimates and inlier sets to be computed by exhaustive search or by histogram voting. This is an interesting alternative to the traditional RANSAC method. RANSAC finds an inlier set with high probability but there is no guarantee that it is really a good one. Also, our experiments demonstrate that the assumptions taken in these algorithms are commonly met in road driving scenes (e.g. the KITTI data set), which could be a very interesting application area for them.

Fig. 1. Rotation error with "PLANAR" setting: Evaluation of the 0.5pt method, histogram voting and 2pt method. Left: forward motion, right: sideways motion. (a) and (b) are with varying image noise. (c), (d), (e) and (f) are under different IMU noise and constant image noise of 0.5 pixel standard deviation. (c) and (d) are with Roll angle noise. (e) and (f) are with Pitch angle noise.

Fig. 2. Translation error with "PLANAR" setting: Evaluation of the 1.5pt linear solution method, 1pt method by sampling for x-z translation direction and 2pt method. Left: forward motion, right: sideways motion. (a) and (b) are with varying image noise. (c), (d), (e) and (f) are under different IMU noise and constant image noise of 0.5 pixel standard deviation. (c) and (d) are with Roll angle noise. (e) and (f) are with Pitch angle noise.

Fig. 3. Rotation error with "MIXED" setting: Evaluation of the 0.5pt method, histogram voting and 2pt method. Left: forward motion, right: sideways motion. (a) and (b) are with varying image noise. (c), (d), (e) and (f) are under different IMU noise and constant image noise of 0.5 pixel standard deviation. (c) and (d) are with Roll angle noise. (e) and (f) are with Pitch angle noise.
Fig. 4. Translation error with "MIXED" setting: Evaluation of the 1.5pt linear solution method, 1pt method by sampling for x-z translation direction and 2pt method. Left: forward motion, right: sideways motion. (a) and (b) are with varying image noise. (c), (d), (e) and (f) are under different IMU noise and constant image noise of 0.5 pixel standard deviation. (c) and (d) are with Roll angle noise. (e) and (f) are with Pitch angle noise.
Fig. 5. Inlier detection example, left: previous frame; right: current frame. (a) Ground truth inliers: 885 matches; (b) Inliers detected by the 1pt d method: 884 matches.
TABLE I: EFFECT OF THE Y-COORDINATE TEST FOR OUTLIER REMOVAL.
Sequence | NumSIFT | NumRemove | Ratio
00 (4541 images) | 634 | 463 | 74.15%
01 (1101 images) | 398 | 213 | 52.26%
02 (4661 images) | 648 | 508 | 78.74%
03 (801 images) | 886 | 572 | 65.50%
04 (271 images) | 561 | 421 | 74.83%
05 (2761 images) | 672 | 450 | 69.82%
06 (1101 images) | 539 | 383 | 71.32%
07 (1101 images) | 750 | 446 | 63.97%
08 (4071 images) | 658 | 460 | 71.65%
09 (1591 images) | 581 | 434 | 75.27%
10 (1201 images) | 652 | 454 | 73.35%
Table II lists the results of the rotation estimation and Table III lists the results for the translation estimation.
In this experiment the relative rotations and translations between two consecutive images are compared to the ground truth relative poses. The tables show the median error for each individual sequence. For rotation estimation the RANSAC variant and the histogram voting scheme were tested. For the RANSAC variant a fixed number of 100 iterations with an inlier threshold of 2 pixels has been used. For the histogram voting a rotation hypothesis is computed for every point correspondence exhaustively and entered into a histogram.
TABLE II: ROTATION ERROR FOR KITTI SEQUENCES [DEGREES].
Seq. | 0.5pt method | Histogram voting | 5pt | 2pt
00 | 0.060 | 0.051 | 0.13 | 0.24
01 | 0.073 | 0.063 | 0.12 | 0.23
02 | 0.068 | 0.057 | 0.12 | 0.26
03 | 0.068 | 0.057 | 0.11 | 0.21
04 | 0.074 | 0.030 | 0.11 | 0.17
05 | 0.049 | 0.032 | 0.11 | 0.21
06 | 0.073 | 0.050 | 0.11 | 0.21
07 | 0.052 | 0.034 | 0.11 | 0.19
08 | 0.051 | 0.037 | 0.12 | 0.19
09 | 0.079 | 0.094 | 0.12 | 0.24
10 | 0.059 | 0.048 | 0.12 | 0.22
TABLE III: TRANSLATION ERROR FOR KITTI SEQUENCES [DEGREES].
Seq. | 1.5pt lin | 1pt h | 1pt d | 1pt s | 5pt | 2pt
00 | 4.23 | 1.90 | 1.64 | 1.58 | 1.93 | 8.03
01 | 7.34 | 2.01 | 1.18 | 1.20 | 1.41 | 10.74
02 | 3.68 | 1.83 | 1.53 | 1.54 | 1.53 | 6.47
03 | 4.69 | 2.13 | 1.88 | 2.58 | 2.12 | 8.61
04 | 2.64 | 0.95 | 0.88 | 0.92 | 1.19 | 5.45
05 | 3.92 | 1.57 | 1.37 | 1.34 | 1.67 | 7.90
06 | 4.02 | 1.27 | 1.20 | 1.12 | 1.37 | 5.57
07 | 4.89 | 2.20 | 1.82 | 1.89 | 2.37 | 10.09
08 | 4.23 | 2.17 | 1.86 | 1.84 | 2.06 | 7.41
09 | 4.20 | 2.04 | 1.53 | 1.53 | 1.54 | 7.20
10 | 3.90 | 1.78 | 1.61 | 1.58 | 1.73 | 7.39
TABLE IV: INLIER RECOVERY RATE FOR ALL KITTI SEQUENCES.
Seq. | 1.5pt lin | 1pt h | 1pt d | 1pt s | 5pt | 2pt
all | 88.47% | 96.97% | 98.29% | 96.51% | 98.27% | 84.37%
ACKNOWLEDGMENT
This work has been partially funded by the CopTer Project of Grands Réseaux de Recherche Haut-Normands.
45,940
[ "174822", "3900" ]
[ "495277", "23832", "495876", "488717", "163126", "484144" ]
01756824
en
[ "spi", "math" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01756824/file/Uncertainty%20Analysis_renewable_energy_2nd%20review_bf.pdf
Xingyu Yan Dhaker Abbes Bruno Francois email: bruno.francois@ec-lille.fr Uncertainty Analysis for Day Ahead Power Reserve Quantification in an Urban Microgrid Including PV Generators Keywords: Power reserve scheduling, renewable energy sources, forecast errors, uncertainty analysis, reliability.
Setting an adequate operating power reserve (PR) to compensate unpredictable imbalances between generation and consumption is essential for power system security. The operating power reserve should be carefully sized, but also ideally minimized and dispatched so as to reduce operation costs while keeping a satisfying security level. Although several energy generation and load forecasting tools have been developed, decision-making methods are still required to estimate the operating power reserve amount, as well as its dispatch over generators, during small time windows and with the capability to adapt to markets, such as new ancillary service markets. This paper proposes an uncertainty analysis method for power reserve quantification in an urban microgrid with a high penetration ratio of PV (photovoltaic) power. First, forecasting errors of PV production and load demand are estimated one day ahead by using artificial neural networks. Then two methods are proposed to calculate the net demand error one day ahead. The first performs a direct forecast of the error; the second calculates it from the available PV power and load demand forecast errors. This remaining net error is analyzed with dedicated statistical and stochastic procedures. Hence, according to an accepted risk level, a method is proposed to calculate the required PR for each hour.
I. INTRODUCTION
To maintain the security and reliability of grids with a high share of renewable generators, primary, secondary and tertiary regulation as well as spinning reserve are now required from renewable generators in more and more grid codes [START_REF] Ipakchi | Grid of the future[END_REF][START_REF] Galiana | Scheduling and pricing of coupled energy and primary, secondary, and tertiary reserves[END_REF]. This operating power reserve should ideally be minimized to reduce system costs while keeping a satisfying security level. Typically, PV power generation forecasting is needed to optimize the operation and to reduce the cost of power systems, especially for the scheduling and dispatching of the required hourly operating reserve [START_REF] Mills | Integrating Solar PV in Utility System Operations[END_REF]. However, the prediction uncertainty associated with a forecast cannot be eliminated, even with the best model tools. In addition to the load demand uncertainty, the combination of power generation and consumption variability with forecast uncertainty makes it more difficult for power system operators to schedule and set the power reserve level. Therefore, the uncertainties from both generation and consumption must be taken into account by an accurate stochastic model for power system management. In addition, forecasting errors from a system uncertainty analysis could be used to set the power reserve [START_REF] Sobu | Dynamic optimal schedule management method for microgrid system considering forecast errors of renewable power generations[END_REF]. Historically, most conventional utilities have adopted deterministic criteria for the reserve requirement: the operating rules required the PR to be greater than the capacity of the largest on-line generator or a fraction of the load, or equal to some function of both of them. Such deterministic criteria are widely used because of their simplicity and ease of application.
However, these deterministic calculation methods are gradually being replaced by probabilistic methods that account for the stochastic factors underlying system reliability. Several research works have focused on calculating the total system uncertainty from all the variable sources. Based on dynamic simulations, the study in [START_REF] Delille | Dynamic frequency control support by energy storage to reduce the impact of wind and solar generation on isolated power system's inertia[END_REF] focuses on the dynamic frequency control of an isolated system and the reduction of the impact due to large shares of wind and PV power. However, this work did not consider other aspects, such as variability and forecast accuracy. A deterministic approach is proposed in [START_REF] Vosa | Revision of reserve requirements following wind power integration in island power systems[END_REF] to analyze the flexibility of thermal generation to balance wind power variations and prediction errors; a stochastic analysis could improve it by making it possible to quantify the power reserve with a risk index. A stochastic model was developed in [START_REF] Sansavini | A stochastic framework for uncertainty analysis in electric power transmission systems with wind generation[END_REF] to simulate the operation and the line disconnection events of the transmission network due to overloads beyond the rated capacity. The issue is that some system states, in terms of power request and supply, are critical for network vulnerability and may induce a cascade of line disconnections leading to a massive network blackout. An insurance strategy is proposed in [START_REF] Yang | Insurance strategy for mitigating power system operational risk introduced by wind power forecasting uncertainty[END_REF] to cover the possible imbalance cost that a wind power producer may incur in electricity markets; Monte Carlo simulations are used to estimate the insurance premiums and excesses, which requires a significant calculation time. Our previous works in [START_REF] Buzau | Quantification of operating power reserve through uncertainty analysis of a microgrid operating with wind generation[END_REF][START_REF] Yan | Operating power reserve quantification through PV generation uncertainty analysis of a microgrid[END_REF] showed that forecasting errors from a system uncertainty analysis could be used for PR setting. Following these promising results and experiences, we have carried out further investigations on rigorous methods to quantify the required PR. The task is to calculate it by considering the uncertainties from the PV prediction and the load forecast, or by means of a direct uncertainty estimation. In the second part of this paper, PV power and load uncertainty and variability are analyzed. Then, artificial neural network based prediction methods are applied to forecast the PV power, the load demand and the errors. In the third part, the Net Demand (ND) forecast uncertainty is obtained, for each hour of the next day, as the difference between the forecasted production uncertainty and the forecasted load uncertainty. Two methods are detailed to calculate the ND forecast errors. An hourly probability density function of all predicted ND forecast errors is used for the error analysis. In the fourth part, a method is explained to assess the accuracy of these predictions and to quantify the required operating PR to compensate the system power imbalance due to these errors.
The power reserve is obtained by choosing a risk level related to two reliability assessment indicators: the loss of load probability (LOLP) and the expected energy not served (EENS). Finally, this management tool is demonstrated through an illustrative example.
II. METHODOLOGY
A. PV Power and Load Uncertainty Analysis
The PV power variability is the expected change in generation, while the PV power uncertainty is the unexpected change from what was anticipated, such as sudden cloud cover. The former depends on the latitude and the rotation of the Earth, while the latter is mostly caused by uncertain conditions, such as cloud variations over the PV plant. The movement of clouds introduces a significant uncertainty that can result in rapid fluctuations in solar irradiance and therefore in PV power output. However, the influence of a moving cloud and, hence, the shading of an entire PV site depend on the PV area, cloud speed, cloud height and many other factors. Data from solar installations covering a large spatial extent have an hourly temporal dynamic, while individual zones have instantaneous dynamics, as in local distribution networks or microgrids. The daily operation of a power system should be matched to load variations to maintain system reliability. This reliability refers to two areas:
- system adequacy, which depends on sufficient facilities within the system to satisfy system operational constraints and load demand, and
- system security, which is the system's ability to respond to dynamic disturbances.
When RES represent a significant part of the power generation, the system operating power reserve must be larger to regulate the variations and maintain the security level. This additional power is required to stabilize the electrical network. Classically this power reserve is provided by controllable generators (gas turbines, diesel plants, etc.). Today, increasing the balancing power reserves leads to a significant increase in the power system operating cost, and thus a system may limit the PV power penetration because of the variability and uncertainty over short time scales. There are different ways to manage variability and uncertainty. In general, system operators and planners use mechanisms including forecasting, scheduling, economic dispatch, and power reserves to ensure performance that satisfies reliability standards at the least cost. The earlier system operators and planners know what kind of variability and uncertainty they will have to deal with, the more options they will have to accommodate it and the cheaper it will be. The key task of variability and uncertainty management is to maintain a reliable operation of the power system (grid connected or isolated) while keeping down costs. Energy management of electrical systems is usually implemented over different time scales. One day ahead, system operators have to balance the load demand with the electrical generation by planning the starting and set points of controllable generators on an hourly time step. Risks are also considered and thus a power reserve also has to be planned hourly. During the day, an unexpected lack of PV power is compensated by injecting a primary power reserve. The PV variability can be separated into different time scales associated with different impacts on grid management and costs. Consequently, more capacity to compensate errors in forecasts or unexpected events must be accommodated.
The instantaneous PV power output is affected by many correlated external and physical inputs, such as irradiance, humidity, pressure, cloud cover percentage, air/panel temperature and wind speed. The output power per unit surface is modeled by [START_REF] Fuentes | Application and validation of algebraic methods to predict the behaviour of crystalline silicon PV modules in Mediterranean climates[END_REF]:
P_PV(t) = η · A · I_r(t) · (1 − C_p · (T(t) − 25))   (1)
where η is the power conversion efficiency of the module (%), A is the surface area of the PV panels (m²), I_r is the global solar radiation (kW/m²), T is the outside air temperature (°C) and C_p is the cell maximum power temperature coefficient (taken equal to 0.0035, although it can vary from 0.005 to 0.003 per °C in crystalline silicon). The PV power, solar irradiance and temperature of our lab PV plant have been recorded during three continuous days (22/06/2010 - 24/06/2010) and are presented in Fig. 1. The PV power variability is highly correlated with the irradiance, as well as with the temperature, while the PV power uncertainty is almost entirely caused by irradiance changes. Sensed PV power data points can be drawn against the sensed irradiance and temperature data points in order to highlight these correlations (Fig. 2). The local load consumption demand is also highly unpredictable and quite random. It depends on different factors, such as the economy, the time, the weather, and other random effects. However, for power system planning and operation, load demand variation and uncertainty analysis are crucial for power flow studies or contingency analysis. As for the PV production, load demand variations exist at all time scales and system actions are needed for power control in order to maintain the balance.
B. Power Forecasting Methodology
1) PV Power Forecasting
In recent decades, several forecasting models of energy production have been published [START_REF] Espinar | Photovoltaic Forecasting: A state of the art[END_REF][START_REF] Yan | Solar radiation forecasting using artificial neural network for local power reserve[END_REF][START_REF] De Rocha | Photovoltaic forecasting with artificial neural networks[END_REF][START_REF] Mohammed | Probabilistic Forecasting of Solar Power: An Ensemble Learning Approach[END_REF][START_REF] Golestaneh | Generation and evaluation of space-time trajectories of photovoltaic power[END_REF]. For PV power, one method consists in forecasting the solar radiation and then computing the PV power with a mathematical model of the PV generator. A second one directly predicts the PV power output from environmental data (irradiance, temperature, etc.). Statistical analysis tools are generally used, such as linear/multiple-linear/non-linear regression and autoregressive models based on time series regression analysis [START_REF] De Giorgi | Photovoltaic power forecasting using statistical methods: impact of weather data[END_REF]. These forecasting models rely on modeling the relationships between influential inputs and the produced output power. Consequently, the mathematical model calibration and parameter adjustment process takes a long time.
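As a minimal numerical sketch of the conversion model in Eq. (1): the parameter values used below (efficiency, panel area, temperature coefficient except C_p, and the toy irradiance/temperature profiles) are illustrative assumptions, not the values of the laboratory plant.

```python
import numpy as np

def pv_power_kw(irradiance_kw_m2, temp_c, eta=0.15, area_m2=120.0, c_p=0.0035):
    """PV output of Eq. (1): P_PV = eta * A * I_r * (1 - C_p * (T - 25)).

    irradiance_kw_m2 and temp_c can be scalars or hourly numpy arrays,
    so a whole day-ahead profile can be evaluated in one call.
    """
    i_r = np.asarray(irradiance_kw_m2, dtype=float)
    t = np.asarray(temp_c, dtype=float)
    return eta * area_m2 * i_r * (1.0 - c_p * (t - 25.0))

# Example: a crude clear-sky irradiance bell and a warm afternoon (placeholders).
hours = np.arange(24)
irradiance = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None) * 0.9      # kW/m^2
temperature = 18 + 10 * np.clip(np.sin((hours - 8) / 12 * np.pi), 0, None)  # deg C
print(np.round(pv_power_kw(irradiance, temperature), 2))
```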
Meanwhile, some intelligence-based electrical power generation forecast methods, such as expert systems, fuzzy logic and neural networks, are widely used to deal with the uncertainties of RES power generation and load demand [START_REF] Espinar | Photovoltaic Forecasting: A state of the art[END_REF][START_REF] Ghods | Different methods of long-term electric load demand forecasting; a comprehensive review[END_REF]. In daily markets, the hourly PV power output for the next day (day D+1) at time step h is represented as the sum of a day-ahead hourly PV power forecast (P̃v_h) and the forecast error (ε_h^PV):
Pv_h = P̃v_h + ε_h^PV   (2)
2) Load Demand Forecasting
For load demand forecasting, numerous variables directly or indirectly affect the accuracy. Until now, many methods and models have been tried out. In [START_REF] Ghods | Different methods of long-term electric load demand forecasting; a comprehensive review[END_REF], several long-term (month or year) load forecasting methods are introduced; they are very important for planning and developing future generation, transmission and distribution systems. In [START_REF] Hong | Long term probabilistic load forecasting and normalization with hourly information[END_REF], a long-term probabilistic load forecasting method is proposed with three modernized elements: predictive modeling, scenario analysis, and weather normalization. Long-term and short-term load forecasts play important roles in the formulation of secure and reliable operating strategies for the electrical power system. The objective is to improve the forecast accuracy in order to optimize power system planning and to reduce costs. The actual day-ahead load demand at time step h (L_h) is assumed to be the sum of the day-ahead forecasted load (L̃_h) and an error (ε_h^L):
L_h = L̃_h + ε_h^L   (3)
3) Net Demand Forecasting
Knowing the PV power forecast and the load demand forecast, the net demand forecast (ÑD_h) for a given time step h is expressed as:
ÑD_h = L̃_h − P̃v_h   (4)
The real net demand (ND_h) is composed of the forecasted day-ahead ND and a forecast error (ε_h^ND):
ND_h = ÑD_h + ε_h^ND   (5)
C. Application of Back-Propagation ANN to Forecast
In order to predict the net demand errors, as well as the PV and load forecast errors, we have developed several back-propagation (BP) Artificial Neural Networks (ANN) [START_REF] Francois | Orthogonal considerations in the design of neural networks for function approximation[END_REF]. Compared with conventional statistical forecasting schemes, an ANN has some additional advantages, such as simple adaptability to online measurements, data error tolerance and no need for excess information. Since the fundamentals of ANN-based predictors can be found in many sources, they are not recalled here.
III. NET DEMAND UNCERTAINTY ANALYSIS
A. Net Demand Uncertainty
In order to simplify the study, the uncertainties coming from conventional generators and network outages are ignored and only load and PV power uncertainties are considered. The error of the ND forecast then represents the ND uncertainty. Two possible methods are proposed to calculate the forecasted net demand error.
1) First Method: Forecast of the Day-ahead Net Demand Error
The real ND is the difference between the sensed load and the sensed PV power. Based on the historical sensed and forecasted database of the load demand and PV production, the past forecasted ND is calculated as the difference between the past forecasted load and the past forecasted PV power at time step h. The past ND forecast errors are then obtained by comparison with the past real ND (ND_h).
Hence these data are used to forecast the day-ahead ND errors ε̃_h^ND(D+1) (Fig. 3). The obtained ND error forecast can be characterized by its mean and variance (respectively μ_h^ND and (σ_h^ND)²).
2) Second Method: Calculation from the PV Power and the Load Forecast Error Estimates
A second method is to define the ND uncertainty as the combination of the PV power and load uncertainties. It is generally assumed that PV power and load forecast errors are unrelated random variables. So, first the day-ahead PV power and load forecasting errors (ε̃_h^PV and ε̃_h^L) are estimated independently. Then, the last 24-hour load forecast errors and PV power forecast errors are calculated as the differences between the sensed and forecasted load, and between the sensed and forecasted PV power, respectively (Fig. 4). The mean values and standard deviations of these forecasting errors can be obtained. The ND forecasting error is then obtained as a new random variable built from these two independent variables. The resulting pdf is also a normal distribution with the following mean and variance [START_REF] Ortega-Vazquez | Estimating the spinning reserve requirements in systems with significant wind power generation penetration[END_REF][START_REF] Bouffard | Stochastic security for operations planning with significant wind power generation[END_REF]:
μ_h^ND = μ_h^L − μ_h^PV   (6)
(σ_h^ND)² = (σ_h^L)² + (σ_h^PV)²   (7)
Equations (6) and (7) thus give the mean and standard deviation of the combined ND error.
B. Assessment of the Forecasting Uncertainty
The predicted errors (ε̃_h^ND) of the ND forecast (ÑD_h) are described by the normal probability density function (Fig. 5):
F(B | μ_h^ND, σ_h^ND) = ∫ from −∞ to B of pdf(ε_h^ND) dε_h^ND, with pdf(ε_h^ND) = [1/(σ_h^ND √(2π))] · exp(−(ε_h^ND − μ_h^ND)² / (2 (σ_h^ND)²))   (8)
The forecasting uncertainty can be represented as upper and lower bound margins around the ND forecast. The bound margins (B) are extracted with the normal inverse cumulative distribution function for a desired probability index x (Fig. 5):
B = F⁻¹(x | μ_h^ND, σ_h^ND), i.e. B such that F(B | μ_h^ND, σ_h^ND) = x   (9)
The lower bound (B_lower) and upper bound (B_up) are then obtained by adding these margins to the net demand forecast (Fig. 5).
IV. POWER RESERVE QUANTIFICATION
A. Reliability Assessment
Resulting from the uncertainty assessment, the pdf of the forecasted ND errors in a given time step is considered for the calculation of the power reserve [START_REF] Holttinen | Methodologies to determine operating reserves due to increased wind power[END_REF]. To estimate the impact of the forecast ND uncertainty, two common reliability assessment parameters are used: the loss of load probability (LOLP) and the expected energy not served (EENS) [START_REF] Li | A multi-state model for the reliability assessment of a distributed generation system via universal generating function[END_REF][START_REF] Wang | Spinning reserve estimation in microgrids[END_REF][START_REF] Liu | Quantifying spinning reserve in systems with significant wind power penetration[END_REF]. LOLP represents the probability that the load demand (L_h) exceeds the PV power (P_h) at time step h:
LOLP_h = prob(L_h − P_h > 0) = ∫ from R to +∞ of pdf(ε_h) dε_h   (10)
prob(L_h − P_h > 0) is also the probability that the power reserve (R) is insufficient to satisfy the load demand in the time step h.
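To illustrate Section III.B, the following is a minimal sketch (our own, using SciPy) of how the combined ND error statistics of Eqs. (6)-(7) and the uncertainty band of Eqs. (8)-(9) could be evaluated for one hour; the numerical values are placeholders, not measurements from the case study.

```python
from math import sqrt
from scipy.stats import norm

def nd_error_stats(mu_load, sigma_load, mu_pv, sigma_pv):
    """Combine independent load and PV forecast errors, Eqs. (6)-(7)."""
    mu_nd = mu_load - mu_pv
    sigma_nd = sqrt(sigma_load**2 + sigma_pv**2)
    return mu_nd, sigma_nd

def uncertainty_band(nd_forecast_kw, mu_nd, sigma_nd, prob_index=0.9):
    """Upper/lower margins around the ND forecast, Eqs. (8)-(9).

    prob_index is the desired probability x; the margins are the
    x and (1 - x) quantiles of the ND error distribution.
    """
    b_up = norm.ppf(prob_index, loc=mu_nd, scale=sigma_nd)        # F^-1(x)
    b_low = norm.ppf(1.0 - prob_index, loc=mu_nd, scale=sigma_nd)
    return nd_forecast_kw + b_low, nd_forecast_kw + b_up

# Example for one hour (placeholder statistics, in kW).
mu, sigma = nd_error_stats(mu_load=0.1, sigma_load=1.2, mu_pv=0.2, sigma_pv=1.3)
print(uncertainty_band(nd_forecast_kw=55.0, mu_nd=mu, sigma_nd=sigma, prob_index=0.9))
```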
Meanwhile, EENS measures the magnitude of the load demand not served:
EENS_h = prob(L_h − P_h > 0) · (L_h − P_h)   (11)
where (L_h − P_h) is the missing power in the time step h. In this situation, the grid operator can either disconnect a part of the loads or use the power reserve to increase the power production. After obtaining the forecast ND pdfs for each of the next 24 hours, an hourly day-ahead reliability assessment can be obtained. Electrical system operators can use this reliability to quantify the system security level.
B. Risk-constrained Energy Management
A reserve characteristic according to a risk level can be obtained for each time step. With a fixed risk index, the operator can then easily quantify the power reserve [START_REF] Vosa | Revision of reserve requirements following wind power integration in island power systems[END_REF]. As shown in Fig. 6, the sum of the shaded areas represents the accepted risk of violation with x% of LOLP. R is the power reserve needed to compensate the remaining power imbalance. The reliability assessment can thus be done with the hourly cumulative distribution function (cdf) obtained from the normal difference distribution of the ND errors; the cdf represents the probability that the random variable (here the ND error) is less than or equal to x. This assessment has been made under the assumption of a positive hourly forecasted ND. Otherwise, if the forecasted ND is negative, a power reserve for the same reliability level is unnecessary (the power generation exceeds the load demand). So the reliability has been assessed by considering only positive forecasted ND errors (L_h − P_h) for each time step. Then, LOLP is deduced with:
LOLP_h = 1 − prob(L_h − P_h ≤ 0) = 1 − ∫ from −∞ to R of pdf(ε_h) dε_h   (12)
When the LOLP equals the risk index x%, the reserve power (R) covers the remaining probability that the load demand exceeds the PV power generation (blue part in Fig. 6).
V. ILLUSTRATIVE CASE STUDY
A. Presentation and Data Collection
The studied urban microgrid has a 110 kW load peak and is powered with 17 kW of PV panels and three micro gas turbines of 30 kW, 30 kW and 60 kW respectively. Sensed data from our 17 kW PV plant located on the lab roof have been recorded in 2010 and 2013. For the load forecasting, past daily French power consumptions have been scaled to obtain per unit values of the local power consumption with the same characteristics and dynamics. A part of this database has been used to design the ANN-based forecasting tool, a part to assess the estimation quality, and a third one to implement the application of the proposed method in a real situation [START_REF] De Rocha | Photovoltaic forecasting with artificial neural networks[END_REF]. The ANN has been trained with past recorded data from the training set to predict the hourly PV output power. The forecast quality is assessed with the normalized root mean square error and the normalized mean absolute error:
nRMSE = sqrt( (1/n) Σ_{k=1}^{n} (y_k − ŷ_k)² )   (13)
nMAE = (1/n) Σ_{k=1}^{n} |y_k − ŷ_k|   (14)
where y_k is the measured value, ŷ_k the forecast and n the number of samples.
B. ANN Based Power Forecast and Net Demand Forecast
1) ANN based PV Power Forecasting
A three-layer ANN has been developed for the PV power generation prediction with: one input layer including the last n hours of measured PV power, of irradiance and of forecasted average temperature (obtained from our local weather information service) (Fig. 7); one hidden layer with 170 neurons; and one output layer with the 24 predicted PV power points (one for each hour). Various hidden layer sizes have been tested until an nRMSE below 5% was obtained.
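The three-layer forecaster just described can be prototyped in a few lines. The sketch below uses scikit-learn's MLPRegressor as a stand-in for the authors' back-propagation ANN, with the 170-neuron hidden layer and 24-hour output mentioned above; the feature construction and the data are placeholders, not the plant's recorded history.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_dataset(pv, irradiance, temp_forecast, n_hist=24):
    """Stack the last n_hist hours of PV power and irradiance with the
    day-ahead temperature forecast as inputs; target is the next 24 h of PV power."""
    X, Y = [], []
    for d in range(n_hist, len(pv) - 24, 24):
        features = np.concatenate([pv[d - n_hist:d],
                                   irradiance[d - n_hist:d],
                                   temp_forecast[d:d + 24]])
        X.append(features)
        Y.append(pv[d:d + 24])
    return np.array(X), np.array(Y)

# Toy data standing in for the recorded plant history.
rng = np.random.default_rng(0)
pv = rng.uniform(0, 17, 24 * 400)        # kW
irr = rng.uniform(0, 1, 24 * 400)        # kW/m^2
temp = rng.uniform(5, 30, 24 * 400)      # deg C

X, Y = build_dataset(pv, irr, temp)
n_train = int(0.6 * len(X))              # 60/20/20 split as in the text
model = MLPRegressor(hidden_layer_sizes=(170,), max_iter=2000, random_state=0)
model.fit(X[:n_train], Y[:n_train])
print(model.predict(X[n_train:n_train + 1]).shape)   # -> (1, 24) hourly forecast
```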
First, 60% of the previously sensed data (representing one year of data) have been used to train the ANN-based PV power forecasting tool. The next 20% of the sensed data are used to create a validation pattern set in order to assess the prediction quality. The test set (with the remaining 20% of the data) is used to implement the forecast error calculation. The obtained nRMSE and nMAE for the next 24 hours of PV power predictions are given in Table I. Predicted errors for 120 test days are given in Fig. 8. Absolute values are less than 0.4 p.u. of the PV power output. The largest errors occur in the middle of the day, when the PV power production is the highest.
2) ANN based Load Forecasting
Another neural network has been used for the load forecast. The load demand prediction model includes: an input layer with the last 48 hours of load demand measurements and the predicted temperatures for the next 24 hours, one hidden layer with 70 neurons (in order to get an nRMSE below 4%) and an output layer that predicts the next 24 hours of load demand. 60% of the available data are used for the neural network training, 20% for the validation and 20% for the tests. The prediction errors for 120 test days are shown in Fig. 9. As can be seen, the largest forecast errors occur at 8:00 and 18:00, yet the total absolute errors are less than 0.2 p.u. of the load demand. The obtained nRMSE and nMAE results are listed in Table II.
3) Net Demand Uncertainty
a) First Method: Direct Net Demand Forecast
Following the method highlighted in Fig. 3, another ANN is applied to forecast the ND errors: an input layer with the last 24 hours of predicted net demand errors, one hidden layer with 70 neurons and an output layer that predicts the next 24 hours of forecasted net errors. Application of the first method (Fig. 3) for the time step at 12:00 gives the error statistics shown in Fig. 10(c) (mean −0.1282, standard deviation 1.781).
C. Forecasting Uncertainty Assessment
By applying both proposed methods, the uncertainties of the PV power forecast, the load forecast and the net demand forecast with various probability indices (from 90% to 60%) on a random day are represented as a function of the forecast data and the predicted forecast errors. In order to simplify the explanation, results are given for the second method (corresponding to Fig. 5). As shown in Fig. 11, the uncertainty of the PV power forecast is higher in the middle of the day, when the PV system generates the highest power, while in the morning (from 6:00 to 10:00) and in the afternoon (from 17:00 to 21:00) the uncertainty is smaller. Obviously, the PV power forecasting uncertainty increases and decreases with the PV power. The uncertainty also increases when the time horizon is larger: for example, at 10:00 and at 17:00 the power outputs are almost at the same level (about 6.5 kW), but the uncertainty is larger at 17:00 than at 10:00. The load forecast has the same variation trend (Fig. 12). Fig. 13 depicts the ND uncertainty obtained with the first method. If the forecasted ND is positive, then additional power sources have to be programmed to cover the difference. Otherwise, if the forecasted ND is negative, then three actions must be considered to meet the low forecasted demand:
- A part of the PV power generators must be switched off (or operated at a sub-optimal level).
- Controllable loads (such as electrical vehicles or heating loads) must be switched on to absorb the excess available power.
- The available excess energy must be exported to the main grid.
D. Power Reserve Calculation with Fixed Risk Indices
The forecasted ND uncertainty assessment has been done with the hourly cumulative distribution function (cdf) obtained from the ND forecast errors.
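The reserve quantification of Section IV can be summarized numerically: given the hourly ND-error statistics, the reserve R for a target LOLP follows from the inverse cdf, and the energy not covered by the reserve can be evaluated as an expected shortfall beyond R. The sketch below is our own illustration with SciPy, not the authors' code, and the numbers are placeholders; the shortfall integral is one possible way to evaluate the energy not served, used here instead of Eq. (11).

```python
import numpy as np
from scipy.stats import norm

def reserve_for_lolp(mu_nd, sigma_nd, lolp_target):
    """Smallest reserve R such that prob(ND error > R) <= lolp_target,
    i.e. R = F^-1(1 - LOLP) for the hourly ND-error distribution."""
    return norm.ppf(1.0 - lolp_target, loc=mu_nd, scale=sigma_nd)

def expected_shortfall_kwh(mu_nd, sigma_nd, reserve_kw, n_grid=4000):
    """Numerical estimate of E[(error - R)^+] in kWh for a 1-hour step."""
    grid = np.linspace(reserve_kw, mu_nd + 8 * sigma_nd, n_grid)
    pdf = norm.pdf(grid, loc=mu_nd, scale=sigma_nd)
    return np.trapz((grid - reserve_kw) * pdf, grid)

# Placeholder hourly statistics (kW): reserve needed for 10% and 1% LOLP.
mu, sigma = 0.0, 5.5
for lolp in (0.10, 0.01):
    r = reserve_for_lolp(mu, sigma, lolp)
    print(f"LOLP {lolp:.0%}: R = {r:.1f} kW, shortfall = "
          f"{expected_shortfall_kwh(mu, sigma, r):.3f} kWh")
```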
Then, the hourly risk/reserve curve takes into account all the errors from the cdfs. Since the forecast ND errors can be expressed as a percentage x% of the rated power, the PR can be drawn according to the LOLP. Fig. 14 shows the required PR variation according to the LOLP and the EENS (with the second method of Section III). An operating PR chosen for x% of LOLP therefore covers a part of the forecast ND uncertainty. For example, with 10% of LOLP, the reserve power is 7 kW and the EENS is 0.2 kWh. In general, this operating reserve is limited not only by the risk indices but also by the availability of the micro-gas turbines. In Fig. 15, an assessment of the hourly reserve power required with the second method for different LOLP values is presented. Much more reserve is needed when the LOLP rate is very low, which corresponds to a high security level, while less reserve power is needed with a high LOLP rate, at the price of a higher risk. For example, with 1% of LOLP the necessary PR is 14 kW (EENS is almost zero) at 12:00, while the necessary reserve power is 7 kW with 10% of LOLP and the EENS then increases to 0.25 kWh. If a constant LOLP rate is set, the power reserve for each hour can be obtained. As shown in Fig. 16, with 1% of LOLP, more power reserve is needed in the middle of the day, when more PV power is generated. Moreover, the power reserve obtained with the second method is higher than with the direct ND forecast method. The most likely explanation of this result is that the load forecast uncertainty and the PV forecast uncertainty are not totally independent: since they share a common temperature, the combination of the PV power uncertainty and the load uncertainty is greater than the direct ND forecast uncertainty. This result can be used for power dispatch management.
VI. CONCLUSION
This work proposed a new technique to quantify the power reserve of a microgrid by taking into account the PV power forecasting uncertainty and the load forecasting uncertainty. In order to assess these uncertainties, a three-layer BP ANN is used to estimate the errors of the PV power and load forecasts. Two methods are proposed to obtain the ND forecast uncertainties: with the first method, the ND errors are forecasted directly; with the second, a probabilistic model forecasts the ND uncertainty distribution by combining the uncertainties from both the PV power and the load. The power reserve quantification results demonstrate that, with a fixed risk index, the power reserve for the next day or the next 24 hours can be evaluated to cover the risk. As the uncertainty from forecasting errors increases with the time horizon, future research work is oriented toward the implementation of intraday adjustment. The dispatch of the calculated power reserve onto micro-gas turbines, controllable loads and also new "PV based active power generators" is another interesting direction to explore.
Fig. 1. PV power, solar irradiance and temperature in three continuous days.
Fig. 2. Sensed PV power versus sensed irradiance and temperature.
Fig. 3. Net demand uncertainty calculation from ND error forecast.
Fig. 4. Net demand uncertainty calculation from PV power and load forecasting errors prediction.
Fig. 5. Net uncertainty calculation at hour h with a given probability.
Fig. 6. Calculation of power reserve requirements (R) based on forecast ND uncertainty (ε̃_h^ND) with x% of LOLP, at time step h.
Fig. 7. PV power, load forecasting and errors prediction with ANN.
Fig. 8. PV power prediction errors on 120 test days.
Fig. 9. Load prediction on 120 test days.
Fig. 10. Application of the first (direct) ND error forecast method; panel (c) shows the obtained parameters.
Fig. 11. PV forecasting with uncertainty (a random day).
Fig. 12. Load forecasting with uncertainty (a random day).
Fig. 13. Next 24 hours forecasted ND with uncertainty (a random day).
Fig. 14. Risk/reserve curve for LOLP_h+12 and EENS_h+12 at 12:00.
Fig. 15. Required power reserve for each hour with x% LOLP.
Fig. 16. Hourly power reserve for a fixed LOLP rate, for both methods.
TABLE I: ERRORS OF THE PV POWER FORECAST WITH ANN.
Data set | nRMSE [%] | nMAE [%]
Training set | 4.67 | 2.69
Validation set | 5.58 | 3.13
Test set | 5.95 | 3.12
TABLE II: ERRORS OF THE LOAD DEMAND FORECAST WITH ANN.
Data set | nRMSE [%] | nMAE [%]
Training set | 3.18 | 2.45
Validation set | 3.57 | 2.76
Test set | 3.67 | 2.84
ACKNOWLEDGMENT
The authors would like to thank the China Scholarship Council and Centrale Lille for their co-funding support.
32,655
[ "18655" ]
[ "13338", "13338", "13338" ]
01756831
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756831/file/NIMFA-INFOCOM-final.pdf
Vineeth S Varma Irinel-Constantin Morȃrescu Yezekael Hayel Continuous time opinion dynamics of agents with multi-leveled opinions and binary actions Keywords: Opinion dynamics, Social computing and networks, Markov chains, agent based models This paper proposes and analyzes a stochastic multiagent opinion dynamics model. We are interested in a multileveled opinion of each agent which is randomly influenced by the binary actions of its neighbors. It is shown that, as far as the number of agents in the network is finite, the model asymptotically produces consensus. The consensus value corresponds to one of the absorbing states of the associated Markov system. However, when the number of agents is large, we emphasize that partial agreements are reached and these transient states are metastable, i.e., the expected persistence duration is arbitrarily large. These states are characterized using an N-intertwined mean field approximation (NIMFA) for the Markov system. Numerical simulations validate the proposed analysis. I. INTRODUCTION Understanding opinion dynamics is a challenging problem that has received an increasing amount of attention over the past few decades. The main motivation of these studies is to provide reliable tools to fight against different addictions as well as propagation of undesired unsocial behaviors/beliefs. One of the major difficulties related to opinion dynamics is the development of models that can capture many features of a real social network [START_REF] Kerckhove | Modelling influence and opinion evolution in online collective behaviour[END_REF]. Most of the existing models assume that individuals are influenced by the opinion of their neighbors [START_REF] Ising | Contribution to the theory of ferromagnetism[END_REF], [START_REF] Degroot | Reaching a consensus[END_REF], [START_REF] Sznajd-Weron | Opinion evolution in closed community[END_REF], [START_REF] Deffuant | Mixing beliefs among interacting agents[END_REF], [START_REF] Hegselmann | Opinion dynamics and bounded confidence models, analysis, and simulation[END_REF], [START_REF] Morȃrescu | Opinion dynamics with decaying confidence: Application to community detection in graphs[END_REF]. Nevertheless, it is very hard to estimate these opinions. In order to relax this constraint of measuring the opinions and to model more realistic behaviors, a mix of continuous opinion with discrete actions (CODA) was proposed in [START_REF] Martins | Continuous opinions and discrete actions in opinion dynamics problems[END_REF]. This model reflects the fact that even if we often face binary choices or actions which are visible by our neighbors, the opinions evolve in a continuous space of values which are not explicitly visible to the neighbors. A multi-agent system with a CODA model was proposed and analyzed in [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF]. It was shown that this deterministic model leads to a variety of asymptotic behaviors including consensus. Due to the complexity of the opinion dynamics, we believe that stochastic models are more suitable than deterministic ones. Indeed, we can propose a realistic deterministic update rule but many random events will still influence the interaction network and consequently the opinion dynamics. 
For this reason, we consider that it is important to reformulate the model from [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF] in a stochastic framework, specifically as an interactive Markov chain. Similar approaches for the Deffuant and Hegselmann-Krause models have been considered in the literature (see for instance [START_REF] Lima | Agent based models and opinion dynamics as markov chains[END_REF], [START_REF] Lorenz | Consensus strikes back in the hegselmann-krause model of continuous opinion dynamics under bounded confidence[END_REF]). Although the asymptotic behavior of the model can be obtained by characterizing the absorbing states of the Markov chain, the convergence time can be arbitrarily large. Moreover, transient but persistent local agreements, called metastable equilibria, are very interesting because they describe the finite-time behavior of the network. Consequently, we consider in this paper an N-intertwined mean field approximation (NIMFA) based approach in order to characterize the metastable equilibria of the Markov system. It is noteworthy that NIMFA was successfully used to analyze and validate some epidemiological models [START_REF] Van Mieghem | Virus spread in networks[END_REF], [START_REF] Trajanovski | Decentralized protection strategies against sis epidemics in networks[END_REF]. In this work, we model the social network as a multi-agent system in which each agent represents an individual whose state is his opinion. This opinion can be understood as the preference of the agent towards performing a binary action, i.e., the action can be 0 or 1. The agents are interconnected through a directed interaction graph, whose edge weights represent the trust given by an agent to his neighbor. We propose continuous-time opinion dynamics in which the opinions are discrete and belong to a given set that is fixed a priori. Each agent is randomly influenced by the actions of his neighboring agents and consequently influences its neighbors. The opinion of an agent is therefore an intrinsic variable that is hidden from the other agents; the only visible variable is the action. As an example, consider the opinion of users regarding two products, Coca-Cola and Pepsi. A user may prefer Coca-Cola strongly, while other users might be more indifferent. However, what the other users see (and are therefore influenced by) is only what the user buys, which is the action taken. Our goal here is to analyze the behavior of the opinions in the network under the proposed stochastic dynamics. One of the main results states that the opinions always asymptotically reach a consensus defined by one of the extreme opinions. Nevertheless, for large networks, we emphasize that intermediate local agreements are reached and preserved for a long duration of time. The contributions of this paper can be summarized as follows. Firstly, we formulate and analyze a stochastic version of the CODA model proposed in [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF]. Secondly, we characterize the local agreements which are persistent for a long duration by using the NIMFA for the original Markov system. Thirdly, we give a complete characterization of the system behavior under a symmetric all-to-all connection assumption. Finally, we provide conditions for the preservation of the main action inside one cluster as well as for the propagation of actions.
The rest of the paper is organized as follows. Section II introduces the main notation and concepts and provides a description of the model used throughout the paper. The analysis of the asymptotic behavior of the opinions described by this stochastic model is provided in Section III; the presented results are valid for any connected network with a finite number of agents. Moreover, Section III contains the description of the NIMFA model and a method to compute its equilibria. In Section IV, we analyze the particular network in which each agent is connected to all the others. In this case, we show that only three equilibria exist, two of which are stable and correspond to the absorbing states of the system. Section V presents the theoretical analysis of the system under generic interaction networks. It also emphasizes conditions for the preservation of the main action (corresponding to a metastable state) in some clusters, as well as conditions for action propagation. The results of our work are numerically illustrated in Section VI. The paper ends with some concluding remarks and perspectives for further developments.
Preliminaries: We use E for the expectation of a random variable, 1_A(x) for the indicator function which takes the value 1 when x ∈ A and 0 otherwise, R_+ for the set of non-negative reals and N = {1, 2, …} for the set of natural numbers.
II. MODEL
Throughout the paper we consider N ∈ N an even number of possible opinion levels, and the set of agents K = {1, 2, …, K} with K ∈ N. Each agent i is characterized at time t ∈ R_+ by its opinion, represented as a scalar X_i(t) ∈ Θ, where Θ = {θ_1, θ_2, …, θ_N} is the discrete set of possible opinions, such that θ_n ∈ (0, 1)\{0.5} and θ_n < θ_{n+1} for all n ∈ {1, 2, …, N−1}. Moreover, Θ is constructed such that θ_{N/2} < 0.5 and θ_{N/2+1} > 0.5. In the following, let us introduce some graph notions allowing us to define the interaction structure in the social network under consideration.
Definition 1 (Directed graph): A weighted directed graph G is a couple (K, A) with K a finite set denoting the vertices and A a K × K matrix whose elements a_ij denote the trust given by agent i to agent j. We say that agent j is a neighbor of agent i if a_ij > 0. We denote by τ_i the total trust in the network for agent i, τ_i = Σ_{j=1}^{K} a_ij. Agent i is said to be connected with agent j if G contains a directed path from i to j, i.e., if there exists at least one sequence (i = i_1, i_2, …, i_{p+1} = j) such that a_{i_k, i_{k+1}} > 0, ∀k ∈ {1, 2, …, p}.
Definition 2 (Strongly connected): The graph G is strongly connected if any two distinct agents i, j ∈ K are connected. In the sequel we suppose the following holds true.
Assumption 1: The graph (K, A) modeling the interaction in the network is strongly connected.
TABLE I: Notations used
Q_i(t) | Action of agent i, Q_i(t) = ⌊X_i(t)⌉ ∈ {0, 1}
a_ij | Trust of agent i in agent j
τ_i | Trust of agent i in the network, τ_i = Σ_{j∈K} a_ij
v_{i,n}(t) | Probability that X_i(t) = θ_n
R_i(t) | Influence on i to shift opinion towards 1, R_i(t) = Σ_{j∈K} a_ij Q_j(t)
L_i(t) | Influence on i to shift opinion towards 0, L_i(t) = Σ_{j∈K} a_ij (1 − Q_j(t))
r_i(t) | Expected influence on i to shift opinion towards 1, r_i(t) = Σ_{j∈K} a_ij Σ_{n=N/2+1}^{N} v_{j,n}(t)
l_i(t) | Expected influence on i to shift opinion towards 0, l_i(t) = Σ_{j∈K} a_ij Σ_{n=1}^{N/2} v_{j,n}(t)
ν_n(t) | Expected fraction of the population with opinion θ_n, ν_n = (1/K) Σ_{i∈K} v_{i,n}(t)
ν_+(t), ν_−(t) | Expected fraction of the population with action 1, ν_+(t) = (1/K) Σ_{i∈K} Σ_{n=N/2+1}^{N} v_{i,n}(t), or action 0, ν_−(t) = (1/K) Σ_{i∈K} Σ_{n=1}^{N/2} v_{i,n}(t)
ν_n^C(t) | Expected fraction of the population in C ⊆ K with opinion θ_n, ν_n^C(t) = (1/|C|) Σ_{i∈C} v_{i,n}(t)
ν_+^C(t), ν_−^C(t) | Expected fraction of the population in C ⊆ K with action 1, ν_+^C(t) = (1/|C|) Σ_{i∈C} Σ_{n=N/2+1}^{N} v_{i,n}(t), or action 0, ν_−^C(t) = (1/|C|) Σ_{i∈C} Σ_{n=1}^{N/2} v_{i,n}(t)
The action Q_i(t) taken by agent i at time t is defined by the opinion X_i(t) through the relation Q_i(t) = ⌊X_i(t)⌉, where ⌊·⌉ denotes the nearest integer function. This means that if an agent has an opinion larger than 0.5, it takes the action 1, and 0 otherwise. This kind of opinion quantization is suitable for many practical applications. For example, an agent may support the left or the right political party, with various opinion levels (opinions close to 0 or 1 represent a stronger preference); however, in an election, the agent's action is to vote with exactly two choices (left or right). Similarly, an agent might have to choose between two cars or other types of merchandise, like the cola mentioned in the introduction. Although its preference for one product is not of the type 0 or 1, its action will be, since it cannot buy fractions of cars, but one of them. For ease of exposition, we provide Table I, a list of notations and their meanings.
A. Opinion dynamics
In this work, we look at the evolution of the opinions of the agents based on their mutual influence. We also account for the inertia of opinion: when the opinion of an agent is closer to 0.5, he is more likely to shift since he is less decisive, whereas someone with a strong opinion (close to 1 or 0) is less likely to shift his opinion, being more convinced by it. The opinion of agent j may shift towards the actions of its neighbors with a rate β_n while X_j(t) = θ_n. If no action is naturally preferred by the opinion dynamics, then we construct θ_n = 1 − θ_{N+1−n} and assume that β_n = β_{N+1−n} for all n ∈ {1, 2, …, N}. At each time t ∈ R_+ we denote by X(t) = (X_1(t), …, X_K(t)) the vector collecting all the opinions in the network. Notice that the evolution of X(t) is described by a continuous-time Markov process with N^K states, and its analysis is complicated even for a small number of opinion levels and a relatively small number of agents. The stochastic transition rate of agent i shifting its opinion to the right, i.e., to opinion θ_{n+1} from opinion θ_n, with n ∈ {1, 2, …, N−1}, is given by
β_n Σ_{j=1}^{K} a_ij 1_{(0.5,1]}(X_j(t)) = β_n Σ_{j=1}^{K} a_ij Q_j(t) = β_n R_i(t).
Similarly, the transition rate to the left, i.e., from θ_n to θ_{n−1}, for n ∈ {2, …, N}, is given by
β_n Σ_{j=1}^{K} a_ij 1_{[0,0.5)}(X_j(t)) = β_n Σ_{j=1}^{K} a_ij (1 − Q_j(t)) = β_n L_i(t).
Therefore, we can write the infinitesimal generator M_{i,t} (a tri-diagonal matrix of size N × N) for an agent i as:
M_{i,t} =
[ −β_1 R_i(t)   β_1 R_i(t)    0            …    0 ]
[ β_2 L_i(t)    −β_2 τ_i      β_2 R_i(t)   …    0 ]
[ …             …             …            …    … ]
[ 0             …             0    β_N L_i(t)   −β_N L_i(t) ]   (1)
with the element in the n-th row and m-th column denoted M_{i,t}(n, m) and
∀n ∈ {1, …, N−1}, M_{i,t}(n, n+1) = β_n R_i(t),
∀n ∈ {2, …, N}, M_{i,t}(n, n−1) = β_n L_i(t),
∀|m − n| > 1, M_{i,t}(m, n) = 0, and
M_{i,t}(n, n) = −β_1 R_i(t) for n = 1, −β_n τ_i for n ∈ {2, …, N−1}, −β_N L_i(t) for n = N.
Let v_{i,n}(t) := E[1_{θ_n}(X_i(t))] = Pr(X_i(t) = θ_n) be the probability of opinion level θ_n for agent i at time t. Then, in order to propose an analysis of the stochastic process introduced above, we may consider the mean-field approximation obtained by replacing the transitions by their expectations. The expected transition rate from state n to state n+1, for K → ∞, is then given by:
β_n Σ_{j=1}^{K} a_ij E[1_{(0.5,1]}(X_j(t))] = β_n Σ_{j=1}^{K} a_ij Σ_{n=N/2+1}^{N} v_{j,n}(t).
We have a similar expression for the transition between states n and n−1.
III. STEADY STATE ANALYSIS
Define by θ− = (θ_1, …, θ_1) and θ+ = (θ_N, …, θ_N) the states where all the agents in the network have an identical opinion, corresponding to the two extreme opinions.
Proposition 1: Under Assumption 1, the continuous-time Markov process X(t), with (1) as the infinitesimal generator of each agent, has exactly two absorbing states, X(t) = θ+ and X(t) = θ−.
Proof: We can verify that θ+ and θ− are absorbing states by evaluating the transition rates using (1). If X(t) = θ−, then X_i(t) = θ_1 for all i, and the transition rate vector is
(1, 0, …, 0) M_{i,t} = (−β_1 R_i(t), β_1 R_i(t), 0, …, 0).   (2)
But as X_i = θ_1 for all i, Q_i(t) = 0 for all i and so we have R_i(t) = 0 for all i. Therefore the transition rate from this state is 0. We can similarly show that θ+ is also an absorbing state. Next, we show that no other state can be an absorbing state. Consider any state with at least one agent i such that X_i(t) = θ_n with 1 < n < N. The transition rate from such a state is never 0, which is easy to see since M_{i,t}(n, n) = −β_n τ_i ≠ 0. This implies that as long as such an agent exists, the global state is not an absorbing state. The only states which are not θ+, θ− or of this form are those satisfying the following property: X_i(t) = θ_1 for all i ∈ S and X_i(t) = θ_N for all i ∈ K \ S, with S ⊂ K and 1 < |S| < K. As the graph is strongly connected, there is at least one agent k in S which is directly connected to some agent l in K \ S, i.e., a_{k,l} > 0. The transition rate vector of this agent k is given by (−β_1 R_k(t), β_1 R_k(t), 0, …, 0). As a_{k,l} > 0 and Q_l(t) = 1 (all agents outside S have opinion θ_N), we have R_k(t) > 0. Therefore such a state is never an absorbing state either, which concludes the proof that θ+ and θ− are the two absorbing states and no other state is an absorbing state.
Considering the NIMFA approximation, the dynamics of the opinion of an agent i are given by:
v̇_{i,1} = −β_1 r_i v_{i,1} + β_2 l_i v_{i,2}
v̇_{i,n} = −β_n r_i v_{i,n} + β_{n+1} l_i v_{i,n+1} − β_n l_i v_{i,n} + β_{n−1} r_i v_{i,n−1}
v̇_{i,N} = −β_N l_i v_{i,N} + β_{N−1} r_i v_{i,N−1}   (3)
for all i ∈ K and 1 < n < N, where
l_i = Σ_{j∈K} a_ij E[1 − Q_j] = Σ_{j∈K} Σ_{n=1}^{N/2} a_ij v_{j,n},
r_i = Σ_{j∈K} a_ij E[Q_j] = Σ_{j∈K} Σ_{n=N/2+1}^{N} a_ij v_{j,n},   (4)
and Σ_n v_{i,n} = 1. We can easily verify that X_i = θ_1, i.e., v_{i,1} = 1 for all i, is an equilibrium of the above set of equations.
When v_{i,1} = 1 for all i, we have v_{i,n} = 0 for all n ≥ 2 and, as a result, l_i = τ_i and r_i = 0 for all i, which gives v̇_{i,n} = 0 for all i, n. Apart from the extreme solutions θ+ and θ−, the non-linearity of system (3) can give rise to interior rest points which are locally stable. Such rest points are referred to as metastable states in physics. Metastability of Markov processes is precisely defined in [START_REF] Huisinga | Phase transitions and metastability in markovian and molecular systems[END_REF], where the exit times from these metastable states are shown to approach infinity. For a given r_i = E[R_i(t)], an equilibrium state v*_{i,n} must satisfy the following conditions:
0 = −β_1 (r_i/τ_i) v*_{i,1} + β_2 ((τ_i − r_i)/τ_i) v*_{i,2}
0 = −β_n v*_{i,n} + β_{n+1} ((τ_i − r_i)/τ_i) v*_{i,n+1} + β_{n−1} (r_i/τ_i) v*_{i,n−1}
0 = −β_N ((τ_i − r_i)/τ_i) v*_{i,N} + β_{N−1} (r_i/τ_i) v*_{i,N−1}   (5)
We can express any v*_{i,n} in terms of v*_{i,1} as
v*_{i,n} = (β_1/β_n) (r_i/(τ_i − r_i))^{n−1} v*_{i,1}.   (6)
As the sum of the v_{i,n} over n must be 1, we can solve for v*_{i,1} as
v*_{i,1} = 1 / [ Σ_{n=1}^{N} (β_1/β_n) (r_i/(τ_i − r_i))^{n−1} ].   (7)
We can then use this relationship to construct a fixed-point algorithm that computes a rest point of the global opinion dynamics for all users.
Algorithm outline: The algorithm starts by initializing v to a random value. This v is used to compute the corresponding r_i and l_i with (4), which are then used to compute the associated v with (6)-(7). Repeating this recursively results in a fixed point of the NIMFA, which is a potential metastable equilibrium. Other potential equilibria can be found by re-initializing with another random v.
Additionally, we can obtain some useful properties of the relation between r_i and v_{i,n} by studying the following function.
Lemma 1: Consider the function f : [0, 1] → [0, 1] defined as
f(x) := [ Σ_{n=N/2+1}^{N} (β_1/β_n) (x/(1−x))^{n−1} ] / [ Σ_{n=1}^{N} (β_1/β_n) (x/(1−x))^{n−1} ]   (8)
for all x ∈ [0, 1), with f(1) = 1. Then f(x) is a monotonically increasing continuous function and takes the values f(0) = 0, f(0.5) = 0.5 and lim_{x→1} f(x) = 1.
Proof: We can easily verify that f(0) = 0/(1 + 0 + …) = 0. As β_n is assumed to be symmetric around N/2, i.e., β_1 = β_N, etc., we have f(0.5) = (Σ_{n=N/2+1}^{N} β_1/β_n) / (Σ_{n=1}^{N} β_1/β_n) = 0.5. In order to simplify the differentiations, we use the additional variables ξ = x/(1−x), p = β_1 + β_2 ξ + … + β_{N/2} ξ^{N/2−1} and q = β_1 ξ^N + β_2 ξ^{N−1} + … + β_{N/2} ξ^{N/2}, which gives f(x) = q/(p+q). The ratio q/(p+q) is continuous at all points except where ξ has a pole, i.e., at x = 1; therefore, if we show that f(x) is continuous at x = 1, f(x) is continuous on [0, 1]. For this, we use L'Hôpital's rule to calculate lim_{x→1} f(x), which is
lim_{ξ→∞} (β_1 ξ^N + β_2 ξ^{N−1} + … + β_{N/2} ξ^{N/2}) / (β_1 + β_2 ξ + … + β_N ξ^N).   (9)
Applying L'Hôpital's rule recursively N times, we get this limit to be 1. Therefore, we have shown that f(x) is continuous on [0, 1]. Next, we show that it is monotonic. We have
f' = (q'(p+q) − (p'+q')q) / (p+q)² = (q/p)' · p² / (p+q)²   (10)
where (·)' = d(·)/dξ. We see that dξ/dx ≥ 0. Therefore, if (q/p)' ≥ 0, then we have the monotonically increasing property. This can be verified by writing
(q/p)' = Σ_{n=N/2+1}^{N} β_n (ξ^{n−1}/p)'.   (11)
Each of the terms in this sum is positive, and so we have shown that the first derivative of f(x) with respect to x is positive.
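The fixed-point outline above can be written down directly. The sketch below is our own illustrative implementation of Eqs. (4), (6) and (7); the random trust matrix, the rates and the stopping tolerance are placeholder choices, not values from the paper.

```python
import numpy as np

def nimfa_fixed_point(A, beta, n_iter=500, tol=1e-10, seed=0):
    """Iterate v -> r_i (Eq. 4) -> v (Eqs. 6-7) until a fixed point of the
    NIMFA opinion dynamics is reached. A is the K x K trust matrix, beta the N rates."""
    rng = np.random.default_rng(seed)
    K, N = A.shape[0], len(beta)
    tau = A.sum(axis=1)                                # total trust per agent
    v = rng.dirichlet(np.ones(N), size=K)              # random initial distributions
    for _ in range(n_iter):
        r = A @ v[:, N // 2:].sum(axis=1)               # expected influence towards 1
        ratio = r / np.maximum(tau - r, 1e-12)
        powers = ratio[:, None] ** np.arange(N)         # (r_i/(tau_i - r_i))^(n-1)
        weights = (beta[0] / beta) * powers             # Eq. (6) up to the factor v_{i,1}
        v_new = weights / weights.sum(axis=1, keepdims=True)   # Eq. (7) normalization
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# Example: K = 50 agents, N = 4 opinion levels, symmetric rates.
K, beta = 50, np.array([1.0, 2.0, 2.0, 1.0])
A = np.random.default_rng(1).uniform(0, 1, (K, K))
np.fill_diagonal(A, 0.0)
v_star = nimfa_fixed_point(A, beta)
print(v_star[:3].round(3))   # stationary opinion distributions of the first agents
```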
We can use f ( ri τi-ri ) to calculate the probability that an agent i will take the action 1, i.e., 7) and [START_REF] Hegselmann | Opinion dynamics and bounded confidence models, analysis, and simulation[END_REF]. N n=N/2+1 v * i,n = f ( ri τi-ri ) from ( IV. COMPLETE GRAPH WITH IDENTICAL CONNECTIONS Let us first consider a simple graph structure in which each agent is identically influenced by all the others agents i.e., with a i,j = 1 for all i, j ∈ K and i = j, and a i,i = 0. We use ν n to denote the expected fraction of the population of agents with opinion θ n . This is simply ν n = E[ i∈K 1(X i = θ n )] K = i∈K v i,n K . Also recall that we introduced the following notation ν -:= N/2 n=1 ν n and ν + = N n=N/2+1 ν n . Under this specific graph structure, we have lim K→∞ r i τ i = 1 τ i j∈K a ij N n=N/2+1 v j,n = ν + - N n=N/2+1 v i,n K = ν + (12) for any i ∈ K. Therefore, using (3) and the fact that νn = i∈K vi,n K we can get the dynamics of νn , ∀n ∈ K. Moreover, following [START_REF] Van Mieghem | Virus spread in networks[END_REF], for large values of K we can approximate this dynamics as: ν1 K -1 = -β 1 ν 1 ν + + β 2 ν 2 ν -, νn K -1 = -β n ν n + β n+1 ν n+1 ν - +β n-1 ν n-1 ν + , ∀n ∈ {2, 3, . . . , N -1}, νN K -1 = -β N ν N ν -+ β N -1 ν N -1 ν + . ( 13 ) where ν + = N n=N/2+1 ν n and ν -= N/2 n=1 ν n . Theorem 1: When N ≥ 4, β n > 0 and β n = β N -n+1 for all n ∈ {1, 2, . . . , N }, the dynamics described in (13) has exactly three equilibrium points at (1, 0, . . . ), (0, . . . , 0, 1) and at a point with ν + = ν -= 0.5. The first two equilibrium points correspond to the absorbing states and are stable, while the third fixed point is an unstable equilibrium (not metastable). Proof: Let us notice that: • if ν 1 = 1 and ν n = 0 for all n > 1 one gets ν + = 0; • if ν N = 1 with ν n = 0 for all n < N one gets ν -= 0. Consequently is straightforward to verify that (1, 0, . . . ), (0, . . . , 0, 1) are equilibria of [START_REF] Trajanovski | Decentralized protection strategies against sis epidemics in networks[END_REF]. Another equilibrium point of ( 13) is obtained for ν n = ω βn , with 1 ω := N i=1 1 βi . Indeed, for this point one has ν + = ν -= 0.5 and consequently, the right hand side of (13) becomes ω 2 + ω 2 or -ω + ω 2 + ω 2 which are all 0. Next, we show that no other equilibrium exists. For this, we suppose by contradiction that there exists some 0 < ν + < 1, ν + = 0.5 such that ν + = f (ν + ). If such a ν + exists, then that corresponds to another equilibrium point. However, notice that ν + 1 -ν + = N i=N/2+1 β1 βi ν+ 1-ν+ i-1 N/2 i=1 β1 βi ν+ 1-ν+ i-1 (14) replacing ν+ 1-ν+ by ξ and accounting for β n = β N -n+1 (symmetry), we must have N/2 i=1 β 1 β n (ξ -1)ξ i-1 = 0 (15) for an equilibrium point. However, ξ = 1 corresponds to ν + = 0.5. As ξ > 0, the above equation is never satisfied unless β 1 = 0 (which also means β N = 0). Hence, by contradiction, we prove that the only three equilibrium points are those described. In order to study the stability of these equilibrium points, we look at the Jacobian and its eigenvalues. If we denote the Jacobian elements by J i,j , where J i,j = ∂ νi ∂ν j , then for all 1 < i ≤ N 2 , and for all N 2 < j < N , we have: J 1,1 = β 2 ν 2 -β 1 ν + , J i,i = β i+1 ν i+1 -β i , J j,j = β j-1 ν j-1 -β j , J N,N = β N -1 ν N -1 -β N ν -/ We also have ∀ 2 < i ≤ N/2, J 1,i = β 2 ν 2 , and ∀ N/2 < i ≤ N -2, J 1,i = -β 1 ν 1 , ∀ 2 < i ≤ N/2, J N,i = -β N ν N , ∀ N/2 < i ≤ N -2, J N,i = β N -1 ν N -1 . For all i, j ∈ {2, 3, . . . 
, N -1} such that |i-j| > 1 we have J i,j = β i+1 ν i+1 when j ≤ N/2 and J i,j = β i-1 ν i-1 when j > N/2. Next, we have J 1,2 = β 2 (ν 2 + ν -) and J N -1,N = β N -1 (ν N -1 + ν + ). For all i, j ∈ {2, 3, . . . , N -1} such that |i -j| = 1 we have J i,i+1 = β k ν k + β i+1 ν - where k = i + 1 if i + 1 ≤ N/2 and k = i -1 otherwise; and J i,i-1 = β k ν k + β i-1 ν + where k = i + 1 if i -1 ≤ N/2 and k = i -1 otherwise. At the absorbing state (1, 0, . . . , 0), we have the Jacobian evaluated to have diagonal elements to be 0, -β 2 , β 3 etc. Additionally, the Jacobian becomes an upper triangular matrix as ν n = 0 for all n > N/2. Therefore, the eigenvalues are the elements of the diagonal, which are non-positive and this corresponds to a stable equilibrium. By symmetry, we have the same result for the other absorbing state. When N/2 m=1 ν m = 0.5, we have that the fixed point corresponding to this distribution satisfies ν n β n = ω for some K > 0. Thus, the first column of the Jacobian can be written as (K - β 1 2 , ω + β 1 2 , ω, . . . , K, -K) T The columns j for 2 ≤ j ≤ N/2 are of the form (ω, . . . , ω + β j 2 , ω -β j , ω + β j 2 , . . . , ω, -ω) T where ω + βj 2 is the diagonal term of the Jacobian. The j-th column where N/2 < j ≤ N -1 is of the form (-ω, ω, . . . , ω + β j 2 , ω -β j , ω + β j 2 , . . . , ω) T Finally, the N -th column is given by (-ω, ω, . . . , ω, ω + β N 2 , ω - β N 2 ) T The above matrix is such that each column has exactly one element which is -ω either at the first row (after column index is more than N/2) or at the last row. If we subtract ω(N -2)I from this matrix, it's determinant becomes 0. This can be verified using the properties of determinants that adding a scalar times a row to another row does not change the determinant. We replace the first row with the sum of all rows and this results in the first row becoming all zeroes. This implies that one of the eigenvalues of the Jacobian is ω(N -2) which is positive. Therefore, the equilibrium at ν + = ν -= 0.5 is unstable. The previous theorem characterizes the behavior of the agents' opinion in all-to-all networks with identical connections. Basically, this result states that beside the two stable equilibria in which all the agents rich consensus we may have a metastable equilibria in which the opinions are symmetrically displaced with respect to 0.5. V. GENERIC INTERACTION NETWORKS A way to model generic interaction networks is to consider that they are the union of a number of clusters (see for instance [START_REF] Morȃrescu | Opinion dynamics with decaying confidence: Application to community detection in graphs[END_REF] for a cluster detection algorithm). Basically a cluster C is a group of agents in which the opinion of any agent in C is influenced more by the other agents in C, than agents outside C. When the interactions between cluster are deterministic and very weak, we can use a two time scale-modeling as in [START_REF] Martin | Time scale modeling for consensus in sparse directed networks with time-varying topologies[END_REF] to analyze the overall behavior of the network. In the stochastic framework and knowing only a quantized version of the opinions we propose here a development aiming at characterizing the majority actions in clusters. The notion of cluster can be mathematically formalized as follows. Definition 3 (Cluster): A subset of agents C ⊂ K defines a cluster when, for all i, j ∈ C and some λ > 0.5 the following inequality holds a ij ≥ λ τ i |C| . 
(16)
The maximum λ which satisfies this inequality for all i, j ∈ C is called the cluster coefficient. A natural question in this context is what sufficient conditions guarantee the preservation of actions in a cluster, i.e. that, regardless of external opinions, the agents preserve the majority action inside the cluster C for a long time. In the limit, all the agents will have identical opinions, corresponding to an absorbing state of the network, but clusters with a large enough λ may preserve their action in metastable states for a long time. For any given set of agents C ⊂ K, let us denote
ν^C_− = (1/|C|) Σ_{j∈C} Σ_{n=1}^{N/2} v_{j,n},   ν^C_+ = (1/|C|) Σ_{j∈C} Σ_{n=N/2+1}^{N} v_{j,n}.
These values represent the expected fraction of agents within the set C with action 0 and 1, respectively. We also denote by ν^C_n the average probability of agents in the cluster to have opinion θ_n, i.e., ν^C_n = (1/|C|) Σ_{i∈C} v_{i,n}. Now we can use the definition of a cluster given in (16) to obtain the following proposition.
Proposition 2: The dynamics of the average opinion probabilities in a cluster C ⊂ K can be written as:
ν̇^C_1 / κ = −β_1 ν^C_1 [λν^C_+ + (1−λ)δ] + β_2 ν^C_2 [λν^C_− + (1−λ)(1−δ)],
ν̇^C_n / κ = −β_n ν^C_n + β_{n+1} ν^C_{n+1} [λν^C_− + (1−λ)(1−δ)] + β_{n−1} ν^C_{n−1} [λν^C_+ + (1−λ)δ],
ν̇^C_N / κ = −β_N ν^C_N [λν^C_− + (1−λ)(1−δ)] + β_{N−1} ν^C_{N−1} [λν^C_+ + (1−λ)δ], (17)
for some δ ∈ [0, 1].
Proof: From (4), r_i / τ_i = Σ_{j∈K} (a_{ij}/τ_i) Σ_{n=N/2+1}^{N} v_{j,n} ≥ Σ_{j∈C} (a_{ij}/τ_i) Σ_{n=N/2+1}^{N} v_{j,n}, and hence, using (16),
r_i / τ_i ≥ (λ/|C|) Σ_{j∈C} Σ_{n=N/2+1}^{N} v_{j,n} ⇒ r_i / τ_i ≥ λν^C_+. (18)
Applying the same derivation for l_i, we get l_i / τ_i ≥ λν^C_−. Since l_i + r_i = τ_i, we always have λν^C_+ ≤ r_i / τ_i ≤ λν^C_+ + (1 − λ). Thus, we can always rewrite the dynamics of a single agent in the cluster as
v̇_{i,1} / τ_i = −β_1 v_{i,1} [λν^C_+ + (1−λ)δ_i] + β_2 v_{i,2} [λν^C_− + (1−λ)(1−δ_i)],
v̇_{i,n} / τ_i = −β_n v_{i,n} + β_{n+1} v_{i,n+1} [λν^C_− + (1−λ)(1−δ_i)] + β_{n−1} v_{i,n−1} [λν^C_+ + (1−λ)δ_i],
v̇_{i,N} / τ_i = −β_N v_{i,N} [λν^C_− + (1−λ)(1−δ_i)] + β_{N−1} v_{i,N−1} [λν^C_+ + (1−λ)δ_i], (19)
where δ_i ∈ [0, 1]. By taking the sum of each equation over the cluster and dividing by |C|, we get the averages. Each term on the right-hand side has a constant factor of the type (l_i + r_i) β_m v_{i,n} λν^C_+ and an additional perturbation term of the type (l_i + r_i) β_m v_{i,n} (1 − λ)δ_i. The addition of the first type of terms and division by |C| simply becomes κ β_m ν^C_n λν^C_+, with m being n − 1, n or n + 1. The averaging of the perturbation terms results in a value between 0 and κ β_m ν^C_n (1 − λ), as all δ_i are in [0, 1]. This can therefore be written as in (17) with a new δ ∈ [0, 1].
The result above shows that, instead of looking at the individual opinions of agents inside a cluster, we can use equation (17) for the dynamics of the expected fraction of agents in a cluster with certain opinions. Using Proposition 2 and Theorem 1, we can immediately obtain some interesting properties of a cluster.
Corollary 1: Let C be a cluster with coefficient λ → 1. Then the cluster has two metastable equilibria at ν^C = (1, 0, . . . , 0) and ν^C = (0, . . . , 0, 1), and one unstable equilibrium corresponding to ν^C_+ = 0.5.
This result holds because, as λ → 1, (17) simply becomes (13), and therefore the equilibrium points of (13) must correspond to those of (17), but with the fraction of the population being the fraction of agents within the cluster and not the whole graph. For example, consider that K = C_1 ∪ C_2 with C_1 ∩ C_2 = ∅ and |C_1|, |C_2| → ∞. Additionally, a_{ij} = 1 for all i, j ∈ C_1 and for all i, j ∈ C_2. Finally, we have sets A_1 ⊂ C_1 and A_2 ⊂ C_2 such that a_{ij} = a_{ji} = 0 for all i ∈ C_1 \ A_1 and j ∈ C_2 \ A_2, but a_{ij} = a_{ji} = 1 for all i ∈ A_1 and j ∈ A_2. If |A_1| < ∞ and |A_2| < ∞, then we have λ_1 → 1 and λ_2 → 1, which are the cluster coefficients of C_1 and C_2.
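Before continuing with this construction, note that the cluster coefficient of Definition 3 is straightforward to compute for a given trust matrix. The sketch below is only an illustration (the helper name cluster_coefficient and the toy graph are ours), and it assumes the inequality (16) is required for pairs i ≠ j only, since a_{ii} = 0.

import numpy as np

def cluster_coefficient(A, C):
    # Largest lambda such that a_ij >= lambda * tau_i / |C| for all i != j in C (Def. 3).
    C = list(C)
    tau = A.sum(axis=1)                                   # total trust tau_i of each agent
    return min(A[i, j] * len(C) / tau[i] for i in C for j in C if i != j)

K = 50
A = np.zeros((K, K))
C1, C2 = range(0, 25), range(25, 50)
for S in (C1, C2):                                        # all-to-all trust inside each cluster
    for i in S:
        for j in S:
            if i != j:
                A[i, j] = 1.0
A[0, 25:30] = A[25:30, 0] = 1.0                           # a few cross-cluster links through agent 0
print(cluster_coefficient(A, C1))                         # 25/29, approx. 0.86: agent 0 also trusts 5 outsiders
print(cluster_coefficient(A, C2))                         # 1.0: the binding members of C2 have a single outside link

In the two-cluster example above, the finite bridge sets A_1 and A_2 play the role of agent 0 here: as the clusters grow, their relative weight vanishes and both coefficients tend to 1.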
In this example, the graph is connected and so the only two absorbing states must be when all opinions are θ 1 or θ N as shown in proposition 1. Corollary 1 implies that if there are two clusters, each with its λ coefficient close to 1 but not 1, then each cluster can stay with contradicting actions in a metastable state. Such a state is not an absorbing state for the system, but can be held for an arbitrarily long duration. A. Action preservation We have shown that if a cluster has its coefficient λ → 1, then it can preserve its action as if it starts with all agents with opinion θ 1 or θ N , this state is a stable state and therefore the local action of the cluster is preserved regardless of the external opinions. However, one problem we want to investigate is if this property hold for other λ < 1. The following proposition provides a necessary condition for a cluster to preserve its action and not collapse to external perturbations. Proposition 3: A necessary condition for C with coefficient λ to preserve its action in a metastable state is if ∃x ∈ (0.5, 1) such that x = f (λx). If no such x exists, then the only equilibrium when the perturbation term δ = 1 from (17) is at ν C + = 1, and when δ = 0, the equilibrium is at ν C + = 0. Proof: The dynamics of the cluster average opinions follow equation (17). We look for equilibrium points for this dynamics under certain values of δ. To find the equilibrium points, we set the left hand side to 0 in (17). For a given δ, by definition of f (•), we obtain that if ν C + is the fraction of agents with action 1 in the cluster, then it must satisfy ν C + = f (λν C + + (1 -λ)δ) (20) Having δ = 0 implies that the agents that interact with the agents in the cluster from outside the cluster have all action 0. If ν C + > 0 is an equilibrium point even with δ = 0, this means that regardless of external opinion, the cluster can preserve an action 1. This is true because f (•) is a monotonic function and therefore f (λν C + + (1 -λ)δ) ≥ f (λν C + ) for all δ ≥ 0. From the properties of f (•) we know that it is monotonic and takes f (x) = x only when x = 0, .5, 1. Additionally, as f (x) < x when x → 0, and as f (•) is continuous, f (λx) < x for all x < 0.5 except at x = 0. However, x = 0 corresponds to the state ν + = 0 which means that this equilibrium is not preserved. If an x > 0.5 exists such that x = f (λx), then regardless of the actions outside C, i.e. δ, we will have ν C + ≥ x as a possible equilibrium. By studying the opposite case with the majority action inside the cluster being 0 and external opinion 1, we get 1 -x = f (λ(1 -x)) with 1 -x > 0.5, which means that the same condition holds for preserving the action 0 as well, to get ν C -≥ x. B. Propagation of actions In the previous subsection, we have seen that a cluster C can preserve its majority action regardless of external opinion if it has a sufficiently large λ. If there are agents outside C with some connections to agents in C, then this action can be propagated. Let τ C i = j∈C a ij denote the total trust of agent i in the cluster C. Let the cluster C be such that it has λ large enough so that νC + > 0.5 exists where νC + = f (λν C + ). Proposition 4: If the cluster C is preserving an action 1 with at least a fraction νC + of the population in C having action 1, then the probability of any agent i ∈ K \ C to chose action 1 at equilibrium is lower bounded as follows. Pr(Q i = 1) ≥ f νC + τ C i τ i . 
(21) Proof: We know that at equilibrium, we must have: N n=N/2+1 v i,n = f r i τ i (22) by definition of f (•), [START_REF] Morȃrescu | Opinion dynamics with decaying confidence: Application to community detection in graphs[END_REF] and [START_REF] Hegselmann | Opinion dynamics and bounded confidence models, analysis, and simulation[END_REF]. However, we can lower bound R i as r i ≥ j∈C a ij N n=N/2+1 v j,n (23) Since the cluster C has an action 1 by at least νC + of its population, we have N n=N/2+1 v j,n ≥ f λν C + (24) for any j ∈ C. As f (λν C + ) = νC + , we have r i ≥ νC + j∈C a ij (25) Therefore, r i ≥ νC + τ C i and so we have N n=N/2+1 v i,n ≥ f νC + τ C i τ i (26) VI. NUMERICAL RESULTS For all simulations studied, unless otherwise mentioned, we take Θ = {0.2, 0.4, 0.6, 0.8}, β 1 = β 4 = 0.01 per unit of time and β 2 = β 3 = 0.02 per unit of time. A. General graph First, we perform some simulations to validate our NIMFA model for the opinion dynamics. For this purpose, we construct a graph with K = 120 partitioned into two sets B 1 = {1, . . . , 40} and B 2 = {41, . . . , 120}. We take a ij = 1 if a connection exists and 0 otherwise. Connections between agents belonging to the same set (B 1 or B 2 ) are established randomly with a probability of 0.3 and connections from B 1 to B 2 and vice-versa are established with probability of 0.02. For one such randomly generated graph (K, A), we study the opinion dynamics both by simulating a continuous time Markov chain and by looking at the equilibrium points generated by the recursive search algorithm described in Section III. We always start with the initial state given by X i (0) = 0.2 for all i ∈ B 1 , X i (0) = 0.8 for all i ∈ B 2 . Setting this as the initial condition, the algorithm 1 gives us the population fraction of agents at equilibrium with action 1, i.e., ν + to be 0.693. To validate this, we run several simulations of the continuous time Markov process with the same initial state and graph structure, and plot the resulting K i=1 Q i (t). This plot is shown in Figure 1. We observe that on average over time, the population with action 1 in all simulations are close to the value obtained from the NIMFA model. Next we focus on the approximation done by NIMFA for individual agent opinions. Table II shows the estimated probability of an agent choosing action 1 for some selected agents. This estimate is computed in each simulation by averaging the action taken over a large time horizon during which the system is in the metastable state. We also observed that the system collapsed to an absorbing state 16 times when we did 1000 simulations. Therefore, we estimate P r(Q i = 1) by averaging over the 984 simulations for which the system stayed in the metastable state. Notice that, as a result of the starting conditions and the graph structure, most agents in B 1 have a high probability of choosing action 0 at the metastable state (equilibrium of the NIMFA), while most agents in B 2 chose action 1. Some agents like 9 and 10 have trust in some agents of B 1 as well as B 2 . Consequently, these agents constantly shift their opinions resulting in a more random behavior for their actions. B. Graph with clusters For the next set of simulations, we take another graph structure, but with the same K as indicated in Figure 2. We first randomly make links between any i, j ∈ K with a probability of 0.05. When such a link exists, a ij = 1 and a ij = 0 otherwise. Then we construct a cluster C 1 , with the agents i = 1, 2, . . . 
, 40, and C_2 with agents i = 81, 82, . . . , 120. We also label agents 40 < i ≤ 60 as set B_1 and 60 < j ≤ 80 as set B_2. To provide the relevant cluster structure, we set the edge weights a_{ij} = 1 for all
• i, j ∈ C_1 or i, j ∈ C_2, making C_1 and C_2 clusters with coefficients λ_1 = 0.833 and λ_2 = 0.816 (for the particular random graph generated for this simulation);
• 40 < i ≤ 60 and 1 ≤ j ≤ 20, making agents in B_1 trust C_1 with τ^{C_1}_i / τ_i ≥ 0.714 for all 40 < i ≤ 60;
• 60 < i ≤ 80 and 1 ≤ j ≤ 20 or 80 < j ≤ 120, making agents in B_2 trust both C_1 and C_2, with τ^{C_1}_i / τ_i, τ^{C_2}_i / τ_i ≥ 0.444 for all 60 < i ≤ 80.
We find that the largest x satisfying x = f(λ_1 x) is 0.95 and that satisfying x = f(λ_2 x) is 0.94. Therefore, if all agents in C_1 start with opinion 0.2 and all agents in C_2 start with opinion 0.8, we predict from Proposition 3 that ν^{C_1}_− ≥ 0.95 and ν^{C_2}_+ ≥ 0.94 in the metastable state. Additionally, applying Proposition 4 yields ν^{B_1}_− ≥ f(0.95 × 0.714) = 0.85, ν^{B_2}_− ≥ f(0.95 × 0.444) = 0.324 and ν^{B_2}_+ ≥ f(0.94 × 0.444) = 0.315. Simulations of the continuous-time Markov chain show that our theoretical results are valid even when the cluster size is 40. Figure 3 plots the population fraction of agents with action 1 within a given set for one simulation. We look at this value in the clusters C_1 and C_2 as well as in the sets B_1 and B_2. C_1 and C_2 are seen to preserve their actions, which are opposite to each other. Since B_1 has significant trust in C_1 alone, the opinion of C_1 is propagated to B_1. However, as B_2 trusts both C_1 and C_2, its opinion is influenced by the two contradicting actions, resulting in some agents with action 1 and the rest with action 0. We repeat this for several simulations with the same graph structure and initial opinions to validate our results. Table III compares the predictions based on Propositions 3 and 4 with the values obtained in three of our simulations.
VII. CONCLUSION
In this paper, we have proposed a stochastic multi-agent opinion dynamics model with binary actions. Agents interact through a network and individual opinions are influenced by the neighbors' actions. Our analysis, based on a Markov model of opinion dynamics, shows a consensus-like limiting behavior for a finite number of agents, whereas, when this number becomes large enough, the stochastic system can enter a quasi-stationary regime in which partial agreements are reached. This type of phenomenon has been observed in all-to-all and cluster-type topologies. So far, we have studied the dynamics of opinion without any external control of the network. In the future, we will extend this by accounting for an external entity that has a preferred action (a company selling a product, for example) and tries to control the actions of the users in a social network by controlling the opinions of a certain subset of agents. This can be interpreted as a company advertising its product in order for the other agents in the social network to choose its product over that of a rival company.
Fig. 1: Simulation of Σ_{i=1}^{K} Q_i(t) compared to the NIMFA metastable state. The NIMFA works remarkably well even for K = 120.
Fig. 2: Structure of the graph (cluster C_1 with |C_1| = 40, set B_1 with |B_1| = 20, cluster C_2 with |C_2| = 40, set B_2 with |B_2| = 20). Any two agents in K may be connected with a 0.05 probability. All agents within a cluster are connected, and the arrows indicate directed connections.
Fig. 3: Simulation of Σ_{i∈S} Q_i(t) for S = C_1, C_2, B_1, B_2. We see that C_1 and C_2 preserve their initial actions as given by Proposition 3. We also see that, as B_1 follows only C_1, its action is close to that of C_1. As B_2 follows both C_1 and C_2, which have contradicting actions, it has a very mixed opinion which keeps changing randomly in time.
TABLE II: Pr(Q_i = 1) at equilibrium for some selected agents. Pr(Q_i = 1) values for simulation realizations 1, 2 and 3 are calculated by time averaging over t = 1000 to t = 3000 hours, and the simulation average is taken over 984 realizations.
TABLE III: ν^S_+ at equilibrium for the indicated set S. Simulation values are calculated by time averaging over t = 100 to t = 1000 hours.
Set S | from Propositions | Simulation 1 | Simulation 2 | Simulation 3
C_1   | ≤ 0.05            | 0.002        | 0.004        | 0.002
C_2   | ≥ 0.94            | 0.994        | 0.995        | 0.995
B_1   | ≤ 0.15            | 0.008        | 0.006        | 0.005
B_2   | ≥ 0.315, ≤ 0.685  | 0.476        | 0.471        | 0.483
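As a complement to the numerical results above, the following is a minimal sketch of a Gillespie-type simulation of the continuous-time Markov process of Section II, of the kind used to produce the reported trajectories. It is our own illustrative code, not the authors' implementation; the function name simulate_ctmc, the time horizon and the small all-to-all test case are assumptions.

import numpy as np

def simulate_ctmc(A, beta, x0, t_max, rng):
    # Gillespie simulation of the quantised opinion process; x0 holds integer levels
    # 0..N-1, where level n corresponds to opinion theta_{n+1}.
    N, K = len(beta), len(x0)
    x, t = x0.copy(), 0.0
    history = [(0.0, x.copy())]
    while t < t_max:
        q = (x >= N // 2).astype(float)                 # actions Q_i
        R = A @ q                                       # trust-weighted action-1 neighbours
        L = A @ (1.0 - q)
        up = beta[x] * R * (x < N - 1)                  # rate of moving one level up
        down = beta[x] * L * (x > 0)                    # rate of moving one level down
        rates = np.concatenate([up, down])
        total = rates.sum()
        if total == 0.0:                                # an absorbing state has been reached
            break
        t += rng.exponential(1.0 / total)
        k = rng.choice(2 * K, p=rates / total)
        x[k % K] += 1 if k < K else -1
        history.append((t, x.copy()))
    return history

rng = np.random.default_rng(1)
K, N = 30, 4
A = np.ones((K, K)) - np.eye(K)                         # all-to-all, identical connections
beta = np.array([0.01, 0.02, 0.02, 0.01])
x0 = np.array([0] * (K // 2) + [N - 1] * (K // 2))      # half at theta_1, half at theta_N
t_end, x_end = simulate_ctmc(A, beta, x0, t_max=2000.0, rng=rng)[-1]
print(t_end, np.mean(x_end >= N // 2))                  # fraction with action 1 at the end of the run

With a longer time horizon such an all-to-all run eventually collapses into one of the two absorbing states, whereas structured graphs like the one in Fig. 2 can remain in the metastable regime for a long time.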
42,606
[ "5857", "5210", "753572" ]
[ "185180", "185180", "100376" ]
01756843
en
[ "spi" ]
2024/03/05 22:32:10
2016
https://hal.science/hal-01756843/file/CEFC2016_Mohamodhosen.pdf
Bilquis Mohamodhosen email: bilquis.mohamodhosen@ec-lille.fr Frédéric Gillon Abdelmounaïm Tounzi Loïc Chevallier Topology Optimisation of a 3D Electromagnetic Device using the SIMP Density-Based Method Keywords: gradient-based algorithm, relaxation of variables, SIMP density method, weighted objective sum The presented paper proposes a topology optimisation methodology based on the density-based method SIMP, and applied to a numerical example to validate the former. The approach and methodology are detailed, and the results for a 3D basic electromagnetic example are presented. The non-linear B(H) curve is also taken into account. I. INTRODUCTION Topology Optimisation (TO) arouses growing interest in the electromagnetic community as it proposes a new way of tackling the general engineering problem: finding the most optimal design to maximize performance and minimize cost. TO offers the freewill of finding the best design without any layout a priori, given appropriate formulation of optimization objectives and constraints to be consistent with the problem. Various authors have investigated the topic, presenting different methods to deal with it, but our main interest in this paper will be focused on the SIMP (density-based) method [START_REF] Rozvany | Generalized shape optimization without homogenization[END_REF] due to its ease of application and reproducibility. However, this method presents some weaknesses related to the unavoidable relaxation of variables (discrete to continuous) while using a gradient-based algorithm, giving rise to undesired intermediate variables. This paper presents a methodology devised to solve this problem, and a numerical example using Finite Element (FE) Analysis for validation. II. PROBLEM FORMULATION In electromagnetic applications, it is usually desired to maximise an objective function such as force or energy, while using the least Material Quantity (MQ) in the design. The structure is modelled by FE, and calculations are done by a laboratory developed calculation code (code_Carmel) coupled with gradient-based fmincon SQP algorithm via a laboratory developed optimisation platform (Sophemis). This has the advantage of outputting an electromagnetically coherent design at each iteration, and hence increasing the probability of obtaining the best 'feasible' design. In this paper, it is desired to produce the best structure made of iron and air as materials, represented by n number of variables ρ varying from 0 to 1, where 0 is air and 1 is iron. To prevent existence of undefined intermediate materials (e.g. at 0.5) in the final design, we propose to add a Feasibility Factor (FF) to force the values to 0 and 1. The optimisation problem is defined as in equation 1. (1) The main objective g(ρ) is maximised, and can also be written as the minimisation of the negative value of g(ρ). The coefficient α is used to weigh the prominence of FF w.r.t the main objective in the problem. If α is less than 0.5, FF will have a reduced, yet substantial importance in problem solving as compared to g(ρ). MQ is constrained to β, where β is the maximum amount of iron allowed in the design. III. NUMERICAL EXAMPLE A basic 3D numerical example (fig. 1) is used to validate the methodology. The initial domain used is a cube meshed into 4096 hexahedral elements (fig. 1a) where a Magnetic Potential Difference is applied to the nodes highlighted with white dots. The number of variables used is 64, with therefore 64 hexahedra per variable (green outlined cube). 
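The display for equation (1) did not survive text extraction, so the following is only a hedged sketch of the weighted formulation described above. The feasibility factor is taken here as the normalised sum of ρ(1 − ρ) terms (one common way to penalise intermediate densities), the energy evaluation is a dummy stand-in for the FE computation performed by code_Carmel, scipy's SLSQP stands in for fmincon SQP, and the exact weighting convention between g(ρ) and FF in (1) is assumed.

import numpy as np
from scipy.optimize import minimize

n = 64            # one density variable per group of 64 hexahedra
alpha = 0.25      # weight of the feasibility factor FF
beta_mq = 0.75    # maximum material quantity (fraction of iron allowed)

def energy(rho):
    # Dummy stand-in for the FE magnetic-energy evaluation (code_Carmel in the paper).
    w = np.linspace(1.0, 2.0, n)
    return float(w @ rho) / n

def feasibility_factor(rho):
    # Assumed form: 0 when every rho is 0 or 1, maximal (= 1) when all rho = 0.5.
    return float(np.mean(4.0 * rho * (1.0 - rho)))

def objective(rho):
    # Minimise the negative main objective plus the weighted feasibility factor.
    return -(1.0 - alpha) * energy(rho) + alpha * feasibility_factor(rho)

constraints = [{"type": "ineq", "fun": lambda rho: beta_mq - np.mean(rho)}]   # MQ <= beta
result = minimize(objective, x0=np.full(n, 0.5), method="SLSQP",
                  bounds=[(0.0, 1.0)] * n, constraints=constraints)
print(np.mean(result.x), np.round(result.x[:8], 2))

In the actual workflow, the density field would be sent to the FE solver at every evaluation, so each iteration returns an electromagnetically coherent design, as stated above.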
The aim is to find the optimal design that maximises the energy (hence minimises the negative value of the energy), with the MQ constrained to 75%. The weighting α is set to 0.25. The final design is given in Fig. 1b, where the white zones represent air and the black zones represent iron; the magnetic flux density B is given in Fig. 1c. Optimisations are carried out under both a linear and a non-linear B(H) behaviour assumption with the same initial parameters. Both yield the same topology, with a calculation time of 5 min and a maximum B of 3.5 T for the linear case, and 20 min with a maximum B of 1.3 T for the non-linear case. In the extended paper version, more details will be given on the application of the methodology to optimise the topology of an electromagnet as in [START_REF] Okamoto | Improvements in Material-Density-Based Topology Optimization for 3-D Magnetic Circuit Design by FEM and Sequential Linear Programming Method[END_REF].
Fig. 1. (a) FE Model, (b) Final design, (c) Magnetic Flux Density, B(T)
4,551
[ "1030077", "930485", "14540", "740251" ]
[ "120930", "13338", "13338", "120930", "13338", "92973", "13338", "92973", "544873", "13338", "92973" ]
01756849
en
[ "sde" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756849/file/Ols%20et%20al.%20GLOPLACHA.pdf
Clémentine Ols email: clementine.ols88@gmail.com Valérie Trouet Martin P Girardin Annika Hofgaard Yves Bergeron Igor Drobyshev Post-1980 shifts in the sensitivity of boreal tree growth to North Atlantic Ocean dynamics and seasonal climate Tree growth responses to North Atlantic Ocean dynamics Keywords: Climate change, Dendrochronology, Climate-growth interactions, Response functions, Teleconnections, Arctic amplification  A significant boreal tree growth response to oceanic and atmospheric indices emerged during the 1980s.  This was particularly observed in western and central boreal Quebec and in central and northern boreal Sweden.  The post-1980 sensitivity to large-scale indices synchronized with changes in tree growth responses to local climate.  Future large-scale dynamics may impact forest growth and carbon sequestration to a greater extent than previously thought. Introduction Terrestrial biomes on both sides of the North Atlantic Ocean are strongly influenced by Arctic and Atlantic oceanic and atmospheric dynamics [START_REF] D'arrigo | NAO and sea surface temperature signatures in tree-ring records from the North Atlantic sector[END_REF][START_REF] Ottersen | Ecological effects of the North Atlantic Oscillation[END_REF][START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. Direction du patrimoine écologique et du développement durable[END_REF]. Some mid-20 th century changes in the dynamics of the North Atlantic Ocean have been considered as early signs of tipping points in the Earth climate system [START_REF] Lenton | Tipping elements in the Earth's climate system[END_REF][START_REF] Lenton | Early warning of climate tipping points[END_REF]. The Atlantic Meridional Overturning Circulation (AMOC) exhibited an exceptional slow-down in the 1970s [START_REF] Rahmstorf | Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation[END_REF]. The cause of this slow-down is still under debate, but possible explanations include the weakening of the vertical structure of surface waters through the discharge of low-salinity fresh water into the North Atlantic Ocean, due to the disintegration of the Greenland ice sheet and the melting of Canadian Arctic glaciers. A further weakening of the AMOC may possibly lead to a wide-spread cooling and decrease in precipitation in the North Atlantic region [START_REF] Sgubin | Abrupt cooling over the North Atlantic in modern climate models[END_REF], subsequently lowering the productivity of land vegetation both over northeastern North America and northern Europe [START_REF] Zickfeld | Carbon-cycle feedbacks of changes in the Atlantic meridional overturning circulation under future atmospheric CO2[END_REF][START_REF] Jackson | Global and European climate impacts of a slowdown of the AMOC in a high resolution GCM[END_REF]. Despite increasing research efforts in monitoring climate-change impacts on ecosystems, effects of late 20 th century changes in North Atlantic Ocean dynamics on mid-to high-latitude terrestrial ecosystems remain poorly understood. 
The dynamics of North Atlantic oceanic and atmospheric circulation, as measured through the AMOC, North Atlantic Oscillation (NAO) and Arctic Oscillation (AO) indices, strongly influence climate variability in northeastern North America (NA) and northern Europe (NE) [START_REF] Hurrell | Decadal trends in the north atlantic oscillation: regional temperatures and precipitation[END_REF][START_REF] Baldwin | Propagation of the Arctic Oscillation from the stratosphere to the troposphere[END_REF][START_REF] Wettstein | The influence of the North Atlantic-Arctic Oscillation on mean, variance, and extremes of temperature in the Northeastern United States and Canada[END_REF]. NAO and AO indices integrate differences in sea-level pressure between the Iceland Low and the Azores High [START_REF] Walker | Correlation in seasonal variation of weather. IX. A further study of world weather[END_REF], with high indices representative of increased west-east air circulation over the North Atlantic. Variability in AMOC, NAO and AO indices affects climate dynamics, both in terms of temperatures and precipitation regimes. Periods of high winter NAO and AO indices are associated with below-average temperatures and more sea ice in NA and a warmer-and wetter-than-average climate in NE. Periods of low winter NAO and AO indices are, in turn, associated with above-average temperatures and less sea ice in NA and a colder-and dryer-than-average climate in NE [START_REF] Wallace | Teleconnections in the geopotential height field during the Northern Hemisphere Winter[END_REF][START_REF] Hellström | The influence of the North Atlantic Oscillation on the regional temperature variability in Sweden: spatial and temporal variations[END_REF]. Low AMOC indices induce a wide-spread cooling and decrease of precipitation across the high latitudes of the North Atlantic region [START_REF] Jackson | Global and European climate impacts of a slowdown of the AMOC in a high resolution GCM[END_REF]. Boreal forests cover most of mid-and high-latitude terrestrial regions of NA and NE and play an important role in terrestrial carbon sequestration and land-atmosphere energy exchange [START_REF] Betts | Offset of the potential carbon sink from boreal forestation by decreases in surface albedo[END_REF][START_REF] Bala | Combined climate and carbon-cycle effects of large-scale deforestation[END_REF][START_REF] De Wit | Climate warming feedback from mountain birch forest expansion: Reduced albedo dominates carbon uptake[END_REF]. Boreal forests are sensitive to climate change [START_REF] Gauthier | Boreal forest health and global change[END_REF]. Despite general warming and lengthening of the growing season at mid-and high-latitudes [START_REF] Karlsen | Growing-season trends in Fennoscandia 1982-2006, determined from satellite and phenology data[END_REF]IPCC, 2014), tree growth in many boreal regions lost its positive response to rising temperatures during the late-20 th century [START_REF] Briffa | Reduced sensitivity of recent tree-growth to temperature at high northern latitudes[END_REF]). An increasing dependence on soil moisture in the face of the rapid rise in summer temperatures may counterbalance potential positive effects on boreal forest growth of increased atmospheric CO2 concentrations [START_REF] Girardin | No growth stimulation of Canada's boreal forest under half-century of combined warming and CO2 fertilization[END_REF]. 
During the late 20 th century, large-scale growth declines [START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. Direction du patrimoine écologique et du développement durable[END_REF] and more frequent low growth anomalies [START_REF] Ols | Previous growing season climate controls the occurrence of black spruce growth anomalies in boreal forests of Eastern Canada[END_REF]-in comparison with the early 20 th century-have been reported for pristine boreal spruce forests of NA. In coastal NE, climatic changes over the 20 th century have triggered shifts from negative significant to non-significant spruce responses to winter precipitation [START_REF] Solberg | Shifts in radial growth responses of coastal Picea abies induced by climatic change during the 20th century, central Norway[END_REF]. Annual variability in boreal forest tree growth patterns have shown sensitivity to sea ice conditions [START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. Direction du patrimoine écologique et du développement durable[END_REF][START_REF] Drobyshev | Atlantic SSTs control regime shifts in forest fire activity of Northern Scandinavia[END_REF] and variability in SSTs [START_REF] Lindholm | Growth indices of North European Scots pine record the seasonal North Atlantic Oscillation[END_REF]. All changes in boreal tree growth patterns and climate-growth interactions listed above may be driven by the dynamics of the North Atlantic Ocean. Understanding current and projected future impacts of North Atlantic Ocean dynamics on boreal forest ecosystems and their carbon sequestration capacity calls for a deeper spatiotemporal analysis of tree growth sensitivity to large-scale oceanic and atmospheric dynamics. The present study investigates tree growth responses to changes in North Atlantic Ocean dynamics of two widely distributed tree species in the boreal forests of northeastern North America (black spruce) and northern Europe (Norway spruce). We investigated treegrowth sensitivity to seasonal large-scale indices (AMOC, NAO; AO) and seasonal climate (temperature and precipitation) over the second half of the 20 th century. We hypothesize that shifts in tree growth sensitivity to large-scale indices and local climate are linked to major changes in North Atlantic Ocean dynamics. This study aims to answer two questions: (i) has boreal tree growth shown sensitivity to North-Atlantic Ocean dynamics? and (ii) does tree growth sensitivity to such dynamics vary through space and time, both within and across NA and NE? 2 Material and methods Study areas We studied two boreal forest dominated areas under the influence of large-scale atmospheric circulation patterns originating in the North Atlantic: the northern boreal biome of the Canadian province of Quebec (50°N-52°N, 58°W-82°W) in NA and the boreal biome of Sweden (59°N-68°N, 12°E-24°E) in NE (Fig. 1a). The selection of the study areas was based on the availability of accurate annually-resolved tree growth measurements acquired from forest inventories. 
In northern boreal Quebec, mean annual temperature increases from north to south (-5 to 0.8 °C) and total annual precipitation increases from west to east (550 to 1300 mm), mainly due to winter moisture advection from the North Atlantic Ocean [START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. Direction du patrimoine écologique et du développement durable[END_REF]. In boreal Sweden, annual mean temperature increases from north to south (-2 to 6 °C) and annual total precipitation decreases from west to east (900 to 500 mm), mostly because of winter moisture advection from the North Atlantic Ocean that condenses and precipitates over the Scandinavian mountains in the west (Sveriges meteorologiska och hydrologiska institut (SMHI), 2016). The topography in northern boreal Quebec reveals a gradient from low plains in the west (200-350 m above sea level [a.s.l.]) to hills in the east (400-800 m a.s.l.). In boreal Sweden, the topography varies from high mountains (1500-2000 m a.s.l.) in the west to low lands (50-200 m a.s.l.) in the east along the Baltic Sea. However, mountainous coniferous forests are only found up to ca. 400 m a.s.l. in the north (68°N) and ca. 800 m a.s.l. in the south (61°N). Tree growth data We studied tree growth patterns of the most common and widely distributed spruce species in each study area: black spruce (Picea mariana (Mill.) Britton) in Quebec and Norway spruce (P. abies (L.) H. Karst) in Sweden. A total of 6,876 and 14,438 tree-ring width series were retrieved from the Quebec (Ministère des Ressources naturelles du Québec, 2014) and Swedish forest inventory database (Riksskogstaxeringen, 2016), respectively. We adapted data selection procedures to each database to provide as high local coherence in growth patterns as possible. For Quebec, core series were collected from dominant trees on permanent plots (three trees per plot, four cores per tree) between 2007 and 2014. Permanent plots were situated in unmanaged old-growth black spruce forests north of the northern limit for timber exploitation. Core series were aggregated into individual tree series using a robust bi-weighted mean (robust average unaffected by outliers, [START_REF] Affymetrix | Statistical Algorithms Description Document[END_REF]. To enhance growth coherence at the local level, we further selected tree series presenting strong correlation (r > 0.4) with their respective local landscape unit master chronology. This master chronology corresponds to the average of all other tree series within the same landscape unit (landscape units are 6341 km 2 on average and delimit a territory characterized by specific bioclimatic and physiographic factors [START_REF] Robitaille | Paysages régionaux du Québec méridional[END_REF]). This resulted in the selection of 790 tree series that were averaged at the plot level using a robust bi-weighted mean. The obtained 444 plot chronologies had a common period of 1885-2006 (Table 1). Plot chronologies were detrended using a log transformation and a 32-year spline de-trending, and pre-whitened using autocorrelation removal [START_REF] Cook | The smoothing spline: an approach to standardizing forest interior tree-ring width series for dendroclimatic studies Tree-Ring[END_REF]. 
Detrending aims at removing the low-frequency age-linked variability in tree-ring series (decreasing tree-ring width with increasing age) while keeping most of the high-frequency variability (mainly linked to climate). Pre-whitening removes all but the high-frequency variation in the series by fitting an autoregressive model to the detrended series. The order of the auto-regressive model was selected by the Akaike Information Criterion [START_REF] Akaike | A new look at the statistical model identification[END_REF]. For Sweden, core series were collected within the boreal zone of the country (59°N-68°N) on temporary plots between 1983 and 2010. Temporary plots were situated in productive forests, i.e. those with an annual timber production of at least 1 m3/ha. These forests encompass protected, semi-natural and managed forests. In each plot, one to three trees were sampled, with two cores per tree. Swedish inventory procedures do not include any visual or statistical cross-dating of core series at the plot level. To filter out misdated series, we aggregated core series into 4067 plot chronologies using a robust bi-weighted mean, and compared them to Norway spruce reference chronologies from the International Tree-Ring Data Base (International Tree Ring Data Bank (ITRDB), 2016). In total, seven ITRDB reference chronologies were selected (Fig. 1b), all representative of tree growth at mesic sites in boreal Sweden. Plot and reference chronologies were detrended and pre-whitened using the same standard procedures used for the Quebec data. Each plot chronology was then compared with its geographically nearest reference chronology -determined based on Euclidean distance -using Student's t-test analysis [START_REF] Student | The probable error of the mean[END_REF]. Plot chronologies with a t-test value lower than 2.5 against their respective nearest reference chronology were removed from further analyses (the t-test value threshold was set according to the mean length of plot chronologies (Table 1)). A total of 1256 plot chronologies passed this quality test (their common period is reported in Table 1). Spatial aggregation of plot chronologies into regional chronologies in each study area Quality-checked chronologies at the plot level were aggregated into 1° x 1° latitude-longitude grid cell chronologies within each study area (Fig. 1b). Grid cell chronologies were calculated as the robust bi-weighted mean of all plot chronologies within each grid cell. Grid cells containing fewer than three plot chronologies were removed from further analyses. This resulted in a total of 36 and 56 grid cell chronologies in Quebec and Sweden, respectively (Fig. 1b, Table 1). Grid cells contained on average 12 and 23 plot chronologies in Quebec and Sweden, respectively (Table 1). To investigate the influence of spatial scale in climate-growth sensitivity analyses, we performed an ordination of grid cell chronologies within each study area over their common period (Fig. 1c). The common period between grid cell chronologies was 1885-2006 and 1936-1995 in Quebec and Sweden, respectively. Ordination analyses were performed in R using Euclidean dissimilarity matrices (dist function) and Ward agglomeration (hclust function). Three main clusters were identified in each study area (Fig. 1c). Spatial extents of all clusters were consistent with well-defined bioclimatic regions, providing support to the data selection procedures.
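As a rough illustration of this ordination step (the study used R's dist and hclust; the Python translation below is ours, and the grid-cell data are replaced by a synthetic stand-in), pairwise Euclidean distances between grid-cell chronologies over their common period are fed to Ward agglomerative clustering and the tree is cut into three groups.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Stand-in data: 36 grid-cell chronologies (rows) over a 60-year common period (columns).
chronologies = rng.standard_normal((36, 60))
chronologies[:12] += 0.8 * rng.standard_normal(60)      # nudge the rows into three loose groups
chronologies[24:] += 0.8 * rng.standard_normal(60)

distances = pdist(chronologies, metric="euclidean")     # counterpart of R's dist()
tree = linkage(distances, method="ward")                # counterpart of hclust() with Ward agglomeration
clusters = fcluster(tree, t=3, criterion="maxclust")    # cut the dendrogram into three clusters
print(np.bincount(clusters)[1:])                        # number of grid cells per cluster

In the study itself, the rows would be the detrended, pre-whitened grid-cell chronologies, and the resulting groups correspond to the regional clusters shown in Fig. 1c.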
In Quebec, clusters identified in the West (Q_W) and the East (Q_E) corresponded well to the drier and wetter northern boreal region, respectively (Fig. 1b &c). In Sweden, the cluster identified in the South (S_S) corresponded to a combination of the nemo-boreal and southern boreal zones [START_REF] Moen | National Atlas of Norway. Vegetation[END_REF]. The Swedish central (S_C) and northern (S_N) clusters corresponded to the mid-boreal and northern boreal zones, respectively (Fig. 1b &c) [START_REF] Moen | National Atlas of Norway. Vegetation[END_REF]. Regional chronologies were built as the average of all grid cell chronologies within a cluster. In Sweden, inter-cluster correlations were all significant and ranged from 0.77 (S_S vs S_N) to 0.94 (S_C vs S_N). In Quebec, inter-cluster correlations were all significant and ranged from 0.44 (Q_W vs Q_E) to 0.52 (Q_C vs Q_E) (see Appendix S1-S3 in Supporting Information). Henceforward, the terms 'local level' and 'regional level' refer to analyses focusing on the grid cell chronologies and the six regional chronologies, respectively. ⋇ (swed011, swed012, swed013, swed014, swed015, swed017 and swed312). The grey shading indicates the boreal zone delimitation according to [START_REF] Brant | An introduction to Canada's boreal zone: ecosystem processes,health, sustainability, and environmental issues[END_REF]. Climate data For each grid cell, we extracted local seasonal mean temperature and total precipitation data from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF], with seasons spanning from the previous (pJJA) through the current summer (JJA). Climate data were further aggregated at the regional level as the robust bi-weighted mean of climate data of all grid cells contained in each regional cluster (Fig. 1b &c). Seasonal AMOC indices (1961( -2005( , first AMOC measurements in 1961) ) were extracted from the European Center for Medium-Range Weather Forecast (Ocean Reanalysis System ORA-S3). Seasonal AO andNAO indices (1950-2008) were extracted from the Climate Prediction Center database (NOAA, 2016). Seasonal AMOC, NAO, and AO indices included previous summer, winter (DJF), and current summer. All seasonal climate data were downloaded using the KNMI Climate Explorer [START_REF] Trouet | KNMI Climate Explorer: A web-based research tool for high-resolution paleoclimatology[END_REF]) and were detrended using linear regression and thereafter pre-whitened (autocorrelation of order 1 removed from time series). Links between seasonal climate and growth patterns Analyses were run over the 1950-2008 period (the longest common period between tree growth and climate data), except with AMOC indices which were only available for 1961-2005. Tree growth patterns were correlated with seasonal climate variables (previous-tocurrent summer temperature averages and precipitation sums) and seasonal indices (previous summer, winter, and current summer AMOC, NAO, and AO) at the regional and local levels. To minimize type I errors, each correlation analysis was tested for 95% confidence intervals using 1000 bootstrap samples. In addition, moving correlation analyses (21-yr windows moved one year at a time) were performed at the regional level using the same procedures as above. All calculations were performed using the R package treeclim [START_REF] Zang | treeclim: an R package for the numerical calibration of proxyclimate relationships[END_REF]. 
For more details regarding bootstrapping procedures please see the description of the "dcc" function of this package. Results Tree growth responses to seasonal climate Some significant climate-growth associations were observed at the regional level (Fig. 2). Significant associations at the local level displayed strong spatial patterns and revealed heterogeneous within-region growth responses (Figs. 3 and4). Moving correlations revealed numerous shifts in the significance of climate-growth associations around 1980 (Fig. 5). Quebec No significant climate-growth associations were observed at the regional level in western boreal Quebec over the entire study period (Fig. 2). Some significant positive responses to previous winter and current spring temperatures were observed at the local level, but these concerned a minority of cells (Fig. 3). Moving correlations revealed that Q_W significantly correlated with previous summer precipitation (negatively) before the 1970s, with previous winter temperatures (positively) from the 1970s and with current spring temperatures (positively) from 1980 (Fig. 5). Tree growth in central boreal Quebec significantly and positively correlated with current summer temperatures at the regional and local levels (Figs. 2 and3). Numerous negative correlations between tree growth and spring precipitation were observed at the local level (Fig. 3). Moving correlations revealed an emerging correlation between Q_C and previous winter temperatures in the early 1970s (significant during most intervals up to most recent years) (Fig. 5). No significant climate-growth associations were observed in eastern boreal Quebec at the regional level (Fig. 2). At the local level, some positive significant correlations with current summer temperatures were observed (Fig. 3). Moving correlations revealed that Q_E correlated significantly and positively with current summer temperatures up to the early 1970s (Fig. 5). Significant correlations (P < 0.05) are marked with a star. Analyses were computed between grid cell chronologies and local seasonal climate data extracted for each grid cell from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF]. Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF); current spring (MAM) and current summer (JJA). To visualize separation between regional clusters (Q_W, Q_C, and Q_E, cf. Fig. 1) correlation values at Q_C grid cells are plotted with circles. Significant correlations (P < 0.05) are marked with a black dot. Sweden Tree growth in southern boreal Sweden correlated significantly and negatively with previous summer and winter temperatures at the regional and local levels, the correlation with winter temperatures concerning however only a minority of cells (Figs. 2 and4). Moving correlations indicated that the negative association with previous summer temperatures remained significant up to the early 1990s and that the negative association with winter temperatures emerged after 1980 (Fig. 5). In central boreal Sweden, tree growth significantly and negatively correlated with previous summer temperatures both at the regional and local levels (Figs. 2 and4). Some additional significant correlations with winter temperatures (negative) and with current summer temperatures (positive) were observed at the local level (Fig. 4). 
Moving correlation analyses revealed a significant positive correlation between S_C and current summer temperatures that dropped and became non-significant at the end of the study period (Fig. 5). In addition, the correlation between S_C and previous summer precipitation shifted from significantly negative to significantly positive during the 1980s (Fig. 5). S_C became significantly and negatively correlated with previous summer temperatures after the 1980s and stopped being significantly and negatively correlated with previous autumn precipitation and with winter temperatures at the end of the 1970s (Fig. 5). Tree growth in northern boreal Sweden correlated significantly with previous summer (negatively) and current summer temperatures (positively) both at the regional and local levels (Figs. 2 and4). At the local level, tree growth in some cells significantly and negatively correlated with winter temperatures (Fig. 4). Significant and negative responses to current summer precipitation were observed at northernmost cells (Fig. 4). Moving correlations revealed that the positive association with current summer temperatures was only significant at the beginning and at the end of the study period (Fig. 5). After the 1980s, significant positive associations with previous autumn temperatures emerged (Fig. 5) and the significant negative association with winter temperatures disappeared. Analyses were computed between grid cell chronologies and local seasonal climate data extracted for each grid cell from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF]. Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF); current spring (MAM) and current summer (JJA). To visualize the separation between regional clusters (S_S, S_C, and S_N, cf. Fig. 1) correlation values at S_C grid cells are plotted with circles. Significant correlations (P < 0.05) are marked with a black dot. extracted for each grid cell from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF] and then aggregated at the regional level by robust bi-weighted mean. Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF); current spring (MAM) and current summer (JJA). Moving correlations were calculated using 21-yr windows moved one year at a time and are plotted using the central year of each window. Windows of significant correlations (P < 0.05) are marked with a dot. Links between tree growth patterns and large-scale indices Some significant associations were found between tree growth and large-scale indices (Figs. 6, 7, and 8). Moving correlation analyses revealed some shifts from pre-1980 insignificant to post-1980 significant correlations (Fig. 9). The seasonal indices involved in these shifts varied across regional chronologies. Quebec Tree growth in western boreal Quebec was significantly and negatively associated with the winter AMOC and the winter AO indices at the regional level (Fig. 6). At the local level, these associations concerned, however, a minority of cells (Fig. 7). Moving correlations revealed that the regional negative association with winter AMOC was only significant in the most recent part of the study period (Fig. 9). 
Significant negative correlations between Q_W and current summer NAO and AO indices were observed from the 1980s up to the most recent years, at which point they show a steep increase and become non-significant (Fig. 9). In central boreal Quebec, no significant associations between tree growth and seasonal indices were identified at the regional or local level (Figs. 6 and7). Moving correlations indicated significant negative correlations with previous summer NAO and AO indices of during the 1970s, with winter NAO and AO indices during the 1980s and with current summer NAO and AO indices from the 1980s up to the most recent years (Fig. 9). No significant association was identified between large-scale indices and tree growth in eastern boreal Quebec (Figs. 6,7,and 9). are marked with a black dot. Sweden No significant association between tree growth in southern boreal Sweden and seasonal largescale indices was identified at the regional or local level (Figs. 6 and8). Moving correlations revealed, however, significant negative associations between S_S and the winter AMOC index before the 1980s (Fig. 9). In central boreal Sweden, tree growth significantly and positively correlated with the current summer NAO index at the regional level (Fig. 6). At the local level, this correlation concerned, however, a minority of cells (Fig. 8). Moving correlations revealed that the significant positive association with the current summer NAO index emerged in the early 1980s (Fig. 9) and that S_C significantly correlated with the current summer AMOC index during the 1980s (Fig. 9). In northern boreal Sweden, tree growth significantly correlated with the current summer NAO index (positively) and with the winter AO index (negatively) at the regional level (Fig. 6). At the local level, the positive association with summer NAO concerned a large majority of cells and the negative association with the winter AO index concerned only very few cells (Fig. 8). Moving correlation analyses indicated that the positive association between S_N and the current summer NAO index was only significant after the 1980s and that S_N significantly correlated with current summer AMOC during most of the 1980s (Fig. 9). Significant correlations (P < 0.05) are marked with a black dot. Spatial aggregation of tree growth data The high correlation between the regional chronologies in NE (Appendix S1), especially between the central and northern chronologies, could have supported the construction of one single boreal Sweden-wide regional chronology. Climate-growth analyses at the regional and local level revealed, nevertheless, clear differences across space in tree growth sensitivity to climate (Fig. 4) and to large-scale indices (Fig. 8), with a higher sensitivity in northernmost forests. The aggregation of tree growth data across space, even if based on objective similarity statistics (Appendix S1), may, therefore, mask important local differences in climate-growth interactions [START_REF] Macias | Growth variability of Scots pine (Pinus sylvestris) along a west-east gradient across northern Fennoscandia: A dendroclimatic approach[END_REF]. Our results demonstrate that spatial aggregation should not be performed without accounting for bioclimatic domains especially when studying climategrowth interactions. In practice, one should at least check that a spatial similarity in tree growth patterns is associated with spatial similarity in seasonal climate. 
The use of both the regional and local scales regarding climate-growth interactions, as in the present study, is, therefore, recommended to exhaustively and more precisely capture cross-scale diverging and emerging tree growth patterns and sensitivity to climate.

Post-1980 shifts towards significant influence of large-scale indices on boreal tree growth

The emergence of a post-1980 significant positive tree growth response to the current summer NAO index in central and northern boreal Sweden (Fig. 9) appears to be linked to spatial variability in the NAO influence on seasonal climate (Fig. 10). Summer NAO has had little to no influence on summer climate variability over the entire period 1950-2008 in boreal Quebec or Sweden (Appendix S4). However, the partitioning of the period into two sub-periods of similar length (1950-1980 and 1981-2008) revealed a northeastward migration of the significant-correspondence field between the summer NAO index and local climate, particularly in NE (Fig. 10). Over the 1981-2008 period, the summer NAO index was significantly and positively associated with temperature and negatively with precipitation in boreal Sweden (Fig. 10). Higher growing-season temperatures, induced by a higher summer NAO, might have promoted the growth of temperature-limited Swedish boreal forest ecosystems, explaining the recent positive response of tree growth to this large-scale index in the central and northern regions (Fig. 9). The northeastward migration of the NAO-climate spatial field may be an early sign of a northward migration of the North Atlantic Gulf stream [START_REF] Taylor | The North Atlantic Oscillation and the latitude of the Gulf Stream[END_REF] or a spatial reorganization of the Icelandic-low and Azores-high pressure NAO's nodes [START_REF] Portis | Seasonality of the North Atlantic Oscillation[END_REF][START_REF] Wassenburg | Reorganization of the North Atlantic Oscillation during early Holocene deglaciation[END_REF]. The August Northern Hemisphere Jet over NE reached its northernmost position in 1976 but thereafter moved southward, despite increasing variability in its position [START_REF] Trouet | Recent enhanced high-summer North Atlantic Jet variability emerges from three-century context[END_REF]. This southward migration of the jet may weaken the strength of the observed post-1980 positive association between boreal tree growth and the summer NAO index in NE in the coming decades.

The post-1980 significant negative associations between tree growth and summer NAO and AO indices in boreal Quebec are more challenging to interpret. There was no evident significant tree growth response to summer temperature in these regions when analyzed over the full 1950-2008 period (Fig. 4).
Yet, some significant positive associations between tree growth and temperatures were observed with winter temperatures from the 1970s (in central Quebec) and with spring temperatures from the 1980s (in western Quebec only) (Fig. 5). These associations indicate that tree growth in boreal Quebec has been limited by winter and spring climate since the 1970s and 1980s, respectively. Below-average summer temperatures induced by high summer NAO and AO may exacerbate the sensitivity of tree growth to low temperatures. Noting that no significant post-1980 association was observed between temperature and summer NAO and AO indices in Quebec (Fig. 10), the emerging negative tree growth response to summer NAO and AO indices may indicate a complex interplay between large-scale indices and air mass dynamics and lagged effects over several seasons [START_REF] Boucher | Decadal variations in eastern Canada's taiga wood biomass production forced by ocean-atmosphere interactions[END_REF]. In western Quebec, tree growth was negatively influenced by the winter AMOC index at the regional level (Fig. 6). This relationship appears to be linked to a significant positive association between tree growth and spring temperature (Figs. 5 and 9). Positive winter AMOC indices are generally associated with cold temperatures in Quebec, and particularly so in the West (Appendix S4). Positive winter AMOC indices are associated with the dominance of dry winter air masses of Arctic origin over Quebec, and may thereby delay the start of the growing season and reduce tree-growth potential. Forest dynamics in NA have been reported to correlate with Pacific Ocean indices such as the Pacific Decadal Oscillation (PDO) or the El-Nino Southern Oscillation (ENSO), particularly through their control upon fire activities (Macias Fauria & Johnson 2006, [START_REF] Goff | Historical fire regime shifts related to climate teleconnections in the Waswanipi area, central Quebec, Canada[END_REF]). These indices have not been investigated in the present study but might present some additional interesting features.

Contrasting climate-growth associations among boreal regions

Post-1980 shifts in tree growth sensitivity to seasonal climate differed among boreal regions. In NA, we observed the emergence of significantly positive growth responses to winter and spring temperatures. In NE, observed post-1980 shifts mainly concerned the significance of negative growth responses to previous summer and winter temperatures. Warmer temperatures at boreal latitudes have been reported to trigger contrasting growth responses to climate [START_REF] Wilmking | Recent climate warming forces contrasting growth responses of white spruce at treeline in Alaska through temperature thresholds[END_REF] and to enhance the control of site factors upon growth [START_REF] Nicklen | Local site conditions drive climate-growth responses of Picea mariana and Picea glauca in interior Alaska[END_REF]. This is particularly true with site factors influencing soil water retention, such as soil type, micro-topography, and vegetation cover [START_REF] Düthorn | Influence of micro-site conditions on tree-ring climate signals and trends in central and northern Sweden[END_REF]. Despite a generalized warming at high latitudes [START_REF] Serreze | The emergence of surface-based Arctic amplification[END_REF], no increased sensitivity of boreal tree growth to precipitation was identified in the present study, except in central Sweden where tree growth became positively and significantly correlated to previous summer precipitation (Fig. 5). This result underlines that temperature remains the major growth-limiting factor in our study regions. The observed differences in tree growth response to winter temperature highlight diverging non-growing season temperature constraints on boreal forest growth. While warmer winters appear to promote boreal tree growth in NA, they appear to constrain tree growth in boreal NE.
Such opposite responses to winter climate from two boreal tree species of the same genus might be linked to different winter conditions between Quebec and Sweden. In NA, winter conditions are more continental and harsher than in NE (Appendix S5). Warmer winters may therefore stimulate an earlier start of the growing season and increase growth potential [START_REF] Rossi | Lengthening of the duration of xylogenesis engenders disproportionate increases in xylem production[END_REF]. However, warmer winters, combined with a shallower snow-pack, have been shown to induce a delay in the spring tree growth onset, through lower thermal inertia and a slower transition from winter to spring [START_REF] Contosta | A longer vernal window: the role of winter coldness and snowpack in driving spring transitions and lags[END_REF]. This phenomenon might explain the negative association between tree growth and winter temperatures observed in NE. The post-1970s growth-promoting effects of winter and spring temperature in NA (Fig. 5) suggest, as earlier reported by [START_REF] Charney | Observed forest sensitivity to climate implies large changes in 21st century North American forest growth[END_REF] and [START_REF] Girardin | No growth stimulation of Canada's boreal forest under half-century of combined warming and CO2 fertilization[END_REF], that, under sufficient soil water availability and limited heat stress conditions, tree growth at mid- to high-latitudes can increase in the future. However, warmer winters may also negatively affect growth by triggering an earlier bud break and increasing risks of frost damages to developing buds [START_REF] Cannell | Climatic warming, spring budburst, and frost damage on trees[END_REF] or by postponing the start of the growing season (see above, [START_REF] Contosta | A longer vernal window: the role of winter coldness and snowpack in driving spring transitions and lags[END_REF]). This might provide an argument against a sustained growth-promoting effect of higher seasonal temperatures [START_REF] Gerardin | Une classification climatique du Québec à partir de modèles de distribution spatiale de données climatiques mensuelles : vers une définition des bioclimats du Québec. Direction du patrimoine écologique et du développement durable[END_REF].

Gradients in the sensitivity of tree growth to North Atlantic Ocean dynamics across boreal Quebec and Sweden

Trees in western and central boreal Quebec, despite being furthest away from the North Atlantic Ocean in comparison to trees in eastern boreal Quebec, were the most sensitive to oceanic and atmospheric dynamics, and particularly to current summer NAO and AO indices after the 1970s. In these two boreal regions, tree growth responses to large-scale indices were stronger and more spatially homogeneous than tree growth responses to regional climate. This suggests that growth dynamics in western and central boreal Quebec, despite being mainly temperature-limited, can be strongly governed by large-scale oceanic and atmospheric dynamics [START_REF] Boucher | Decadal variations in eastern Canada's taiga wood biomass production forced by ocean-atmosphere interactions[END_REF]. The tree growth sensitivity to the winter AMOC index observed at the regional level in western boreal Quebec might directly emerge from the correspondence between AMOC and winter snow fall. Western boreal Quebec is the driest and most fire-prone of the Quebec regions studied here. Soil water availability in this region strongly depends on winter precipitation.
High winter AMOC indices are associated with the dominance of Arctic air masses over NA and lead to decreased snowfall (Appendix S4). Large-scale indices, through their correlation with regional fire activity, can also possibly override the direct effects of climate on boreal forest dynamics [START_REF] Drobyshev | Environmental controls of the northern distribution limit of yellow birch in eastern Canada[END_REF][START_REF] Zhang | Stand history is more important than climate in controlling red maple (Acer rubrum L.) growth at its northern distribution limit in western Quebec, Canada[END_REF]. Fire activity in NA strongly correlates with variability in atmospheric circulation, with summer high-pressure anomalies promoting the drying of forest fuels and increasing fire hazard ([START_REF] Skinner | The Association Between Circulation Anomaliesin the Mid-Troposphere and Area Burnedby Wildland Fire in Canada[END_REF], Macias Fauria & Johnson 2006) and low-pressure anomalies bringing precipitation and decreasing fire activity. In Sweden, the northernmost forests were the most sensitive to North Atlantic Ocean dynamics, particularly to the summer NAO (Fig. 8). These high-latitude forests, considered to be 'Europe's last wilderness' [START_REF] Kuuluvainen | North Fennoscandian mountain forests: History, composition, disturbance dynamics and the unpredictable future[END_REF], are experiencing the fastest climate changes [START_REF] Hansen | Global surface temperature change[END_REF]. Numerous studies have highlighted a correspondence between tree growth and NAO (both winter and summer) across Sweden [START_REF] D'arrigo | NAO and sea surface temperature signatures in tree-ring records from the North Atlantic sector[END_REF][START_REF] Cullen | Multiproxy reconstructions of the North Atlantic Oscillation[END_REF][START_REF] Linderholm | Dendroclimatology in Fennoscandiafrom past accomplishments to future potential[END_REF], with possible shifts in the sign of this correspondence along north-south [START_REF] Lindholm | Growth indices of North European Scots pine record the seasonal North Atlantic Oscillation[END_REF] and west-east gradients [START_REF] Linderholm | Tree-ring records from central Fennoscandia: the relationship between tree growth and climate along a west-east transect[END_REF]. Our results identified a post-1980 positive correspondence between tree growth and summer NAO, spatially restricted to the northernmost regions (Figs. 8 and 9). This emerging correspondence appears linked to the combination of a growth-promoting effect of higher temperature at these latitudes (Fig. 5) and a northeastward migration of the spatial correspondence between NAO and local climate (Fig. 10). Boreal forests of Quebec (western and central) and Sweden (central and northern) emerged as regions sensitive to large-scale climate dynamics. We, therefore, consider them as suitable for a long-term survey of impacts of ocean-atmosphere dynamics on boreal forest ecosystems.

Fig. 1. a: Location of the two study areas (black frame); b & c: Clusters identified in each study area by ordination of 1° x 1° latitude-longitude grid cell chronologies. Ordination analyses were performed over the common period between grid cell chronologies in each study area using Euclidean dissimilarities matrices and Ward agglomeration methods. The common period was 1885-2006 for Quebec and 1936-1995 for Sweden. Ordinations included 36 and 56 grid cell chronologies in Quebec and Sweden, respectively. A western (Q_W), central (Q_C) and eastern (Q_E) cluster were identified in Quebec and a southern (S_S), central (S_C) and northern (S_N) cluster were identified in Sweden.
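The clustering step behind Fig. 1 (Euclidean dissimilarity between grid cell chronologies followed by Ward agglomeration) can be sketched as follows. The input matrix and the choice to cut the dendrogram into three clusters are illustrative assumptions, not the exact settings of the original ordination.

```python
# Minimal sketch: Ward clustering of grid-cell chronologies over a common period.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# Placeholder: 56 grid-cell chronologies (Sweden) over the 1936-1995 common period.
chronologies = rng.normal(size=(56, 60))

# Euclidean dissimilarity between cells, then Ward agglomeration.
dist = pdist(chronologies, metric="euclidean")
tree = linkage(dist, method="ward")

# Cut the dendrogram into three clusters (southern, central, northern in Fig. 1c).
labels = fcluster(tree, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])   # number of grid cells per cluster
```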
Reference chronologies

Fig. 2. Tree growth responses to seasonal temperature averages (a) and precipitation sums (b) at the regional level over the 1950-2008 period, as revealed by correlation analyses. Analyses were computed between the six regional chronologies (Q_W, Q_C, and Q_E in NA; and S_S, S_C and S_N in NE) and seasonal climate data. Climate data were first extracted from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF] for each grid cell and then aggregated at the regional level by a robust bi-weighted mean. Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF); current spring (MAM) and current summer (JJA).

Fig. 3. Tree growth responses to seasonal temperature averages (a) and precipitation sums (b) at the local level over the 1950-2008 period in Quebec, as revealed by correlation analyses.

Fig. 4. Tree growth responses to seasonal temperature averages (a) and precipitation sums (b) at the local level over the 1950-2008 period in Sweden, as revealed by correlation analyses. Analyses were computed between grid cell chronologies and local seasonal climate data extracted for each grid cell from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF]. Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF); current spring (MAM) and current summer (JJA). To visualize the separation between regional clusters (S_S, S_C, and S_N, cf. Fig. 1), correlation values at S_C grid cells are plotted with circles. Significant correlations (P < 0.05) are marked with a black dot.

Fig. 5. Moving correlations between regional seasonal temperature averages (red lines) and precipitation sums (blue lines), and the six regional chronologies (Q_W, Q_C, and Q_E in NA; and S_S, S_C and S_N in NE) over the 1950-2008 period. Climate data were first extracted for each grid cell from the CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF] and then aggregated at the regional level by a robust bi-weighted mean. Seasons included previous summer (pJJA), previous autumn (SON), winter (DJF); current spring (MAM) and current summer (JJA). Moving correlations were calculated using 21-yr windows moved one year at a time and are plotted using the central year of each window. Windows of significant correlations (P < 0.05) are marked with a dot.

Fig. 6. Correlation between seasonal AMOC (a), NAO (b), and AO (c) indices and the six regional chronologies (Q_W, Q_C, and Q_E in NA; and S_S, S_C and S_N in NE). Seasonal indices include previous summer (pJJA), winter (DJF), and current summer (JJA), and were calculated as the mean of monthly indices. Correlations were calculated over the 1961-2005 period for AMOC, and over the 1950-2008 period for NAO and AO. Significant correlations (P < 0.05) are marked with a star.

Fig. 7. Correlation between seasonal AMOC (a), NAO (b), and AO (c) indices, and growth patterns at the local level in Quebec. Seasonal indices include previous summer (left-hand panels), winter (middle panels), and current summer (right-hand panels), and were calculated as the mean of monthly indices. Correlations were calculated over the 1961-2005 period for AMOC, and over the 1950-2008 period for NAO and AO. To visualize the separation between regional clusters, correlation values at Q_C grid cells are plotted with circles. Significant correlations (P < 0.05) are marked with a black dot.

Fig. 8. Correlation between seasonal AMOC (a), NAO (b), and AO (c) indices, and growth patterns at the local level in Sweden. Seasonal indices were calculated as the mean of monthly indices and include previous summer (left-hand panels), winter (middle panels), and current summer (right-hand panels). Correlations were calculated over the 1961-2005 period for AMOC, and over the 1950-2008 period for NAO and AO. To visualize the separation between regional clusters, correlation values at S_C grid cells are plotted with circles. Significant correlations (P < 0.05) are marked with a black dot.

Fig. 9. Moving correlations between previous summer (pJJA; left-hand panels), winter (DJF; middle panels) and current summer (JJA; right-hand panels) large-scale indices, and the six regional chronologies (Q_W, Q_C, and Q_E in NA; and S_S, S_C and S_N in NE). Large-scale indices include AMOC (black), NAO (red), and AO (blue). Moving correlations were calculated using 21-yr windows moved one year at a time and are plotted using the central year of each window. Correlations were calculated over the 1961-2005 period for AMOC, and over the 1950-2008 period for NAO and AO. Windows of significant correlations (P < 0.05) are marked with a dot.
Fig. 10. Correspondence between summer NAO (a) and AO (b) indices and local summer climate (mean temperature and total precipitation) between 1950 and 1980 (left-hand panels) and between 1981 and 2008 (right-hand panels). NAO and AO indices over the 1950-2008 period were extracted from NOAA's climate prediction center. Summer mean temperature and total precipitation are those of CRU TS 3.24 1° x 1° [START_REF] Harris | Updated high-resolution grids of monthly climatic observations ---the CRU TS3.10 dataset[END_REF]. All correlations were computed in the KNMI Climate Explorer (https://climexp.knmi.nl [START_REF] Trouet | KNMI Climate Explorer: A web-based research tool for high-resolution paleoclimatology[END_REF]). Indices and climate variables were normalized (linear regression) prior to analyses. Only correlations significant at P < 0.05 are plotted.

Table 1. Characteristics of tree-ring width chronologies*.
*Data for Q_W, Q_C and Q_E chronologies respectively. **Data for S_S, S_C and S_N chronologies respectively.

Acknowledgements
This study was financed by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the project 'Natural disturbances, forest resilience and forest management: the study case of the northern limit for timber allocation in Quebec in a climate change context' (STPGP 41344-11). We acknowledge financial support from the Nordic Forest Research Cooperation Committee (SNS) through the network project entitled 'Understanding the impacts of future climate change on boreal forests of northern Europe and eastern Canada', from the EU Belmont Forum (project PREREAL), NINA's strategic institute program portfolio funded by the Research Council of Norway (grant no. 160022/F40), the Forest Complexity Modelling (FCM), an NSERC funded program in Canada, and a US National Science Foundation CAREER grant (AGS-1349942). We are thankful to the Ministry of Forests, Wildlife and Parks (MFFP) in Quebec and to the Swedish National Forest Inventory (Bertil Westerlund, Riksskogstaxeringen, SLU) in Sweden for providing tree-growth data. ID thanks the Swedish Institute for support of this study done within the framework of the CLIMECO project.

Appendix A - Supplementary data
Supplementary data to this article can be found online at https://doi.org/10.1902/j.gloplacha.2018.03.006
50,742
[ "736796", "790962", "757810" ]
[ "300101", "306172", "458205", "73360" ]
01609518
en
[ "shs" ]
2024/03/05 22:32:10
2017
https://shs.hal.science/halshs-01609518/file/RM%20article%20politics%20reimbursement%20Final.pdf
Keywords: Reimbursement, regenerative medicine, valuation, publications, trade organisations, orphan drugs, risk-sharing agreements Aims This paper aims to map the trends and analyse key institutional dynamics that are constituting the policies for reimbursement of Regenerative Medicine (RM), especially in the UK. Materials & Methods Two quantitative publications studies using Google Scholar and a qualitative study based on a larger study of 43 semi-structured interviews. Results Reimbursement has been a growing topic of publications specific to RM and independent from orphan drugs. Risk-sharing schemes receive attention amongst others for dealing with RM reimbursement. Trade organisations have been especially involved on RM reimbursement issues and have proposed solutions. Conclusion The policy and institutional landscape of reimbursement studies in RM is a highly variegated and conflictual one and in its infancy. of the publications landscape, one overarching and one more specific, combined with one in-depth study of the most contested aspects of the RM reimbursement debate. Thus the publication studies describe the emerging forms and sites of RM reimbursement analysis, while the in-depth (interview-based) study presents the most crucial content of the conflictual debates in the field. II-Methodology and Results In this section, we present the methods/results of each of the three complementary studies. We undertook two quantitative studies of publications using Google Scholar and a qualitative study based on interviews. The time periods the publication searches covered are 2015 and/or 2016. Practically, these were the most recent periods available at the time of our study. Scientifically, the question of RM reimbursement emerged as a prominent issue from 2015. Hence, our systematic searches do not cover earlier publications, although a few relevant publications previous to 2015 are referred to in our discussion. Google Scholar (GS) was chosen as most relevant for the purposes of this research, following review of options [START_REF] Harzing | Google Scholar as a new source for citation analysis?[END_REF][START_REF] Winter | The expansion of Google Scholar versus Web of Science: a longitudinal study[END_REF][START_REF]Google Scholar compared to Web of Science: A Literature Review[END_REF]14]. GS is interdisciplinary and met our supposition that publications on RM reimbursement will be found in various fields of research, and probably more in Social and Human Sciences (SHS) or economics. GS also is not confined to peer-reviewed articles, and it is free to use and thus accessible beyond academia. We supposed transparency and thus wider accessibility are key aspects of RM's reimbursement politics. However, GS also has many limitations, notably it is not as comprehensive or precise as other search interfaces. Therefore, our set of selected articles includes both very detailed publications on RM reimbursement as well as others where it is referred to without being the main focus. Therefore, our results are broadly indicative rather than definitive. We present the results of the three studies below. 1) Reimbursement topics in the publications landscape of RM For the first study, GS has been used for systematic searches to identify trends in RM publications (referring here to every publication whatever the format is: journals, book, Doctoral dissertation…) during 6 months from January 2016 to July 2016. The full search strategy is available from supplementary file 1. 
In summary, the following keywords were used: "regenerative medicine" OR "advanced therapy" OR "gene therapy" OR "cell therapy" OR "tissue engineered product" OR "innovative therapies" (that we call "regenerative medicine based products": RMPs), combined with and without the word "UK", and with combinations including or excluding "reimbursement", "risk-sharing", and "orphan drugs". The names of different countries were also included: UK, France, Germany, Japan, South Korea, USA (United States). Various adjustments, such as averaging of results over 6-month periods, were made to allow for GS's performance (Supplementary file 1). New systematic searches were added in February 2016 or in March 2016. (Where averages have been calculated over 6 months or over 5 months instead of 7 months, they are respectively marked with a single (6-month average, *) or a double (5-month average, **) star in the tables presented.) The results are integrated in an Excel table (Supplementary file 2). This analysis addressed questions including: Is the question of reimbursement prominent among RM publications generally? Is reimbursement discussed to similar extents in the countries that are the most active in the RM field? Is RM reimbursement specifically considered compared to other expensive medicinal products, especially orphan drugs? Is risk-sharing the most discussed managed entry reimbursement option? The results of our analysis are below. First, reimbursement of RMPs has been a growing topic of publications between 2015 and 2016, even excluding orphan drugs. Publications on reimbursement/risk-sharing were more numerous in the first six months of 2016 than in the whole year 2015. However, reimbursement/risk-sharing is not a very prominent primary topic, judged by use of these terms explicitly in publications' titles. This is even more true for the UK, as there was no publication including these terms in UK-origin titles. However, when including orphan drugs the picture changes. Risk-sharing is associated with orphan drugs in more publications: there were generally more than twice as many publications when orphan drugs were included in association with risk-sharing, and this difference was smaller when orphan drugs were associated with reimbursement as a general term (Table 1 and Supplementary file 2), although there were fewer publications on RM risk-sharing than on RM reimbursement generally. Thus, the orphan drug focus is greater than that on RMPs, though the latter is attracting an independent set of publications, both on reimbursement generally and risk-sharing in particular. Finally, regarding the 2015 publications related to reimbursement of RMPs, the UK had the fifth most references (183 results), behind the USA (416 results), Japan (239 results) and Germany (237), close to France (188 results) and before South Korea (56). However, it should be noted that, given GS limitations, these countries could be mentioned in the publications data or as countries of affiliation of the authors (Table 2). Thus, UK RM reimbursement is a recent and growing topic of publications, and it appears less developed than in the USA, Japan, Germany, and France, both related to and independent from orphan drugs, and not only linked to risk-sharing agreements as a main policy option.
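The boolean search strings of Study 1 can be generated systematically, as sketched below. Google Scholar has no official API, so the hit_count function is only a placeholder for however the counts are retrieved (e.g., read off the results page and entered by hand); the term lists are those given above.

```python
# Illustrative sketch: building the boolean query strings used in Study 1 and
# tabulating hit counts. hit_count() is a placeholder, not a Google Scholar API.
from itertools import product

RMP_TERMS = ('"regenerative medicine" OR "advanced therapy" OR "gene therapy" OR '
             '"cell therapy" OR "tissue engineered product" OR "innovative therapies"')
TOPICS = ["reimbursement", "risk-sharing", "orphan drugs"]
COUNTRIES = ["", "UK", "France", "Germany", "Japan", "South Korea", "United States"]

def build_queries():
    """Combine RMP terms with each topic and (optionally) a country name."""
    queries = []
    for topic, country in product(TOPICS, COUNTRIES):
        q = f'({RMP_TERMS}) "{topic}"'
        if country:
            q += f' "{country}"'
        queries.append(q)
    return queries

def hit_count(query, year):
    """Placeholder: return the number of results for `query` restricted to `year`,
    to be filled manually from the Google Scholar interface."""
    raise NotImplementedError

for q in build_queries()[:3]:
    print(q)
```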
2) Profiling political characteristics of the RM reimbursement publications landscape

The second study is based on a deeper quantitative analysis of targeted publications identified from the first study. Out of 182 publications found in February 2015, 5 appear twice and 127 have been excluded following inspection as being out of our scope. Thus, our working material is based on 50 publications in 2015. These 50 publications have been classified in different Excel tables to distinguish the various research fields they come from, for each type of publication: journals, book chapters, books, theses and other types of publications (see Supplementary file 3). This analysis addressed questions including: In which types of publications (journals, book chapters, books, theses or other kinds of publications) is reimbursement of RM considered? (We hypothesise that books and theses are less accessible than journals for most stakeholders in the field.) In which field of research (clinical, SHS, economics, public health, business, other) is reimbursement of RM considered? In which clinical areas is reimbursement of RM considered? Are there dominant journals? Is reimbursement of RM considered by few publishers or many? What is the UK position and authorship in these publications? Our analysis shows that publications related to RM reimbursement are mainly found in journals (74%; N= 37/50), especially in public health (100%; N= 4/4), clinical (95.7%; N= 22/23) and economics (75%; N= 3/4), showing the predominance of the journal format in these disciplines. Nevertheless, a sizeable proportion appears in other formats and so may be less accessible. In addition, the sharing of publications between journals, book chapters, books, doctoral dissertations, and other types of publications is much more balanced in the SHS and business fields. The clinical and economic areas have a similar sharing of publication types regarding RM reimbursement: mainly journals (95.7% for the clinical area and 75% for the economics area) and few in book chapters (4.3% (N= 1/23) for the clinical area and 25% (N= 1/4) for the economics area). In the public health area, 100% (N= 4/4) of publications on RM reimbursement are in journals. On the other hand, the business discipline is an exception, with no publication in journals and more equally represented (together with the SHS area) across other formats of publications (Table 3), suggesting that this discipline may be a less visible area of the landscape. Reimbursement of RM is also considered by a wide range of different journals. Indeed, 83.7% (N= 31/37) appeared in separate journals across all fields, suggesting a very diffuse and emerging picture. Nevertheless, the public health and economics fields appeared as significant exceptions, with 3/4 of public health articles published in the "Journal of Market Access and Policy" and 2/3 of economics journals' publications published in the "Value in Health" journal (Table 7). Thus, it appears RM reimbursement publications are mainly in clinical journals although reimbursement might be considered primarily an SHS or Economics topic. This may be linked to the overall numeric domination of clinical publications compared to SHS or economics publications, or to the interest of medical practitioners and researchers in scenarios for clinical translation. While the question of RM reimbursement was often considered generally, where specific disease areas are targeted, it corresponded to those in which RM is closer to or already in the clinic: skin and respiratory diseases, haematological and orthopaedic diseases, and neurologic and ophthalmologic diseases.
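The cross-tabulations underlying Tables 3 and 4 can be computed directly once each of the 50 retained publications is coded by research field and publication format. The records in the sketch below are invented placeholders, not the actual coding of the corpus.

```python
# Sketch of the Study-2 tabulation: cross-classifying publications by field and format.
import pandas as pd

records = [
    # (field, format) -- placeholder entries; the real dataset has 50 rows.
    ("clinical", "journal"), ("clinical", "journal"), ("clinical", "book chapter"),
    ("SHS", "journal"), ("SHS", "thesis"), ("economics", "journal"),
    ("public health", "journal"), ("business", "book"),
]
df = pd.DataFrame(records, columns=["field", "format"])

counts = pd.crosstab(df["field"], df["format"], margins=True)        # Table 3-style counts
shares = (pd.crosstab(df["field"], df["format"], normalize="index") * 100).round(1)
print(counts)
print(shares)
```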
Finally, two journals (Journal of Market Access and Policy and Value in Health) and two publishers (Elsevier and Universities as a group) seem to be the most active on the topic of RM reimbursement. 3) Trade associations' positions on RM reimbursement issues We now turn from the forms and disciplines with which reimbursement is being studied and debated to the actual content of the key debates and proposals. Thus, the third analysis is part of a study based on 43 semi-structured interviews of stakeholders from key institutions in the field of RM in the UK, conducted in 2015 and 2016: national bodies (5), service providers (3), consultancy companies/law academics (4), regulatory agencies/Institute (4), Innovation networks (organisations that promote relevant innovation) (4), trade organisations (4), funders (2), health professional organisations (4), research charities [START_REF] Helgesson | Values and Valuations in Market Practice[END_REF], other institutions (4). (Details of the organisations are available in Supplementary file 4). Interviews lasted around 1hour, generally at the offices of the interviewees and covered both general questions on the interviewees' current perceptions of the RM field and of its prospects and specific questions modified according to each interviewee/institution.. Informed consent was obtained and interview transcripts were anonymised, and coded and analysed using Nvivo qualitative analysis software. The analysis of the whole set of interviews is the subject of another paper. In this manuscript we target trade associations, as they were the category of stakeholders most involved and most critical of existing protocols on RM reimbursement. We supplemented the interviews with in-depth Internet searches, especially on the websites of the trade associations. We highlight key results and comments from four interviews in three associations. Although a small number, representatives of the associations held key senior positions and, of course, represent the views of many member firms and other organisations. To preserve anonymity we identify the trade organisations/interviewees by a number. All the trade associations considered reimbursement to be a current issue: "So I think we have the academic excellence, some significant infrastructure, a growing community on the positive side. What we don't have is a route to reimbursement." (Trade organisation 2) One trade association highlights the uncertainty of reimbursement decisions: unclear remits of relevant institutions in the context of multiple reimbursement pathways: "At the moment, we have some theoretical routes and we have a number of European companies that have gone bust trying to solve this problem." (Trade organisation 2) Trade associations specifically highlight the contradictions or lack of collaboration between the National Institute for Health and Care Excellence (NICE) and National Health Service (NHS) England. Specific issues identified included: different criteria used by NICE and NHS England such as the gap between marketing authorisation and reimbursement decision; a perceived need for NICE and NHS England integration in an 'holistic system'; tension between NICE and NHS England, although recognition that NICE Office for Market Access has been established to provide clarifications regarding procedures; alleged problems with NICE methodologies; NHS England's specialised commissioning (i.e. 
many innovations competing for access to the specialised commissioning budget, NHSE not being seen as good at commercial access models); difficulties relating to the cost and choices to be made, long-term and uncertainty of evidence; routes to adoption in the clinic; the differences between devolved nations; difficulty of reaching a common product assessment at the European level; and the impact of (NICE) reimbursement decisions: "For things that go to NICE, whatever they are, that absolutely affects their fortune in the market and through the rest of their lifecycle. So the signal that NICE sends absolutely affects the commercial success of medicines and that will be true of regenerative medicines." (Trade Organisation 4) Moreover, three out of four trade organisations expressed views on reimbursement methodologies for RM. One trade association considered NHSE methodologies had already changed but that NICE methodologies should be revised for RMPs. Indeed, three trade organisations explicitly said there is a need for change in reimbursement methodologies. Regarding the possible establishment of a specific government supported fund for RM, one trade organisation was clearly in favour of it, while 2 others were not opposed to it. A fund for innovative medicines was seen as more acceptable than a disease-type fund such as the Cancer Drugs Fund [START_REF] Littlejohns | Challenges for the new Cancer Drugs Fund[END_REF], but it should be a 'transition' fund. They highlighted that the advantages of a specific fund for RM would be especially to provide flexibility. Moreover, as a budget is necessary to make a system change, the fund would be a powerful and potent mechanism for market access although it raised issues: "There's no doubt that if you want to get something done and you want to make a system change, you create a ring-fenced budget. (…) So in terms of market access it's (a specific fund is) a very powerful and potent mechanism, but it goes against the grain of travel of the system, which is not to have centralised budgets for things, but to have devolved responsibility and decision-making spread out across the system." (Trade organisation 4) Finally, trade associations have made proposals for the reimbursement of RM based products. First, collaborations should be enhanced between marketing authorisation and reimbursement steps, that is, between regulatory agencies and reimbursement bodies. Indeed, "A partnership approach (between developers and gatekeepers) has the potential to reduce the time and cost of development, while improving clinical relevance of studies and their assessment of cost-effectiveness. Improving public health through appropriate uptake of medicines at lower cost and improved cost-effectiveness, without any reduction in development standards or scrutiny, is an important incentive to develop new methodology. Real time database utilisation, new analytical methods and adaptive approaches can underpin this." [START_REF]Reengineering Medicine Development: A stakeholder discussion document for cost-effective development of affordable innovative medicine[END_REF] Second, one trade association emphasised that changes should occur at NHS England: a perceived need for direct producer engagement with NHS England, a key aspect being 'good horizon scanning' at NHS England, whilst approving the new clinically focused routes for reimbursement (i.e. Clinical Reference Groups). 
Third, one trade organisation highlighted a need for general systemic changes to achieve a healthcare service with a good way of both monitoring and assessing patient performance over time, and the need for industry to have a coherent business strategy. Fourth, two trade organisations underlined that a broader or specific view should be taken for RMPs such as a need for different thinking around the benefits models, especially where there is a curative effect. Fifth, one trade organisation has envisaged 'adaptive pathways' [17] as a solution for RM based products as they could take into account their specific issues, while another referred to risk-sharing or annual/stages payment models: "The key thing is there has to be recognition that the risk has to be shared between the company and the system. So it's no good if the system says, 'Okay, you'll have to supply the medicine for free through this period and we'll look at it again in two years.' There needs to be more flexibility to say, 'Okay, so maybe the medicine will be provided at a different in-market commercial price for this period,' but then, if the data supports it, the price should go up. (...) and it's only if we accept that that we're properly accepting that value should be linked to price."(Trade organisation 4) However, one trade organisation highlighted, as above, that these reimbursement models need connectivity between regulatory and HTA bodies, and between NICE and NHS England, early discussions with HTA bodies and a structured framework for data collection. Finally, one trade organisation welcomed the Japanese model for faster access to market and conditional reimbursement [START_REF] Sipp | Conditional approval: Japan lowers the bar for regenerative medicine products[END_REF] although it recognised it needs to be tested. Thus, beyond recognising reimbursement issues for RMPs, trade organisations generally have views on what are these issues and how they could be solved. Indeed, BIA and ABPI recognise these issues on their websites or positions papers, especially from 2015 [START_REF]Delivering value to the UK-The contribution of the pharmaceutical industry to the patients, the NHS and the economy[END_REF][START_REF]Manifesto: our agenda for change[END_REF][START_REF] Abpi | Adapting the innovation landscape, UK Biopharma R&D sourcebook[END_REF][START_REF] Bia | Update, Influencing and shaping our sector[END_REF][START_REF] Bia | Update, Influencing and shaping our sector[END_REF][START_REF]Briefing Paper: Advanced Therapy Medicinal Products and Regenerative Medicine[END_REF], in collaboration together [START_REF] Abpi | Early Access Medicines Scheme[END_REF] and with other trade associations as well [START_REF] Casmi | Adaptive Licensing[END_REF][START_REF] Bivda | From vision to action: Delivery of the Strategy for UK Life Sciences[END_REF][START_REF] Bia | One nucleus, Mediwales. UK Life sciences manifesto[END_REF], following some earlier statements [START_REF]BIA policy and parliamentary impact in Quarters Two and Three[END_REF][START_REF] Mestre-Ferrandiz | The many faces of Innovation, A report for ABPI by the Office of Health Economics[END_REF]. ABPI was represented on this topic at the 2015 conference of the International Society For Pharmacoeconomics and Outcomes Research (ISPOR) [31]. The general views of the trade organisations could be summarised by this statement: "Ultimately a major consideration is whether payors -especially the NHS in the UK -can afford to use the medicine. 
Biological medicines, especially advanced therapies like cell and gene therapies, have particularly high development and manufacture costs. But they may also provide healthcare benefits that ultimately save the NHS money down the line. There is a need for policymakers to consider short- versus long-term trade-offs and to propose models for realistic reimbursement plans." [START_REF] Bia | One nucleus, Mediwales. UK Life sciences manifesto[END_REF]

4) Discussion

In this section, we summarise and discuss the main results of the three studies. The valuation of regenerative medicine involves a politics of stakeholder institutions and emerging policy discourse, evident in the interview positions and publication profiles that we have presented. It could be considered that 2015 constituted a turning-point in that reimbursement and adoption in the NHS became a key issue in new national reports [START_REF]General proposals to solve the RM reimbursement challenges 33 NICE. Exploring the assessment and appraisal of regenerative medicines and cell therapy products[END_REF][33]. Moreover, the first authorized Advanced Therapy Medicinal Product (ATMP), Chondrocelect (2009), was turned down for reimbursement both in France and in the UK. In 2016, its marketing authorization holder, Tigenix NV, decided to withdraw its marketing authorization, as did Dendreon/Valeant for Provenge in 2015, for commercial reasons. Thus, the commercial viability of ATMP and RM products, as linked to the decisions of reimbursement by national bodies, became a key challenge. Indeed, "the reimbursement point is the keystone from which an allowable COGs [Cost of Goods] is determined by subtracting business costs." [START_REF] Mount | Cell-based therapy technology classifications and translational challenges[END_REF] These developments show the volatile environment in which valuation and reimbursement of RMPs is being debated. Our publications analysis and interviews suggest that the UK's position in the emerging RM reimbursement landscape is similar. Clearly, trade organisations have been very involved in RMP reimbursement debates, as one would expect. Indeed, the industry is the most critical of RM valuation issues. Industry generally will not develop medicines lacking likely wide reimbursement and thus uncertain return on investment. Trade organisations consider more flexibility is needed, notably regarding NICE methodologies for assessment. In the context of limited budgets for healthcare, we showed in our interview and internet study that several key trade organisations argue that new flexible routes for reimbursement are needed to ensure patient access to the latest medical advances, including RMPs. Beyond acceptance for risk-sharing schemes, while highlighting their limits, trade organisations emphasised the need for more collaboration between key stakeholders as a main solution to reimbursement issues. Some measures have been taken toward this objective, such as promotion of early contact with regulators and HTA bodies, notably through the NICE Office for Market Access. The latter's objectives include defining acceptable evidence in a context of uncertainty with a curative treatment, and supporting navigation between the different gatekeepers. This is seen as particularly necessary given the challenge of an "increase in demand for 'real world' evidence by HTA, payers and regulators", i.e.
their "growing interest in relative effectiveness" [START_REF] Abpi | Securing a future for innovative medicines: A discussion paper[END_REF].

5) Conclusion

We conclude that the policy and institutional landscape of reimbursement studies in RM is a highly variegated one and in its infancy. The two publications studies gave details on the amount of activity going on, the potential gap in the field, and signs of both general and niche trends. The volume of publications is growing, as researchers and analysts in a wide variety of disciplines and types of organisation start to grapple with reimbursement challenges. The interviews study highlights trade associations as closely engaged with debating at a high level the possible reimbursement scenarios for RM, and pointing to ways in which current technology assessment and healthcare infrastructures could be improved to favour RM enterprise. The analysis that we have provided is particularly relevant to the stakeholders involved in policy making in RM, and to industry and academia. It offers a picture of the emerging landscape of RM reimbursement actors and issues that can inform the various stakeholders' participation in its future analysis, potential, and development.

Summary Points
• Reimbursement of RM based products has been a growing topic of publications between 2015 and 2016, independently from orphan drugs.
• Risk-sharing schemes are only one strategy for dealing with RM reimbursement, albeit a widely debated one.
• Reimbursement and risk-sharing are distinct issues for RM, although there is an overlap with the same issues for orphan drugs.
• The UK's position in the RM reimbursement publication landscape is in keeping with several reports on the global dynamics of RM.
• Trade organisations have been very involved on RM based products reimbursement issues.
• Trade organisations have detailed views on reimbursement issues for RM, especially the high cost versus the uncertainty regarding long-term evidence.
• Trade organisations have various proposals to solve RM reimbursement issues, emphasising a need for more collaboration between several key national-level actors.

Table 2: 2015 publications related to reimbursement of RMPs per country
                                 UK    France   Germany   Japan   South Korea   United States
Reimbursement anywhere, 2015     183   188*     237*      239*    56*           416*

Table 3: Different types of publication formats by discipline
          Clinical   SHS   Economics   Public Health   Business   Total

One of our first suppositions has been verified in that publications related to RM reimbursement are found in a range of different fields of research. However, the dominant field is not SHS (28.0%; N= 14/50) nor economics (8.0%; N= 4/50) but is clinical (46.0%; N= 23/50); public health being more or less equivalent to economics (8.0%; N= 4/50) and business (10.0%; N= 5/50). Thus, the questions of RM reimbursement are being formulated mainly in clinical and SHS disciplinary publications (Table 4).

Table 4: Range of different publication subject areas/types
          Clinical   SHS     Economics   Public Health   Business   Total
Count     23         14      4           4               5          N= 50
Share     46.0%      28.0%   8.0%        8.0%            10.0%      100%

Of the clinical articles, most were in generalist medical journals (45.5%; N= 10/22).
When specific disease areas were the focus, skin and respiratory diseases had received most attention (13.6% each; N= 3/22), followed by haematological and orthopaedic (9.1% each; N= 2/22), and finally neurologic and ophthalmologic diseases (4.5% each; N= 1/22). It is notable that publications had not targeted important clinical fields such as cardiovascular, gastroenterological and cancers other than blood diseases (Table 5).

Table 5: Disease areas in clinical publications (among the book chapters, the Gaucher disease has been considered both as a haematologic disease (Type 2) and as a neurologic disease (Types 2 and 3))
                 Number of articles   Number of books' chapters   Total
                 N= 22                N= 1                        N= 23
General          10                   0                           10
                 45.5%                0%                          43.5%
Haematological   2                    1                           3

Furthermore, while "United Kingdom" was one of our selection criteria, first authors are from the UK in 40% (N= 20/50) of all the publications. The clinical and SHS areas are the main fields with UK first author's affiliation when we consider all types of publications (clinical area (20%; N= 10/50) and SHS area (12%; N= 6/50)), or journals only (clinical (66.7%; N= 10/15) and SHS (20%; N= 3/15) journals) (Table 6).

Table 6: UK first author affiliation
                                   Clinical   SHS     Economics   Public Health   Business   Total
UK first in journals               10/15      3/15    1/15        1/15            0/15       15/50
                                   66.7%      20%     6.7%        6.7%            0%         30%
UK first in book chapters          0/2        1/2     0/2         N/A             1/2        2/50
                                   0%         50%     0%                          50%        4%
UK first in books                  N/A        0/1     N/A         N/A             1/1        1/50
                                              0%                                  100%       2%
UK first in theses                 N/A        2/2     N/A         N/A             0          2/50
                                              100%                                0%         4%
UK first in other publications     N/A        N/A     N/A         N/A             0/0        0/50
                                                                                   0%        0%
UK first author in all
publications (N= 50)               10/50      6/50    1/50        1/50            2/50       20/50
                                   20%        12%     2%          2%              4%         40%

A wide spread of publishers was also evident. Four publishers shared 52% (N= 26/50) of RM reimbursement publications, and Elsevier and universities covered the widest range of different fields (4): clinical, SHS, economics and public health for Elsevier, and clinical, SHS, economics and business for universities, suggesting a mix of commercial and non-profit commitment to the field. Most other publishers were specific to one or two fields.

Table 7: Publications in range of different journals
                                      Clinical   SHS     Economics   Public Health   Business   Total
Publications in different journals   19/22      8/8     2/3         2/4             0/0        31/37
                                      86.3%      100%    66.7%       50.0%           0%         83.7%

We have shown that RM reimbursement has been a growing topic of focused publications between 2015 and 2016, the vast majority of which appear in very disparate avenues or 'spaces' geared to various disciplinary audiences and interested parties. Nevertheless, clinical and especially generalist medical journals were shown to be dominating, and at least two specialist journals have appeared recently, which are likely to see more RMP reimbursement contributions. Many of the reimbursement challenges are not specific to RMPs, because other fields such as orphan drugs can also have high up-front costs [START_REF] Gardner | Are there specific translational challenges in regenerative medicine? Lessons from other fields[END_REF][START_REF] Nice | Exploring the assessment and appraisal of regenerative medicines and cell therapy products[END_REF]. However, we maintain that some kinds of RMPs raise specific challenges, such as gene therapies when they are curative [START_REF] Carr | Gene therapies: the challenge of super-high-cost treatments and how to pay for them[END_REF]. We have shown that risk-sharing specifically is far less discussed than reimbursement generally.
This result accords with risk-sharing schemes being just one strategy for dealing with RM reimbursement, albeit a widely debated one. Indeed, these schemes can be considered as one way of addressing the uncertainties regarding the alternative approaches to the valuation of RM between different actors, especially the NHS and the producer/manufacturer. Reimbursement and risk-sharing are distinct issues for RM, although there is an overlap with the same issues for orphan drugs. RMPs can be medicinal products, especially ATMPs. For instance, Holoclar, the first stem cell-based medicinal product approved for use in the EU, is both an ATMP and an orphan medicinal product, and as such benefits from the incentives of both regulatory frameworks [START_REF] Gerke | EU Marketing Authorisation of Orphan Medicinal Products and Its Impact on Related Research[END_REF]. More globally, among the eight ATMPs authorised on the EU market to date, four are orphan drugs. As those cases show, orphan drugs and RM based products often share the two main features of high cost and uncertainties around evidence and value [START_REF] Bubela | Bringing regenerative medicines to the clinic: the future for regulation and reimbursement[END_REF]40]. However, these same uncertainties are also seen in the weak long-term evidence for RMPs that are not orphan drugs. Even though there was an increase in using risk-sharing schemes in Europe generally [START_REF] Adamski | Risk sharing arrangements for pharmaceuticals: Potential considerations and recommendations for European payers[END_REF] and they have been considered suitable for orphan drugs [START_REF] Campillo-Artero | Risk sharing agreements: with orphan drugs?[END_REF], there should be further exploration of whether such schemes might be more applicable to orphan drugs than RM products, as our findings imply. Such considerations are important to the political design of the markets and health system adoption of different subsectors of RM and related enterprise. We showed in the first study the UK's position in the RM reimbursement publication landscape. Those results are in keeping with several reports in the field of RM evidencing different countries' positions addressing RM challenges broadly: "The UK needs to be ambitious and act quickly to get ahead. The USA, Canada and Japan are particularly active in this space and, although the UK is preeminent in Europe; Germany, Italy, France and Spain, in particular are rapidly reviewing how they can also capture these investments." [START_REF]Advanced Therapies Manufacturing Task Force Action Plan. Retaining and attracting advanced therapies manufacture in the UK[END_REF] Regarding the distribution of RM tissue engineering firms and research institutes: "When we look at the geographic distribution of tissue engineering firms and research institutes, the U.S. with 52% leads the market followed by Germany (21%), Japan (16%), the UK (7%), and Sweden (4%)." [START_REF]Stem Cell Regenerative Medicine Market : Global Demand Analysis & Opportunity Outlook 2021[END_REF] This pattern has been established for some time across the biopharmaceutical sector said to reflect 'longstanding problems: limited venture capital finance, a fragmented patent system, and relatively weak relations between academia and industry.' [START_REF] Hogarth | Regenerative medicine in Europe: global competition and innovation governance[END_REF].
34,811
[ "18844" ]
[ "239063" ]
01756892
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756892/file/NIMFAOD-ACC.pdf
Vineeth S Varma Irinel Constantin Morarescu Yezekael Hayel Analysis and control of multi-leveled opinions spreading in social networks Keywords: Opinion dynamics, Social computing and networks, Markov chains, agent based models This paper proposes and analyzes a stochastic multi-agent opinion dynamics model. We are interested in a multi-leveled opinion of each agent which is randomly influenced by the binary actions of its neighbors. It is shown that, as far as the number of agents in the network is finite, the model asymptotically produces consensus. The consensus value corresponds to one of the absorbing states of the associated Markov system. However, when the number of agents is large, we emphasize that partial agreements are reached and these transient states are metastable, i.e., the expected persistence duration is arbitrarily large. These states are characterized using an N-intertwined mean field approximation (NIMFA) for the Markov system. Moreover we analyze a simple and easily implementable way of controlling the opinion in the network. Numerical simulations validate the proposed analysis. I. INTRODUCTION The analysis and control of complex sociological phenomena as consensus, clustering and propagation are challenging scientific problem. In the past decades much progress has been made both on the development and the analysis of new models that capture more features characterizing the social network behavior. A possible classification of the existing models can be done by looking at the evolution space of the opinions. Precisely we find models in which the opinions evolve in a discrete set of values, they come from statistical physics, and the most employed are the Ising [START_REF] Ising | Contribution to the theory of ferromagnetism[END_REF], voter [START_REF] Clifford | A model for spatial conflict[END_REF] and Sznajd [START_REF] Sznajd-Weron | Opinion evolution in closed community[END_REF] models. A second class is given by the models that consider a continuous set of opinion's values [START_REF] Degroot | Reaching a consensus[END_REF], [START_REF] Hegselmann | Opinion dynamics and bounded confidence models, analysis, and simulation[END_REF], [START_REF] Morȃrescu | Opinion dynamics with decaying confidence: Application to community detection in graphs[END_REF]. While in some models the interaction network is fixed in some others it is state-dependent. Although some studies propose repulsive interactions [START_REF] Altafini | Consensus problems on networks with antagonistic interactions[END_REF] the predominant tendency of empirical studies emphasize the attractive action of the neighbors opinions. We can also emphasize that many studies in the literature focus on the emergence of consensus in social networks [START_REF] Galam | Towards a theory of collective phenomena: Consensus and attitude changes in groups[END_REF], [START_REF] Axelrod | The dissemination of culture: A model with local convergence and global polarization[END_REF], [START_REF] Fortunato | Vector opinion dynamics in a bounded confidence consensus model[END_REF] while some others point out local agreements leading to clustering [START_REF] Hegselmann | Opinion dynamics and bounded confidence models, analysis, and simulation[END_REF], [START_REF] Morȃrescu | Opinion dynamics with decaying confidence: Application to community detection in graphs[END_REF]. Most of the existing models including the aforementioned ones share the idea that an individual's opinion is influenced by the opinions of his neighbors. 
Nevertheless, it is very hard to estimate these opinions and often one may access only a quantized version of them. Following this idea a mix of continuous opinion with discrete actions (CODA) was proposed in [START_REF] Martins | Continuous opinions and discrete actions in opinion dynamics problems[END_REF]. This model reflects the fact that even if we often face binary choices or actions which are visible * CRAN (CNRS-Univ. of Lorraine), Nancy, France, {vineeth.satheeskumar-varma,constantin.morarescu}@univ-lorraine.fr ‡ University of Avignon, Avignon, France. yezekael.hayel@univavignon.fr This work was partially funded by the CNRS PEPS project YPSOC. by our neighbors, the opinions evolve in a continuous space of values which are not explicitly visible to the neighbors. A multi-agent system with a CODA model was proposed and analyzed in [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF]. It was shown that this deterministic model leads to a variety of asymptotic behaviors including consensus and clustering. In [START_REF] Varma | Modeling stochastic dynamics of agents with multi-leveled opinions and binary actions[END_REF] the model in [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF] was reformulated as a discrete interactive Markov chain. One advantage of this approach is that, it also allows analysis of the behavior of infinite populations partitioned into a certain number of opinion classes. Due to the complexity of the opinion dynamics, we believe that stochastic models are more suitable than deterministic ones. Indeed, we can propose a realistic deterministic update rule but many random events will still influence the interaction network and consequently the opinion dynamics. Following the development in [START_REF] Varma | Modeling stochastic dynamics of agents with multi-leveled opinions and binary actions[END_REF] we propose here a continuous-time interactive Markov chain modeling that approximates the model in [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF]. Although the asymptotic behavior of the model can be given by characterizing the absorbing states of the Markov chain, the convergence time can be arbitrarily large and transient but persistent local agreements, called metastable equilibria, can appear. These equilibria are very interesting because they describe the finite-time behavior of the network. Consequently, we consider in this paper an N-intertwined mean field approximation (NIMFA) based approach in order to characterize the metastable equilibria of the Markov system. It is noteworthy that NIMFA was successfully used to analyze and validate some epidemiological models [START_REF] Van Mieghem | Virus spread in networks[END_REF], [START_REF] Trajanovski | Decentralized protection strategies against sis epidemics in networks[END_REF]. In this work, we model the social network as a multi-agent system in which each agent represents an individual whose state is his opinion. This opinion can be understood as the preference of the agent towards performing a binary action, i.e. action can be 0 or 1. These agents are interconnected through an interaction directed graph, whose edge weights represent the trust given by an agent to his neighbor. 
We propose continuous time opinion dynamics in which the opinions are discrete and belong to a given set that is fixed a priori. Each agent is influenced randomly by the actions of his neighboring agents and consequently influences its neighbors. Therefore the opinions of agents are an intrinsic variable that is hidden from the other agents, the only visible variable is the action. As an example, consider the opinion of users regarding two products red and blue cars. A user may prefer red cars strongly, while some other users might be more indifferent. However, what the other users see (and is therefore influenced by) is only what the user buys, which is the action taken. The contributions of this paper can be summarized as follows. Firstly, we formulate and analyze a stochastic version of the CODA model proposed in [START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF]. Secondly, we characterize the local agreements which are persistent for a long duration by using the NIMFA for the original Markov system. Thirdly, we provide conditions for the preservation of the main action inside one cluster as well as for the propagation of actions. Finally, we study how an external entity can control the opinion dynamics by manipulating the network edge weights. In particular, we study how such a control can be applied to a cluster for preservation or propagation of its opinion. The rest of the paper is organized as follows. Section II introduces the main notation and concepts and provides a description of the model used throughout the paper. The analysis of the asymptotic behavior of opinions described by this stochastic model is provided in Section III. The presented results are valid for any connected networks with a finite number of agents. Moreover Section III contains the description of the NIMFA model and an algorithm to compute its equilibria. In Section IV we emphasize conditions for the preservation of the main action (corresponding to a metastable state) in some clusters as well as conditions for the action propagation, both without any control and in the presence of some control. The results of our work are numerically illustrated in Section V. The paper ends with some concluding remarks and perspectives for further developments. Preliminaries: We use E for the expectation of a random variable, 1 A (x) the indicator function which takes the value 1 when x ∈ A and 0 otherwise, R + the set of non-negative reals and N = {1, 2, . . . } the set of natural numbers.. II. MODEL Throughout the paper we consider N ∈ N an even number of possible opinion levels, and the set of agents K = {1, 2, . . . , K} with K ∈ N. Each agent i is characterized at time t ∈ R + by its opinion represented as a scalar X i (t) ∈ Θ where Θ = {θ 1 , θ 2 , . . . , θ N } is the discrete set of possible opinions, such that θ n ∈ (0, 1)\{0.5} and θ n < θ n+1 for all n ∈ {1, 2, . . . , N }. Moreover Θ is constructed such that θ N/2 < 0.5 and θ N/2+1 > 0.5. In the following let us introduce some graph notions allowing us to define the interaction structure in the social network under consideration. Definition 1 (Directed graph): A weighted directed graph G is a couple (K, A) with K being a finite set denoting the vertices, and A being a K × K matrix, with elements a ij denoting the trust given by i on j. We say that agent j is a neighbor of agent i if a ij > 0. We denote by τ i the total trust in the network for agent i as τ i = K j=1 a ij . 
Agent i is said to be connected with agent j if G contains a directed path from i to j, i.e. if there exists at least one sequence (i = i 1 , i 2 , . . . , i p+1 = j) such that a i k ,i k+1 > 0, ∀k ∈ {1, 2, . . . , p}. Definition 2 (Strongly connected): The graph G is strongly connected if any two distinct agents i, j ∈ K are connected. In the sequel we suppose the following holds true. Assumption 1: The graph (K, E) modeling the interaction in the network is strongly connected. The action Q i (t) taken by agent i at time t is defined by the opinion X i (t) through the following relation Q i (t) = X i (t) , where . is the nearest integer function. This means that if an agent has an opinion more than 0.5, it will take the action 1 and 0 otherwise. This kind of opinion quantization is suitable for many practical applications. For example, an agent may support the left or right political party, with various opinion levels (opinions close to 0 or 1 represents a stronger preference), however, in the election, the agent's action is to vote with exactly two choices (left or right). Similarly, an agent might have to choose between two cars or other types of merchandise like cola as mentioned in the introduction. Although its preference for one product is not of the type 0 or 1, its action will be, since it cannot buy fractions of cars, but one of them. A. Opinion dynamics In this work, we look at the evolution of opinions of the agents based on their mutual influence. We also account for the inertia of opinion, i.e., when the opinion of the agent is closer to 0.5, he is more likely to shift as he is less decisive, whereas someone with a strong opinion (close to 1 or 0) is less likely to shift his opinion as he is more convinced by his opinion. The opinion of agent j may shift towards the actions of its neighbors with a rate β n while X j (t) = θ n . If no action is naturally preferred by the opinion dynamics, then we construct θ n = 1 -θ N +1-n and assume that β n = β N +1-n for all n ∈ {1, 2, . . . , N }. At each time t ∈ R + we denote the vector collecting all the opinions in the network by X(t) = (X 1 (t), . . . , X K (t)). Notice that the evolution of X(t) is described by a continuous time Markov process with N K states and its analysis is complicated even for small number opinion levels and relatively small number of agents. The stochastic transition rate of agent i shifting its opinion to the right, i.e. to have opinion θ n+1 when at opinion θ n , with n ∈ {1, 2, . . . , N -1}, is given by β n N j=1 a ij 1 (0.5,1] (X j (t)) = β n K j=1 a ij Q j (t) = β n R i (t). Similarly, the transition rate to the left, i.e. to shift from θ n to θ n-1 is given by β n N j=1 a ij 1 [0,0.5) (X j (t)) = β n N j=1 a ij (1-Q j (t)) = β n L i (t). for n ∈ {2, . . . , N }. Therefore, we can write the infinitesimal generator M i,t (a tri-diagonal matrix of size N × N ) for an agent i as: M i,t =    -β 1 R i (t) β 1 R i (t) 0 . . . β 2 L i (t) -β 2 τ i β 2 R i (t) . . . . . .    (1) with elements corresponding the n-th rown and m-th column given by M i,t (m, n) and ∀n ∈ {1, . . . , N -1}, M i,t (n, n + 1) = β n R i (t), ∀n ∈ {2, . . . , N }, M i,t (n, n -1) = β n L i (t), ∀|m -n| > 1, M i,t (m, n) = 0 and Mi,t(n, n) =    -β1Ri(t) for n = 1, -βnτi for n ∈ {2, . . . , N -1}, -βN Li(t) for n = N. Let v i,n (t) := E[1 {θn} (X i (t))] = Pr(X i (t) = θ n ) be the probability for an opinion level θ n for user i at time t. 
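To make these transition rates concrete, the stochastic process can be simulated directly with a standard Gillespie (event-driven) scheme. The following Python sketch is our own illustration, not code from the paper; the function and variable names are ours, and the trust matrix A, the opinion grid theta and the jump rates beta are assumed to be NumPy arrays.

```python
import numpy as np

def gillespie_opinions(A, theta, beta, x0_idx, t_max, seed=None):
    """Event-driven (Gillespie) simulation of the opinion CTMC described above.

    A      : (K, K) trust matrix with entries a_ij
    theta  : array of the N opinion levels theta_1 < ... < theta_N
    beta   : array of the N jump rates beta_1, ..., beta_N
    x0_idx : initial opinion indices (0-based) of the K agents
    """
    rng = np.random.default_rng(seed)
    K, N = A.shape[0], len(theta)
    state = np.array(x0_idx, dtype=int)          # current opinion index of each agent
    t, history = 0.0, [(0.0, theta[state].copy())]
    while t < t_max:
        Q = (theta[state] > 0.5).astype(float)   # visible binary actions Q_i(t)
        R = A @ Q                                # R_i(t): trust-weighted mass of action 1
        L = A @ (1.0 - Q)                        # L_i(t): trust-weighted mass of action 0
        up = beta[state] * R * (state < N - 1)   # right-jump rates beta_n R_i(t)
        down = beta[state] * L * (state > 0)     # left-jump rates  beta_n L_i(t)
        total = up.sum() + down.sum()
        if total == 0.0:                         # an absorbing state has been reached
            break
        t += rng.exponential(1.0 / total)        # exponential waiting time
        rates = np.concatenate([up, down])
        event = rng.choice(2 * K, p=rates / total)
        state[event % K] += 1 if event < K else -1
        history.append((t, theta[state].copy()))
    return history
```

Each iteration recomputes R_i(t) and L_i(t) from the currently visible actions, draws an exponential waiting time with the total event rate, and moves one agent one opinion level to the left or to the right.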
Then, in order to propose an analysis of the stochastic process introduced above, we may consider the mean-field approximation by replacing the transitions by their expectations. Then, the expected transition rate from state n to state n + 1 for K → ∞, is given by: β n K j=1 a ij E 1 (0.5,1] (X j (t)) = β n K j=1 a ij N n=N/2+1 v j,n (t). We have similar expression for transition between state n and n -1. III. STEADY STATE ANALYSIS Define by θ-= (θ 1 , . . . , θ 1 ) and θ+ = (θ N , . . . , θ N ) the states where all the agents in the network have an identical opinion, which correspond to the two extreme opinions. Proposition 1: Under Assumption 1, the continuous time Markov process X(t), with (1) as the infinitesimal generators corresponding to each agent, has exactly two absorbing states X(t) = θ+ and X(t) = θ-. Proof: Due to space limitation the proof is omitted. Considering the NIMFA approximation, we get that the dynamics of the opinion for an agent i are given by: vi,1 = -β 1 r i v i,1 + β 2 l i v i,2 vi,n = -β n r i v i,n + β n+1 l i v i,n+1 -β n l i v i,n + β n-1 r i v i,n-1 vi,N = -β N l i v i,N + β N -1 r i v i,N -1 (2) for all i ∈ K and 1 < n < N where l i = j∈K a ij E[1 -Q j ] = j∈K N/2 n=1 a ij v j,n , r i = j∈K a ij E[Q j ] = j∈K N n=N/2+1 a ij v j,n . (3) and n v i,n = 1. We can easily verify that X i = θ 1 , i.e. v i,1 = 1 for all i is an equilibrium for the above set of equations. When v i,1 = 1 for all i, v i,n = 0 for all n ≥ 2 and as a result, l i = τ i and r i = 0 for all i which gives vi,n = 0 for all i, n. Excluding the extreme solutions θ+ and θ-, the non-linearity of system (2) could give rise to the existence of interior rest points which are locally stable. Such rest points are referred to as metastable states in Physics. Metastability of Markov processes is precisely defined in [START_REF] Huisinga | Phase transitions and metastability in markovian and molecular systems[END_REF], where the exit times from these metastable states are shown to approach infinity when the network size is arbitrarily large. A. Rest points of the dynamics For a given r i = E[R i (t)], the equilibrium state v * i,n must satisfy the following conditions 0 = -β 1 ri τi v * i,1 + β 2 ( τi-ri τi )v * i,2 0 = -β n v * i,n + β n+1 ( τi-ri τi )v i,n+1 +β n-1 ri τi v * i,n-1 0 = -β N ( τi-ri τi )v * i,N + β N -1 ri τi v * i,N -1 (4) We can write any v * i,n based on v * i,1 , by simplification as v * i,n = β 1 β n r i τ i -r i n-1 v * i,1 . (5) As the sum of v i,n over n must be 1, we can solve for v * i,1 as v * i,1 = 1 N n=1 β1 βn ri τi-ri n-1 . (6) We then can use this relationship to construct a fixed-point algorithm that computes a rest-point of the global opinion dynamics for all users. Data: Number of agents K, the edge weights a i,j for all i, j ∈ K, initial values v i,n (0), convergence factor << 1, opinion levels N and the jump rates β n for all n ∈ {1, . . . , N }. Result: v(m) at the end of the loop is close to a fixed point of the opinion dynamics do m ← m + 1 ; Set r i (m) = k N n=N/2+1 a i,k v k,n (m) (7) for all i ∈ K ; Set v i,n (m) = β1 βn ri(m) τi-ri(m) n-1 N l=1 β1 β l ri(m) τi-ri(m) l-1 (8) for all n ∈ {1, . . . , N }, i ∈ K ; while ||v m -v m-1 || ≥ ; Algorithm 1: Algorithm to find a fixed point of the NIMFA. Additionally, we can obtain some nice properties on the relation between r i and v i,n by studying the following function. 
Lemma 1: Consider the function f : [0, 1] → [0, 1] defined as f (x) := N n=N/2+1 β1 βn x 1-x n-1 N n=1 β1 βn x 1-x n-1 (9) for all x ∈ [0, 1) and with f (1) = 1. We have that f (x) is a monotonically increasing continuous function and takes the values f (0) = 0, f (0.5) = 0.5 and lim x→1 f (x) = 1. Proof: Due to space limitation the proof is omitted. We can use f ( ri τi-ri ) to calculate the probability that an agent i will take the action 1, i.e., 6) and ( 5). N n=N/2+1 v * i,n = f ( ri τi-ri ) from ( IV. OPINION SPREADING A way to model generic interaction networks is to consider that they are the union of a number of clusters (see for instance [START_REF] Morȃrescu | Opinion dynamics with decaying confidence: Application to community detection in graphs[END_REF] for a cluster detection algorithm). Basically a cluster C is a group of agents in which the opinion of any agent in C is influenced more by the other agents in C, than agents outside C. When the interactions between cluster are deterministic and very weak, we can use a two time scale-modeling as in [START_REF] Martin | Time scale modeling for consensus in sparse directed networks with time-varying topologies[END_REF] to analyze the overall behavior of the network. In the stochastic framework and knowing only a quantized version of the opinions we propose here a development aiming at characterizing the majority actions in clusters. The notion of cluster can be mathematically formalized as follows. Definition 3 (Cluster): A subset of agents C ⊂ K defines a cluster when, for all i, j ∈ C and some λ > 0.5 the following inequality holds a ij ≥ λ τ i |C| . ( 10 ) The maximum λ which satisfies this inequality for all i, j ∈ C is called the cluster coefficient. For any given set of agents C ⊂ K, let us denote that ν C -= j∈C N/2 n=1 v j,n |C| , and ν C + = j∈C N n=N/2+1 v j,n |C| . Those values represent the expected fraction of agents within a set C with action 0 and 1, respectively. We also denote by ν C n , the average probability of agents in a cluster to have opinion θ n , i.e., ν C n = i∈C vi,n |C| . Now we can use the definition of a cluster given in [START_REF] Fortunato | Vector opinion dynamics in a bounded confidence consensus model[END_REF] to obtain the following proposition. Proposition 2: The dynamics of the average opinion probabilities in a cluster C ⊂ K can be written as: Proof: Due to space limitation the proof is omitted. The result above shows that instead of looking at individual opinions of agents inside a cluster, we can provide equation [START_REF] Martins | Continuous opinions and discrete actions in opinion dynamics problems[END_REF] for the dynamics of the expected fraction of agents in a cluster with certain opinions. νC 1 κ = -β 1 ν C 1 λν C + + (1 -λ)δ +β 2 ν C 2 λν C -+ (1 -λ)(1 -δ) νC n κ = -β n ν C n + β n+1 ν C n+1 λν C -+ (1 -λ)(1 -δ) +β n-1 ν C n-1 λν C + + (1 -λ)δ νC N κ = -β N ν C N λν C -+ (1 -λ)(1 -δ) +β N -1 ν C N -1 λν C + + (1 -λ)δ ( A. Action preservation One question that can be asked in this context is what are the sufficient conditions for the preservation of actions in a cluster, i.e. regardless of external opinions, agents preserve the majority action inside the cluster C for long time. At the limit, all the agents will have identical opinions corresponding to an absorbing state of the network but, clusters with large enough λ may preserve their action in metastable states (long time). 
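As a concrete illustration of Lemma 1 and of the fixed-point iteration in Algorithm 1 (equations (7) and (8)), the following Python sketch can be used; it is ours, and the names as well as the small numerical guard on τ_i - r_i are implementation choices rather than part of the paper.

```python
import numpy as np

def f_lemma1(x, beta):
    """f(x) of Lemma 1: probability of action 1 at a rest point, for x = r_i / tau_i."""
    beta = np.asarray(beta, dtype=float)
    N = len(beta)
    if x >= 1.0:
        return 1.0
    w = (beta[0] / beta) * (x / (1.0 - x)) ** np.arange(N)   # (beta_1/beta_n)(x/(1-x))^{n-1}
    return w[N // 2:].sum() / w.sum()

def nimfa_fixed_point(A, beta, v0, eps=1e-8, max_iter=10_000):
    """Fixed-point iteration of Algorithm 1 (equations (7) and (8))."""
    A = np.asarray(A, dtype=float)
    beta = np.asarray(beta, dtype=float)
    v = np.asarray(v0, dtype=float)            # v[i, n-1] approximates Pr(X_i = theta_n)
    K, N = v.shape
    tau = A.sum(axis=1)
    exponents = np.arange(N)
    for _ in range(max_iter):
        r = A @ v[:, N // 2:].sum(axis=1)                      # eq. (7)
        ratio = r / np.maximum(tau - r, 1e-12)                 # guard against division by zero (ours)
        w = (beta[0] / beta) * ratio[:, None] ** exponents[None, :]   # eq. (8), unnormalized
        v_new = w / w.sum(axis=1, keepdims=True)
        if np.abs(v_new - v).max() < eps:
            break
        v = v_new
    return v_new
```

Evaluating f also makes it easy to check numerically whether some x in (0.5, 1) with x = f(λx) exists for a given cluster coefficient λ, which is precisely the condition exploited below.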
Proposition 3: If ∃ x ∈ (0.5, 1) such that x = f (λx) than cluster C with coefficient λ preserves its action in a metastable state. If no such x exists, then the only equilibrium when the perturbation term δ = 1 from (11) is at ν C + = 1, and when δ = 0, the equilibrium is at ν C + = 0. Proof: Due to space limitation the proof is omitted. B. Propagation of actions In the previous subsection, we have seen that a cluster C can preserve its majority action regardless of external opinion if it has a sufficiently large λ. If there are agents outside C with some connections to agents in C, then this action can be propagated. Let τ C i = j∈C a ij denote the total trust of agent i in the cluster C. Let the cluster C be such that it has λ large enough so that νC + > 0.5 exists where νC + = f (λν C + ). Proposition 4: If the cluster C is preserving an action 1 with at least a fraction νC + of the population in C having action 1, then the probability of any agent i ∈ K \ C to chose action 1 at equilibrium is bounded as follows. f νC + τ C i τ i ≤ Pr(Q * i = 1) ≤ 1 -f νC - τ C i τ i . ( 12 ) where Q * i is the action taken by i when the system is in a non-absorbing NIMFA equilibrium state. Proof: Due to space limitation the proof is omitted. C. Control of opinion spreading Consider that an external entity, a firm for example, wants to control the actions of the agents in the network. In particular, the external agency wants to ensure that a cluster C preserves its initial action. Practically, a set of consumers in C might prefer the product of the firm, and this firm wants to ensue that other competing firms do not sway their opinions and convert their opinions. For this purpose, the firm tries to reinforce the opinion spread within C and from C to its followers, by strengthening the influence of agents in C. As an example, the party might pay the social network, such as Facebook or Twitter, to make messages, in the form of wall posts or tweets, to be more visible when the source of these messages are within C. This implies that a ij for any i ∈ K and j ∈ C, is now modified to a ij = a ij (1 + u), where u denotes the control parameter. Here, u = 0 represents the normal state of the network and u > 0 implies that agents in the network will view messages from agents in C more frequently or more easily (with a higher priority). Thus, the cluster coefficient of C is no longer the initial cluster coefficient λ, but the new coefficient is increased depending on u. Proposition 5: Let λ be the cluster coefficient of C. Then a control which strengthens the connections from C from a ij to a ij (1 + u) for all j ∈ C and i ∈ K results in a new cluster coefficient λ ≥ λ given by λ ≥ λ 1 + u 1 + λu (13) This results in a threshold on the control required for preservation as the smallest u * ≥ 0 satisfying x = f λ 1 + u * 1 + λu * x ( 14 ) for some x ∈ (0.5, 1]. Additionally, any follower of C has its probability of action 1 modified to f νC + τ C i (1 + u) τ i + τ C i u ≤ Pr(Q * i = 1) ≤ 1-f νC - τ C i (1 + u) τ i + τ C i u (15) Proof: Due to space limitation the proof is omitted. As λ ≤ 1, λ ≥ λ always, and we also have lim u→∞ λ = 1. This implies that as long as we apply a sufficiently large control u, any cluster C will be able to preserve its action (as a result of Corollary 1). Additionally, this also means that agents who are followers of C will now be influenced to a greater degree by C. V. 
NUMERICAL RESULTS For all simulations studied, unless otherwise mentioned, we take Θ = {0.2, 0.4, 0.6, 0.8}, β 1 = β 4 = 0.01 per unit of time and β 2 = β 3 = 0.02 per unit of time. For the first set of simulations, we take another graph structure, but with the same K as indicated in Figure 1. We first randomly make links between any i, j ∈ K with a probability of 0.05. When such a link exists, a ij = 1 and a ij = 0 otherwise. Then we construct a cluster C 1 , with the agents i = 1, 2, . . . , 40, and C 2 with agents i = 81, 82, . . . , 120. We also label agents 40 < i ≤ 60 as set B 1 and 60 < j ≤ 80 as set B 2 . To provide the relevant cluster structure, we made the edge weights a ij = 1 for all τi ≥ 0.444 for all 60 < i ≤ 80. Cluster C 1 |C 1 | = 40 Cluster C 2 |C 2 | = 40 Set B1 |B1| = 20 Set B2 |B2| = 20 • i, j ∈ C 1 or i, j ∈ C 2 , A. Propagation of actions We find that the largest x satisfying x = f (λ 1 x) is 0.95 and that satisfying x = f (λ 2 x) is 0.94. Therefore, if all agents in C 1 start with opinion 0.2 and all agents in cluster 2 start with opinion 0.8, we predict from proposition 3 that ν C1 Q i (t) for S = C 1 , C 2 , B 1 , B 2 . We see that C 1 and C 2 preserve their initial actions as given by proposition 3. We also see that as B 1 follows only C 1 , it's action is close to C 1 . As B 2 follows both C 1 and C 2 who have contradicting actions, it has a very mixed opinion which keeps changing randomly in time. Simulations of the continuous time Markov chain show that our theoretical results are valid even when the cluster size is 40. Figure 2 plots the population fraction of agents with action 1 within a certain set for one simulation. We look for this value in the clusters C 1 and C 2 as well as the sets B 1 and B 2 . C 1 and C 2 are seen to preserve their actions which are opposite to each other. Since B 1 has a significant trust in C 1 alone, the opinion of C 1 is propagated to B 1 . However, as B 2 trusts both C 1 and C 2 , its opinion is influenced by the two contradicting actions resulting in having some agents with action 1 and the rest with action 0. B. Control of opinion spreading We consider the same graph structure used in Fig. 2 illustrated in Fig. 1, but with Cluster C 2 having its first 25 agents removed. As a result, the cluster coefficient of this cluster becomes λ 2 = 0.636, which no longer allows for preservation of its action as seen in Figure 3a. From proposition 3, we find that a λ > 0.8 ensures preservation. Therefore, we introduce a control u = 1.5 which enhances the visibility of actions spread by agents in C 2 by a factor of 2.5, resulting in a ij = 2.5a ij for all j ∈ C, i ∈ K. This results in a new λ 2 ≥ 0.8 according to proposition 5 and verified by numerical calculation on the graph to be λ 2 = 0.814. As a result, we observe from Figure 3b that the cluster 2 action is preserved for a long duration. for a finite number of agents. Whereas, when this number becomes large enough, the stochastic system can enter into a quasi-stationary regime in which partial agreements are reached. This type of phenomenon has been observed in allto-all and cluster type topologies. Additionally, we have also studied the impact of an external entity which tries to control the actions of the users in a social network by manipulating the network connections. This can be interpreted as a company paying a social platform to make content from certain groups of agents more visible or frequent. 
We have shown how such a control can enable a community or cluster of agents to preserve or propagate its opinion better.

(11) where κ = (Σ_{i∈C} τ_i)/|C| and δ ∈ [0, 1].

Fig. 1: Structure of the graph. Any two agents in K may be connected with a 0.05 probability. All agents within a cluster are connected, and the arrows indicate directed connections.

ν^{C1}_- ≥ 0.95 and ν^{C2}_+ ≥ 0.94 in the metastable state. Additionally, applying Proposition 4 yields ν^{B1}_- ≥ f(0.95 × 0.714) = 0.85, ν^{B2}_- ≥ f(0.95 × 0.444) = 0.324 and ν^{B2}_+ ≥ f(0.94 × 0.444) = 0.315.

Fig. 2: Simulation of Σ_{i∈S} Q_i(t) for S = C1, C2, B1, B2. We see that C1 and C2 preserve their initial actions as given by Proposition 3. We also see that, as B1 follows only C1, its action is close to that of C1. As B2 follows both C1 and C2, which have contradicting actions, it has a very mixed opinion which keeps changing randomly in time.

Fig. 3: Average population with action 1 plotted vs. time for each set: (a) simulation with no control implemented; (b) simulation with u = 1 on cluster 2.

The edge weights a_ij = 1 were set for: (i) i, j ∈ C1 or i, j ∈ C2, making C1 and C2 clusters with coefficients λ1 = 0.833 and λ2 = 0.816 (for the particular random graph generated for this simulation); (ii) 40 < i ≤ 60 and 1 ≤ j ≤ 20, making agents in B1 trust C1 with τ^{C1}_i/τ_i ≥ 0.714 for all 40 < i ≤ 60; (iii) 60 < i ≤ 80 and 1 ≤ j ≤ 20 or 80 < j ≤ 120, making agents in B2 trust both C1 and C2, with τ^{C1}_i/τ_i, τ^{C2}_i/τ_i ≥ 0.444 for all 60 < i ≤ 80, as stated above.
29,344
[ "5857", "5210", "753572" ]
[ "185180", "185180", "100376" ]
01745260
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01745260v2/file/IFAC_ADHS.pdf
Irinel-Constantin Morȃrescu email: constantin.morarescu@univ-lorraine.fr Vineeth Satheeskumar Varma Lucian Bus email: lucian@busoniu.net Samson Lasaulce email: samson.lasaulce@lss.supelec.fr Space-time budget allocation for marketing over social networks Keywords: Social networks, hybrid systems, optimal control We address formally the problem of opinion dynamics when the agents of a social network (e.g., consumers) are not only influenced by their neighbors but also by an external influential entity referred to as a marketer. The influential entity tries to sway the overall opinion to its own side by using a specific influence budget during discrete-time advertising campaigns; consequently, the overall closed-loop dynamics becomes a linear-impulsive (hybrid) one. The main technical issue addressed is finding how the marketer should allocate its budget over time (through marketing campaigns) and over space (among the agents) such that the agents' opinion be as close as possible to a desired opinion; for instance, the marketer may prioritize certain agents over others based on their influence in the social graph. The corresponding space-time allocation problem is formulated and solved for several special cases of practical interest. Valuable insights can be extracted from our analysis. For instance, for most cases we prove that the marketer has an interest in investing most of its budget at the beginning of the process and that budget should be shared among agents according to the famous water-filling allocation rule. Numerical examples illustrate the analysis. INTRODUCTION The last decades have witnessed an increasing interest in the study of opinion dynamics in social networks. This is mainly motivated by the fact that people's opinions are increasingly influenced through digital social networks. Therefore, governmental institution but also private companies consider that marketing over social networks becomes a key tool for promoting new products or ideas. However, most of the existing studies focus on the analysis of models without control, i.e., they study the convergence, dynamical patterns or asymptotic configurations of the open-loop dynamics. Various mathematical models [START_REF] Degroot | Reaching a consensus[END_REF]Friedkin and Johnsen., 1990;[START_REF] Deffuant | Mixing beliefs among interacting agents[END_REF][START_REF] Hegselmann | Opinion dynamics and bounded confidence models, analysis, and simulation[END_REF][START_REF] Altafini | Consensus problems on networks with antagonistic interactions[END_REF][START_REF] Chowdhury | Continuous opinions and discrete actions in social networks: a multi-agent system approach[END_REF] have been proposed to capture more features of these complex dynamics. Empirical models based on in vitro and in vivo experiments have also been developed [START_REF] Davis | Understanding group behavior : Consensual action by small groups, volume 1, chapter Group decision making and quantitative judgments : A consensus model[END_REF][START_REF] Ohtsubo | Majority influence process in group judgment : Test of the social judgment scheme model in a group polarization context[END_REF][START_REF] Kerckhove | Modelling influence and opinion evolution in online collective behaviour[END_REF]. 
The emergence of consensus received a particular attention in opinion dynamics [START_REF] Axelrod | The dissemination of culture: A model with local convergence and global polarization[END_REF][START_REF] Galam | Towards a theory of collective phenomena: Consensus and attitude changes in groups[END_REF]. While some mathematical models naturally lead to consensus [START_REF] Degroot | Reaching a consensus[END_REF]Friedkin and Johnsen., 1990), others lead to network clustering [START_REF] Hegselmann | Opinion dynamics and bounded confidence models, analysis, and simulation[END_REF][START_REF] Altafini | Consensus problems on networks with antagonistic interactions[END_REF][START_REF] Morȃrescu | Opinion dynamics with decaying confidence: Application to community detection in graphs[END_REF]. In order to enforce consensus, some recent studies propose the control of one or a few agents, see [START_REF] Caponigro | Sparse feedback stabilization of multi-agent dynamics[END_REF]; [START_REF] Dietrich | Control via leadership of opinion dynamics with state and timedependent interactions[END_REF]. Beside these methods of controlling opinion dynamics towards consensus, we also find recent attempts to control the discretetime dynamics of opinions such that as many agents as possible reach a certain set after a finite number of influences [START_REF] Hegselmann | Optimal opinion control : The campaign problem[END_REF]. In [START_REF] Masucci | Strategic resource allocation for competitive influence in social networks[END_REF], the authors consider multiple influential entities competing to control the opinion of consumers under a game theoretical setting. However, this work assumes an undirected graph and a voter model for opinion dynamics resulting in strategies that are independent of the node centrality. On the other hand, [START_REF] Varma | Opinion dynamics aware marketing strategies in duopolies[END_REF] considers a similar competition with opinion dynamics over a directed graph and no budget constraints. In this paper, we consider a different problem that requires minimizing the distance between opinions and a desired value using a given control/marketing budget. Moreover, we assume that the maximal marketing influence cannot instantaneously shift the opinion of one individual to the desired value. Basically, we consider a continuous time opinion dynamics and we want to design a marketing strategy that minimizes the distance between opinions and the desired value after a given finite number of discrete-time campaigns under budget constraints. To solve this control design problem we write the overall closed-loop dynamics as a linear-impulsive system and we show that the optimal strategy is to influence as much as possible the most central/popular individuals of the network. (see [START_REF] Bonacich | Eigenvector-like measures of centrality for asymmetric relations[END_REF] for a formal definition of centrality). To the best of our knowledge our work is different from all the existing results on opinion dynamics control. Unlike the few previous works on the control of opinions in social networks, we do not control the state of the influencing entity. Instead, we consider that value as fixed and we control the influence weight that the marketer has on different individuals of the social network. By doing so, we emphasize the advantages of targeted marketing with respect to broadcasting strategies when budget constraints have to be taken into account. 
Moreover, we show that, although the individual control action u i (t k ) at time t k can be chosen in the interval [0, ū], the optimal choice is discrete: either 0 or ū. The rest of the paper is organized as follows. Section 2 formulates the opinion dynamics control problem under consideration. A useful preliminary result for solving a specific optimization problem with constraints is given in Section 3. To motivate our analysis, we emphasize in Section 4 the improvements that can be obtained by targeted advertising with respect to a uniform/broadcasting control. Section 5 contains the results related to the optimal control strategy. We first analyze the case when the campaign budget is given a priori and must be optimally partitioned among the network agents. Secondly, we look at the case when the campaign budget is unknown but the campaigns are distanced in time. Both cases point out that the optimal control contains only 0 or ū actions. These results motivate us to study in Section 6 the spatio-temporal distribution of the budget under the assumption that all the components of u(t k ) are either 0 or ū. Numerical examples and concluding remarks end the paper. PROBLEM STATEMENT We consider an entity (for example, a governmental institution or a company) that is interested in attracting consumers to a certain opinion. Consumers belong to a social network and we refer to any consumer as an agent. For the sake of simplicity, we consider a fixed social network over the set of vertices V = {1, 2, . . . , N } of N agents. In other words, we identify each agent with its index in the set V. To agent n ∈ V we assign a normalized scalar opinion x n (t) ∈ [0, 1] that represents its opinion level and can be interpreted e.g., as the probability for an agent to act as desired. We use x(t) = (x 1 (t), x 2 (t), . . . , x N (t)) to denote the state of the network at any time t, where x(t) ∈ X and X = [0, 1] N . In order to obtain a larger market share with a minimum investment, the external entity applies an action vector u(t k ) = (u 1 (t k ), . . . , u N (t k )) ∈ U i in marketing campaigns at discrete time instants t k ∈ T , k ∈ N. A given action therefore corresponds to a given marketing campaign aiming at influencing the consumer's opinion. The instants corresponding to the campaigns are known and are collected in the set T = {t 0 , t 1 , . . . , t M }. Between two consecutive campaigns, the consumer's opinion is only influenced by the other consumers of the network. We assume that t k -t k-1 = δ k ∈ [δ m , δ M ] where δ m < δ M are two fixed real numbers. Throughout the paper we refer to d ∈ {0, 1} as the desired opinion that the external entity wants to be adopted. We consider ∀i ∈ V the following dynamics:        ẋi (t) = N j=1 a ij (x j (t) -x i (t)), t ∈ [t k , t k+1 ) x i (t k ) = u i (t k )d + (1 -u i (t k ))x i (t - k ) , ∀k ∈ N, (1) where u i (t k ) ∈ [0, ū] with ū < 1, ∀i ∈ V and M k=0 N i=1 u i (t k ) ≤ B where B represents the total budget of the external entity for the marketing campaigns. Dynamics (1) can be rewritten using the collective variable X(t) = (d, x(t)) as: Ẋ(t) = -LX(t) X(t k ) = PX(t - k ) , (2) where L = 0 0 1,N 0 N,1 L , P = 1 0 1,N u(t k ) I N -diag(u(t k )) with diag(u(t k )) ∈ R N ×N being the diagonal matrix having the components of u(t k ) on the diagonal. Remark 1. It is noteworthy that: • L is a Laplacian matrix corresponding to a network of N + 1 agents. 
The first agent represents the external entity and is not connected to any other agent while the rest of the agents represents the consumers and are connected through the social network defined by the influence weights a ij . • P is a row stochastic matrix that can be interpreted as a Perron matrix associated with the tree having the external entity as a parent of all the other nodes. Consequently, without budget constraints, the network reaches, at least asymptotically, the value d. Several space-time control strategies can be implemented under the budget constraints. For instance, we can spend the same budget for each agent i.e., u i (t k ) = u j (t k ), ∀i, j ∈ V, we can also allocate the entire budget for specific agents of the network. Moreover, the budget can be spent either on few or many campaigns. Our objective is to design a space-time control strategy that minimizes the following cost function J T = N i=1 |x i (T ) -d| (3) for some T > t M , and we have the cost associated with the asymptotic opinion given by J ∞ = N i=1 lim t→∞ |x i (t) -d| (4) This can be interpreted as follows. If the entity (a governmental institution for example) is interested in convincing the public to buy some product or change their habits (practice sports or quit smoking for instance), it will try to move the asymptotic consensus value of the network as close as possible to the desired value, i.e. minimize J ∞ . In some other cases, like an election campaign which targets to get the opinions close to d within a finite time T , we will minimize J T . It is worth mentioning that between campaigns and after the last campaign the opinions evolve according to consensus dynamics. PRELIMINARIES We first state a very useful Lemma which will help us to find the optimal solutions for many sub-cases of our problem. Lemma 1. Given an optimization problem (OP) of the form minimize y∈R L C(y) subject to 0 ≤ y i ≤ ȳ < 1, L i=1 y i ≤ B. (5) where C(y) is a decreasing convex function in y i such that one of the following two conditions hold Case 1: ∀ i ∈ {1, . . . , L}, ∃g(y) ≥ 0 such that ∂C(y) ∂y i = -c i g(y); Case 2: ∂C(y) ∂y i = 1 1 -y i for all i ∈ {1, . . . , L}. Then, a solution y * to this OP is given by water-filling as follows y * R(i) =        ȳ if i ≤ B Lȳ B -ȳL B Lȳ if i = B Lȳ 0 otherwise (6) where R : {1, . . . , L} → {1, . . . , L} is any bijection for Case 2 and, a bijection satisfying c R(1) ≥ c R(2) ≥ • • • ≥ c R(L) . for Case 1. THE BROADCASTING CASE STUDY To emphasize the relevance of the problem under consideration, we will show that for some particular network topologies we can obtain a significant improvement of the revenue by using targeted marketing instead of broadcasting-based marketing in which the marketer allocates the same amount of resource to all the agents. First, we derive the optimal revenue that can be obtained by implementing a broadcasting strategy i.e., u i (t k ) = u j (t k ) α k , ∀i, j ∈ V. We suppose that the graph representing the social network contains a spanning tree. Let v be the right eigenvector of L associated with the eigenvalue 0 and satisfying v 1 N = 1. Therefore, in the absence of any control action, one has that lim t→∞ x(t) = v x(0)1 N x ∞ 0 . Let us also introduce the following notation: x ∞ k = lim t→∞ e -L(t-t k ) x(t k ) = v x(t k )1 N , ∀k ∈ N. Following (2) and using δ k = t k+1 -t k , D k = diag(u(t k )) one deduces that: x ∞ k+1 = v x(t k+1 )1 N = v u(t k+1 )d + (I N -D k+1 )x(t - k+1 ) 1 N = v u(t k+1 )d + (I N -D k+1 )e -Lδ k x(t k ) 1 N . 
Since v L = 0 N one has that v e -Lδ k = v and conse- quently one obtains that x ∞ k+1 -x ∞ k = v u(t k+1 )d -D k+1 e -Lδ k x(t k ) 1 N . (7) In the case of broadcasting one has u(t k ) = α k 1 N and D k = α k I N , where α k ∈ [0, ū] for all k ∈ {0, . . . , M }. Therefore, using v 1 N = 1, (7) becomes x ∞ k+1 -x ∞ k = α k+1 (d1 N -x ∞ k ), which can be equivalently rewritten as (d1 N -x ∞ k+1 ) = (1 -α k+1 )(d1 N -x ∞ k ). (8) Using (8) recursively one obtains that J ∞ (α) = (d1 N -x ∞ M ) = M =0 (1 -α )(d1 N -x ∞ 0 ). ( 9 ) where J B (α) denotes the cost associated with a broadcasting strategy using α k at stage t k . Proposition 1. The broadcasting cost J ∞ (α) is minimized by using the maximum possible investments as soon as possible, i.e. α k =        ū if k ≤ B N ū B -ūN B N ū if k = B N ū 0 otherwise (10) Proof. Minimizing J ∞ (α) under the broadcasting strategy assumption is equivalent to minimizing k+1 =0 (1 -α ). This is equivalent to minimizing C(α) = log k+1 =0 (1 -α ) and we have ∂C ∂α = - 1 1 -α (11) This results in an OP which satisfies the conditions to use Lemma 1 case 2. It is noteworthy that for u i ∈ [0, 1) one has that k+1 =0 (1 -α ) ≥ 1 - k+1 =0 α ≥ 1 - B N . (12) The last inequality in (12) comes from the broadcasting hypothesis u i (t ) = α , ∀i ∈ V which mean that the budget spent in the -th campaign is N • α . Therefore, the total budget for k + 2 campaigns is N k+1 =0 α and has to be smaller than B. Thus J = 1 N |(d1 N -x ∞ k+1 )| ≥ (1 - B N )1 N |(d1 N -x ∞ 0 )|. The interpretation of ( 12) is that for the broadcasting strategy the minimal cost J is obtained when the whole budget is spent in one marketing campaign (provided this is possible i.e., B ≤ N ū), otherwise the first inequality in (12) becomes strict meaning that J > (1 - B N )1 N |(d1 N -x ∞ 0 )|. Let us now suppose that the graph under consideration is a tree having the first node as root. Then, using a targeted marketing in which the external entity influences only the root, we will show that, under the same budget constraints, the cost J will be smaller. Indeed, for this graph topology one has v = (1, 0, . . . , 0) yielding x ∞ k = x 1 (t k )1 N . Moreover, the dynamics of x 1 (•) writes as: ẋ1 (t) = 0, t ∈ [t k , t k+1 ) x 1 (t k ) = u 1 (t k )d + (1 -u 1 (t k ))x 1 (t - k ) , ∀k ∈ N. (13) Therefore, x 1 (t k ) = u 1 (t k )d + (1 -u 1 (t k ))x 1 (t k-1 ) yielding d -x 1 (t k ) = (1 -u 1 (t k ))(d -x 1 (t k-1 )), which is equivalent to (8). As we have seen before, in the broadcasting strategy one has k+1 =0 α ≤ B N whereas targeting only the root, the constraint becomes k+1 =0 u 1 (t ) ≤ B. Therefore, for any given broadcasting strategy (u 1 , u 2 , . . . , u k ) there exists a strategy targeted on the root that consists of repeating N times (u 1 , u 2 , . . . , u k ). Doing so, one obtains (d1 N -x ∞ k+1 ) = k+1 =0 (1 -α ) N (d1 N -x ∞ 0 ). which leads to a much smaller cost J i.e., the strategy is more efficient. GENERAL OPTIMAL SPACE-TIME CONTROL STRATEGY First, we rewrite the optimal control problem as an optimization problem by treating the control u(t k ) as an N M dimensional vector to optimize. We denote u i,k = u i (t k ) to represent the control for agent i ∈ V at time t k . Then our problem can be rewritten as Minimize u∈R N M J T (u) Subject to 0 ≤ u i,k ≤ ū ∀i ∈ V, k ∈ {0, . . . , M }, and N i=1 M k=1 u i,k ≤ B (14) Here, J T (u) is a multilinear function in u. 
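For concreteness, the water-filling solution (6) of Lemma 1, and the time allocation (10) of Proposition 1 which applies the same rule over campaign indices, can be sketched as follows. This is our own Python illustration; the helper name water_fill and its argument conventions are ours.

```python
import numpy as np

def water_fill(c, budget, y_bar):
    """Water-filling rule (6): saturate the components with the largest gains c_i
    at y_bar, give the leftover budget to the next one, and zero to the rest."""
    c = np.asarray(c, dtype=float)
    L = len(c)
    y = np.zeros(L)
    remaining = min(budget, L * y_bar)      # the budget can never exceed L * y_bar
    for idx in np.argsort(-c):              # the bijection R, sorting by decreasing c_i
        if remaining <= 0.0:
            break
        y[idx] = min(y_bar, remaining)
        remaining -= y[idx]
    return y
```

For the broadcasting strategy of Proposition 1, one would call water_fill(np.arange(M, -1, -1), B / N, u_bar): earlier campaigns get the higher priority, so the whole per-agent budget B/N is spent as soon as possible, in agreement with (10).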
Before solving problem ( 14) we want to get further insights on the solution's structure, which will lead to important simplifications. Therefore, instead of solving the general optimization problem ( 14), we consider splitting our problem into time-allocation and spaceallocation. For a given time-allocation, i.e. if we know that for stage k a maximum budget of β k ≤ B has been allocated, we find the optimal control strategy for the k-th stage. Moreover, for long stage durations (i.e., t k+1 -t k large) and given temporal budget allocation (β 0 , . . . , β M ), we characterize the optimal space allocation of the budget. Based on these results, we propose a discrete-action spatio-temporal control strategy. Minimizing the per-stage cost In this section we consider that the budget β k for each campaign is a priori given, and optimize the corresponding |d1 N -x ∞ k |. Denote the budget for stage k by β k such that N i=1 u i (t k ) ≤ β k (15) The corresponding cost for the stage k is written as J ∞ k (u(t k )) = (d1 N -x ∞ k ) = |d - N i=1 v i x i (t k )| = |d - N i=1 v i (u i (t k )d + (1 -u i (t k ))x i (t - k ))| (16) We use γ i = v i |d -x i (t - k ) | to denote the gain by investing in agent i ∈ V. Define by R : V → V, a bijection which sorts the agents based on decreasing γ i , i.e. γ R(1) ≥ γ R(2) ≥ • • • ≥ γ R(N ) Proposition 2. The cost J ∞ k (u(t k )) is minimized by the fol- lowing investment profile u * R(i) (k) =        ū if i ≤ β k ū β k -ū β k ū if i = β k ū 0 otherwise (17) Proof. Due to space limitation the proof is omitted. Space allocation for long stage duration In the following we consider that a finite number of marketing campaigns with a priori fixed budget are scheduled such that t k+1 -t k is very large for each k ∈ {0, 1, . . . , M -1}. In this case, we can assume that x i (t - k+1 ) = x ∞ k for all i ∈ V and k ∈ {0, 1, . . . , M -1}. Under this assumption, we write x i (t - 1 ) = x ∞ 0 (u(t 0 )) = N i=1 v i (du i (t 0 ) + x i (t - 0 )(1 -u i (t k ))) (18) for any i ∈ V. Subsequently, we have x ∞ k (u(t 0 ), u(t 1 ), . . . , u(t k )) = N i=1 v i [du i (t k ) +x ∞ k-1 (u(t 0 ), . . . , u(t k-1 ))(1 -u i (t k )) (19) for all k ∈ {1, 2, . . . , M }. Our objective is to minimize J ∞ = x ∞ M (u(t 0 ), . . . , u(t M ) ) -d and this can be done using the proposition below. First, let us define S k : V → V a bijection such that S 0 = R and for all k ∈ {1, 2, . . . , M }, S k gives the agent index after sorting over v i , i.e., v S k (1) ≥ v S k (2) ≥ • • • ≥ v S k (N ) Proposition 3. Let the temporal budget allocation be given by β = (β 0 , . . . , β M ) such that M k=1 β k ≤ B and β k ≤ N ū. Then, the optimal allocation per agent minimizing the cost J(u) is given by u * S k (i) (k) =        ū if i ≤ β k ū β k -ū β k ū if i = β k ū 0 otherwise (20) Proof. Due to space limitation the proof is omitted. DISCRETE-ACTION SPACE-TIME CONTROL STRATEGY Motivated by the results in Propositions 2 and 3, in this section we consider that u i (t k ) ∈ {0, ū}, ∀i ∈ V, k ∈ N and B = K ū with K ∈ N given a priori. The objective is to numerically find the best space-time control strategy for a given initial state x 0 of the network. Algorithms Let us consider in turn the cases of short and long stages. In the short-stage case, given a time allocation consisting of the budgets β k = b k ū at each stage, Proposition 2 tells us how to allocate each stage budget optimally across the agents. Denote all possible budgets at one stage by B = {0, . . . , min{N, K}}. 
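Concretely, the per-stage rule of Proposition 2 amounts to water-filling the stage budget over the agents sorted by γ_i; a minimal sketch (ours, reusing the water_fill helper sketched earlier) is given below.

```python
import numpy as np

def stage_allocation(v, x_minus, d, beta_k, u_bar):
    """Proposition 2: water-fill the stage budget beta_k over the agents
    sorted by decreasing gains gamma_i = v_i * |d - x_i(t_k^-)|."""
    gamma = np.asarray(v) * np.abs(d - np.asarray(x_minus))
    return water_fill(gamma, beta_k, u_bar)   # helper from the earlier sketch
```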
A very simple algorithm is then to search in a brute-force manner all possible time allocations b = (b 0 , . . . , b M ) ∈ B M +1 , subject to the constraint k b k ≤ K. For each such vector b, we simulate the system from x 0 with dynamics (1) where the budget b k is allocated with Proposition 2, and we obtain a final state x F (b) = v x(t M )1 N (the infinite-time state of the network after the last campaign). We retain a solution with the best cost: min b |x 1,F (b) -d| where subscript 1 denotes the first agent (recall that the agents all have the same opinion at infinite time). Note that this cost is J ∞ /N ; we do not sum it over all the agents because this version is easier to interpret as a deviation of each agent from the target state. Furthermore, the simulation can be done in closed form, using the fact that x -(t k+1 ) = e -Lδ k x(t k ). The complexity of this search is O(N 3 (M +1)(min{N, K}+1) M +1 ), dominated by the exponential term. Therefore, this approach will only be feasible for small values of N or K, and especially of M . Considering now the long-stage case, we could still implement a similar brute-force search, but using dynamics (19) for interstage propagation and Proposition 3 for allocation over agents. However, now we can do better by taking advantage of the fact that for all k > 1, the opinions of all the agents reach identical values. Using this, we will derive a more efficient, dynamic programming solution to the optimal control problem: min b |x 1,F (b) -d| where the long-stage dynamics apply but by a slight abuse we keep it the same as in the previous section. To obtain the DP algorithm, define for k = 1, . . . , M, M + 1 =: F a new state signal z k = [y k , r k ] ∈ Z := [0, 1] × {0, . . . , K}. In this signal, y k = x ∞ k-1 , the opinion resulting from long-term propagation after the k -1th campaign, and r k is the remaining budget to apply (we will start from r 0 = K). Here, F is associated with the infinite-time state of the network. We will compute a value function V k (z k ), representing the best cost attainable from stage k, if the agent state is y k and budget r k remains: V F (z F ) = |x F -d|, ∀r F V k (z k ) = min min{r k ,N } b k =0 V k+1 (g(z k , b k )), k = M, . . . , 1 Here, the dynamics g : Z × B → Z are given by: At stage 0, special dynamics g 0 : X × B → Z apply, because the initial state of the network cannot be represented by a single number: Once V k is available, an optimal solution is found by a forward pass, as follows: y k+1 = v x(t k ), r k+1 = r k -b k where x i (t k ) = u i (b k )d + (1 -u i (b k )) y 1 = v x(t 0 ), r 1 = K -b 0 where x i (t 0 ) = u i (b 0 )d + (1 -u i (b 0 ))x i, b * 0 = arg min min{K,N } b0=0 V 1 (g 0 (x 0 , b 0 )), z * 1 = g 0 (x 0 , b * 0 ) b * k = arg min min{r k ,N } b k =0 V k+1 (g(z * k , b k )), z * k+1 = g(z * k , b * k ) for k = 1, . . . , M and the optimal cost of the overall solution is simply the minimum value at the first step. To implement this algorithm in practice, we will discretize the continuous state y into Y points, and interpolate the value function on the grid formed by these points, see [START_REF] Bus ¸oniu | Approximate dynamic programming with a fuzzy parameterization[END_REF] for details. The complexity of the backward pass for value function computation is O(M Y (min{N, K}+1)N ) (we disregard the complexity of the forward pass since it is much smaller). To develop an intuition, take the case N < K; then the algorithm is quadratic in N and linear in M and Y . 
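A compact prototype of this dynamic program is sketched below; it is our own code, not the authors'. The value function is stored on a regular grid over y and interpolated linearly with np.interp, which replaces the fuzzy parameterization cited above, and the agent allocations use the orderings of Propositions 2 and 3 as described.

```python
import numpy as np

def dp_long_stage(v, x0, d, K_budget, M, u_bar, Y=101):
    """Backward value iteration for the long-stage, discrete-action budget allocation.

    v        : left eigenvector of the Laplacian (consensus weights), summing to 1
    x0       : initial opinions of the N agents
    d        : desired opinion (0 or 1)
    K_budget : total number of u_bar-sized budget units
    M        : index of the last campaign (campaigns 0, ..., M)
    Y        : number of grid points discretizing y in [0, 1]
    """
    v = np.asarray(v, dtype=float)
    N = len(v)
    grid = np.linspace(0.0, 1.0, Y)
    order_v = np.argsort(-v)                       # Proposition 3: most central agents first

    def propagate(y, b):
        """One long stage: spend b units of u_bar on the b most central agents."""
        u = np.zeros(N)
        u[order_v[:b]] = u_bar
        x = u * d + (1.0 - u) * y                  # y is the common opinion before the campaign
        return float(v @ x)                        # consensus value reached before the next one

    # V[k][iy, r]: best cost from stage k, opinion grid[iy], r budget units left.
    V = [np.empty((Y, K_budget + 1)) for _ in range(M + 2)]
    V[M + 1] = np.abs(grid[:, None] - d) * np.ones((1, K_budget + 1))
    for k in range(M, 0, -1):
        for iy, y in enumerate(grid):
            for r in range(K_budget + 1):
                best = np.inf
                for b in range(min(r, N) + 1):
                    y_next = propagate(y, b)
                    val = np.interp(y_next, grid, V[k + 1][:, r - b])
                    best = min(best, val)
                V[k][iy, r] = best

    # Stage 0 uses the true (heterogeneous) initial opinions and Proposition 2.
    x0 = np.asarray(x0, dtype=float)
    gamma_order = np.argsort(-(v * np.abs(d - x0)))
    best0 = np.inf
    for b in range(min(K_budget, N) + 1):
        u = np.zeros(N)
        u[gamma_order[:b]] = u_bar
        y1 = float(v @ (u * d + (1.0 - u) * x0))
        best0 = min(best0, np.interp(y1, grid, V[1][:, K_budget - b]))
    return best0
```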
This allows us to apply the algorithm to much larger problems than the brute-force search above. Finally, note that in principle we could develop such a DP algorithm for the short-stage problem, but there we cannot condense the network state into a single number. Each agent state would have to be discretized instead, leading to a memory and time complexity proportional to Y N , which makes the algorithm unfeasible for more than a few agents. Numerical results We begin by evaluating the brute-force algorithm on a smallscale problem with short stages. Then, we move to the longstage case, where for the same network we compare DP and the brute-force method, confirming that they get the same result. Consider N = 15 agents connected on the graph from Figure 1, where the size of the node corresponds to its centrality. There are 4 stages, corresponding to M = 3, and the budget K = 15 = N . The initial states of the agents are random. For a short stage length δ k = 0.5 ∀k, the brute-force approach gets the results from Figure 2. The final cost (each individual agent's difference from the desired opinion) is 0.2485. Table 1, left shows the agents influenced at each stage. Reexamining Figure 1, we see that most of these agents have a large centrality, which is also the reason for which the algorithm selects them. An exception is agent 6 which has a low centrality, but is still influenced at many stages as x 6 (0) ≈ 0.1. We take now the same problem and make a single change: the stages become long (i.e., t k+1 -t k → ∞). We apply DP, with a discretization of y into 10 points. The results are in Figure 3, with the specific agents being controlled shown on the right side of Table 1. Note the solution is different from the short-stage case, and the final cost is 0.2553, slightly worse, which indicates that giving up the fine-grained control of the agents over time leads to some losses, but they are small. To evaluate the impact of function approximation (interpolation), we also run the brute-force search, since in this problem it is still feasible. It gives the same strategy and cost as DP, so we do not show the result. Note that unlike before, agent 6 is only influenced at k = 0 as the stage durations are long and its opinion value plays a role only at the first stage. CONCLUSIONS In its full generality, the problem of space-time budget allocation problem over a social network is seen to be non-trivial. However, it can be solved in several special cases of practical interest. If for every marketing campaign, the budget is allocated uniformly over the agents, the problem becomes a pure time budget control and can be solved. On the other hand, for a given time budget control, the problem becomes a pure space problem and the optimal way of allocating the budget is proved to be a water-filling allocation policy. Thirdly, if one goes for a binary budget allocation i.e., the marketer either allocates a given amount of budget to an agent or nothing, the space-time budget allocation problem can be solved by using dynamic programming-based numerical techniques. Numerical results illustrate how the available budget should be used by the marketer to reach its objective in terms of desired opinion for the network. This work was supported by projects PEPS INS2I IODINE and PEPS S2IH INS2I YPSOC funded by the CNRS . y k is the network state after campaign k, in which the agent allocations u i (b k ) are computed by distributing budget b k with Proposition 3. 
0, and, differently from the other steps, u i (b 0 ) is found with Proposition 2.

Figure 1. Small-scale graph.

Figure 2. Results for short stages. The bottom plot shows the budget allocated by the algorithm at each stage. The top plot shows the opinions of the agents, with an additional, long stage converging to the average opinion (so the last stage duration is not to scale). The circles indicate the opinions right before applying the control at each stage; note the discontinuous transitions of the opinions after control.

Figure 3. Results for long stages. The continuous opinion dynamics is plotted for t ∈ [t_k, t_k + 25) per stage k, which is sufficient to observe the long-stage behavior, i.e., the convergence of opinions of the agents.

Table 1. Agents influenced in each campaign. Left: short stages. Right: long stages.

Short stages                Long stages
Campaign   Agents           Campaign   Agents
0          3,5,6,7,8,14     0          3,6,7,8
1          3,6,7            1          2,3,7,9
2          3,6,7,9          2          2,3,7,9
3          3,9              3          2,3,9
28,896
[ "5210", "5857", "933138", "1068236" ]
[ "185180", "185180", "43046", "1289" ]
01756120
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756120v2/file/main_v2.pdf
Guangshuo Chen email: guangshuo.chen@inria.fr Sahar Hoteit email: sahar.hoteit@u-psud.fr Aline Carneiro Viana Marco Fiore email: marco.fiore@ieiit.cnr.it Carlos Sarraute Enriching Sparse Mobility Information in Call Detail Records Keywords: Call Detail Records, spatiotemporal trajectories, data sparsity Call Detail Records (CDR) are an important source of information in the study of diverse aspects of human mobility. The accuracy of mobility information granted by CDR strongly depends on the radio access infrastructure deployment and the frequency of interactions between mobile users and the network. As cellular network deployment is highly irregular and interaction frequencies are typically low, CDR are often characterized by spatial and temporal sparsity, which, in turn, can bias mobility analyses based on such data. In this paper, we precisely address this subject. First, we evaluate the spatial error in CDR, caused by approximating user positions with cell tower locations. Second, we assess the impact of the limited spatial and temporal granularity of CDR on the estimation of standard mobility metrics. Third, we propose novel and effective techniques to reduce temporal sparsity in CDR by leveraging regularity in human movement patterns. Tests with real-world datasets show that our solutions can reduce temporal sparsity in CDR by recovering 75% of daytime hours, while retaining a spatial accuracy within 1 km for 95% of the completed data. cellular networks, mobility, movement inference. Introduction Urbanization challenges the development and sustainability of city infrastructures in a variety of ways, and telecommunications networks are no exception. Understanding human habits becomes essential for managing the available resources in complex smart urban environments. Specifically, a number of network-related functions, such as paging [START_REF] Zang | Mining call and mobility data to improve paging efficiency in cellular networks[END_REF], caching [START_REF] Lai | Supporting user mobility through cache relocation[END_REF], dimensioning [START_REF] Paul | Understanding traffic dynamics in cellular data networks[END_REF], or network-driven location-based recommending systems [START_REF] Zheng | Mining interesting locations and travel sequences from gps trajectories[END_REF] have been shown to benefit from insights on movements of mobile network subscribers. More generally, the investigation of human mobility pattern has attracted a significant attention across disciplines [START_REF] González | Understanding individual human mobility patterns[END_REF][START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF][START_REF] Song | Limits of Predictability in Human Mobility[END_REF][START_REF] Iovan | Moving and Calling: Mobile Phone Data Quality Measurements and Spatiotemporal Uncertainty in Human Mobility Studies[END_REF][START_REF] Ficek | Inter-Call Mobility model: A spatio-temporal refinement of Call Data Records using a Gaussian mixture model[END_REF]. Motivation: Human mobility studies strongly rely on actual human footprints, which are usually provided by spatiotemporal datasets, as a piece of knowledge to investigate human mobility patterns. In this context, using specialized spatiotemporal datasets such as GPS logs seems to be a direct solution, but there is a huge overhead of collecting such a detailed dataset at scale. Hence, Call Detail Records (CDR) have been lately considered as a primary source of data for large-scale mobility studies. 
CDR contain information about when, where and how a mobile network subscriber generates voice calls and text messages, and are collected by mobile network operators for billing purposes. These records usually cover large populations [START_REF] Naboulsi | Large-scale Mobile Traffic Analysis: a Survey[END_REF], which makes them a practical choice for performing large-scale human mobility analyses. CDR can be regarded as footprints of individual mobility and can thus be used to infer visited locations, to learn recurrent movement patterns, and to measure mobility-related features. Despite the significant benefits that CDR bring to human mobility analyses, an indiscriminate use of CDR may question the validity of research conclusions. Indeed, CDR have limited accuracy in the spatial dimension (as the user's location is known at a cell sector or in a base station level) and the temporal dimension (since the device's position is only recorded when it sends or receives a voice call or text message). This is a severe limitation, as a cell (sector) typically spans thousands of square meters at least, and even a very active mobile network subscriber only generates a few tens of voice or text events per day. Overall, CDR are characterized by spatiotemporal sparsity, and understanding whether and to what extent such sparsity affects mobility studies is a critical issue. Existing studies and limitations: A few previous works have investigated the validity of mobility studies based on CDR. An influential analysis [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF] observed that using CDR allows to correctly identify popular locations that account for 90% of each subscriber's activity; however, biases may arise when measuring individual human mobility features. Works such as [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF] or the later [START_REF] Zhang | Exploring human mobility with multi-source data at extremely large metropolitan scales[END_REF] discussed biases introduced by the incompleteness of positioning information, i.e., the fact that CDR do not capture every location a user has travelled through. Nevertheless, another important bias of CDR, caused by the use of cell tower locations of mobile network subscribers in their footprints instead of their actual positions, has been overlooked in the literature. Another open research problem is that of completing spatiotemporal gaps in CDR. The most intuitive solution is to consider that the location in an entry of CDR stays representative for a time interval period (e.g., one hour) centered on the actual event timestamp [START_REF] Song | Limits of Predictability in Human Mobility[END_REF][START_REF] Jo | Spatiotemporal correlations of handset-based service usages[END_REF]. So far and to the best of our knowledge, no more advanced solution has been proposed in the literature to fill the spatiotemporal gaps in CDR. Our work and contributions: In this paper, we explore the following research questions. First, we investigate how the spatiotemporal sparsity of CDR affects the accuracy and incompleteness of mobility information, by leveraging CDR and cell tower deployments in metropolitan areas. Second, we evaluate the biases caused by such spatiotemporal sparsity in identifying important locations and measuring individual movements. Third, we study the capability of CDR of locating a user continuously in time, i.e., the degree of completeness of the data. 
Answering these questions leads to the following main contributions: • We show that the geographical shifts caused by mapping user locations to cell tower positions are below 1 kilometer in most cases (i.e., 85%-95% in the entire country and over 99% in the metropolitan areas of France), and that the median shift is around 200-500 meters (varying across cellular operators). This result substantiates the validity of many large-scale analyses of human mobility that employ CDR. • We confirm previous findings in the literature regarding the capability of CDR to model individual movement patterns: (1) CDR are of limited suitability for assessing the spread of human mobility and for studying short-term mobility patterns; (2) CDR yield enough details to detect significant locations in users' visiting patterns and to estimate the ranking among such locations. • We implement different techniques for CDR completion proposed in the literature and assess their quality against ground-truth GPS data. Our evaluation sheds light on the quality of the results provided by each approach. • We propose original CDR completion approaches that outperform existing ones, and carry out extensive tests of their performance with substantial real-world datasets collected by mobile network operators and mobility tracing initiatives. Validations against ground-truth movement information of individual users show that, on average, our proposed adaptive techniques achieve an increased temporal completion of CDR data (75% of daytime hours) while retaining significant spatial accuracy (errors below 1 km for 95% of the completed time). Compared with the most common proposal in the literature, our best adaptive approach improves accuracy by 5% and completion by 50%. The rest of the paper is organized as follows. Related works are introduced in Sec. 2. In Sec. 3, we present the datasets used in our study. In Sec. 4, we introduce and explore the biases of using CDR for human mobility analyses. In Sec. 5, we discuss the rationale for CDR completion and the errors introduced by common approaches from the literature. In Sec. 6 and 7, we describe original CDR completion solutions that achieve improved accuracy during nighttime and daytime, respectively. Finally, Sec. 8 concludes the paper. Related works Our work aims at measuring and evaluating possible biases induced by the use of CDR. Understanding whether and to what extent these biases affect human mobility studies is a subject that has been only partly addressed. The early paper by Isaacman [START_REF] Isaacman | Ranges of human mobility in los angeles and new york[END_REF] unveiled that using CDR as positioning information may lead to a distance error within 1 km, compared to ground-truth collected from 5 users. In a seminal work, Ranjan et al. [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF] showed that CDR are capable of identifying important locations, but that they can bias results when more complex mobility metrics are considered; the authors leveraged CDR of very active mobile network subscribers as ground-truth. In our previous study [START_REF] Hoteit | Filling the gaps: On the completion of sparse call detail records for mobility analysis[END_REF], we confirmed these observations using a GPS dataset encompassing 84 users. 
In the present work, we confirm the observation in [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF], and push them one step further by also considering the spatial bias introduced by CDR. For the sake of completeness, we mention that results are instead more promising when mobility is constrained to transportation networks: Zhang et al. [START_REF] Zhang | Exploring human mobility with multi-source data at extremely large metropolitan scales[END_REF] found CDR-based individual trajectories to match reference information from public transport data, i.e., GPS logs of taxis and buses, as well as subway transit records. Also relevant to our study are attempts at mitigating the spatiotemporal sparsity of CDR through completion techniques. The legacy approach in the literature consists in assuming that a user remains static from some time before and after each communication activity. The span of the static period, which we will refer to as temporal cell boundary hereinafter, is a constant system parameter that is often fairly arbitrary [START_REF] Jo | Spatiotemporal correlations of handset-based service usages[END_REF][START_REF] Hoteit | Filling the gaps: On the completion of sparse call detail records for mobility analysis[END_REF]. In this paper, we extend previously proposed solutions [START_REF] Hoteit | Filling the gaps: On the completion of sparse call detail records for mobility analysis[END_REF][START_REF] Chen | Towards an adaptive completion of sparse call detail records for mobility analysis[END_REF], and introduce two adaptive approaches to complete subscribers' trajectories inferred from CDR. Datasets We leverage two types of datasets in our study. Coarse-grained datasets are typical CDR data and feature significant spatiotemporal sparsity as well as user locations mapped to cell tower positions. Fine-grained datasets describe the mobility of the same user populations in the coarse-grained datasets with a much higher level of details and spatial accuracy. The coarse-grained datasets are treated as CDR in our experiments, while the corresponding fine-grained datasets are used as ground-truth to validate the results. We have access to one coarse-grained (CDR) and three fine-grained (Internet flow, MACACO, and Geolife) datasets. The CDR and Internet flow datasets share the same set of subscribers, and thus represent a readily usable pair of coarse-and fine-grained datasets. Coarse-grained counterparts of the MACACO and Geolife datasets are instead artificially generated, by downsampling the original fine-grained data. The exact process is detailed in Sec. 3.5. As a result, we have three pairs of fine-and coarse-grained datasets. The following sections describe each dataset in detail. CDR coarse-grained dataset This dataset consists of actual Call Detail Records (CDR), i.e., time-stamped and geo-referenced logs of network events associated to voice calls placed or received by mobile network subscribers. Specifically, each record contains the hashed identifiers of the caller and the callee, the call duration in seconds, the timestamp for the call time and the location of the cell tower to which the caller's device is connected to when the call was first started. The CDR are collected by a major cellular network operator. They capture the communication activities of 1.6 million of users over a consecutive 3-month period in 20151 , resulting in 681 million CDR in the selected period of study. 
We carry out a preliminary analysis of the CDR dataset, by extracting the experimental statistical distributions of the inter-event time (i.e., the time between consecutive events). These distributions will be later leveraged in Sec. 3.5 to downsample the fine-grained datasets. The resulting cumulative distribution functions (CDF) are shown, for different hours of the day, in Fig. 1. We observe that a majority of events occur at a temporal distance of a few minutes, but a non-negligible amount of events are spaced by hours. This observation confirms results in the literature on the burstiness of human digital communication activities, with rapidly occurring events separated by long periods of inactivity [START_REF] Barabasi | The origin of bursts and heavy tails in human dynamics[END_REF]. The curves in Fig. 1 allow appreciating the longer inter-event times during low-activity hours (e.g., midnight to 6 am) that become progressively shorter during the day. Internet flow fine-grained dataset This dataset is composed of mobile Internet session records, termed flows in the following. These records are generated and stored by the operator every time a mobile device establishes a TCP/UDP session for certain services (i.e., Facebook, Google Services, WhatsApp etc). Each flow entry contains the hashed device identifier, the type of service, the volume of exchanged upload and download data, the timestamps denoting the starting and ending time of the session, and the location of the cell tower handling the session. The dataset refers to two-day period consisting of a Sunday and a Monday in 2015. In each day, the data covers a constant time interval, i.e., from 10 am to 6 pm. The flows in the Internet flow dataset have a considerably higher time granularity than the original CDR. Namely, at least one flow (i.e., one location) is provided within every 20 minutes, for all users. The statistical distribution of the per-user inter-flow time is shown in Fig. 2(a). We note that in 98% of cases, the inter-event time is less than 5 minutes, and in less than 1% of cases, the inter-event time is higher than 10 minutes. We also plot in Fig. 2(b) the CDF of the number of flows (solid lines) and CDR (dashed lines) for each user appearing in both datasets: the number of events per user in the Internet flow case is more than two orders of magnitude larger than that observed in the CDR case. We conclude that the Internet flows represent a suitable fine-grained dataset that can be associated to the coarse-grained CDR dataset. Tab. 1 summarizes the number of users in the Internet flow dataset. In particular, the over 10K and 14K subscribers recorded on Sunday and Monday, respectively, are separated into two similarly sized categories based on their CDR as follows: • Rare CDR users are not very active in placing or receiving voice calls and thus have limited records in the CDR dataset. As in [START_REF] Song | Limits of Predictability in Human Mobility[END_REF], we use the threshold of 0.5 event/hour below which the user is considered to belong to this category. • Frequent CDR users are more active callers or callees and have more than 0.5 event/hour in the CDR dataset. This distinction will be leveraged later on in our performance evaluation. 
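To make this preliminary analysis concrete, the sketch below shows one way to compute the hourly inter-event time distributions and the rare/frequent split (threshold of 0.5 events/hour, as above) from a list of CDR entries. It is an illustration rather than the operator's actual pipeline; the record fields (user_id, timestamp) and the length of the observation window are placeholder assumptions.

```python
from collections import defaultdict

# Hypothetical CDR records: hashed user identifier and UNIX timestamp (seconds).
cdr = [
    {"user_id": "u1", "timestamp": 1_430_000_000},
    {"user_id": "u1", "timestamp": 1_430_000_600},
    {"user_id": "u2", "timestamp": 1_430_010_000},
]

OBSERVATION_HOURS = 90 * 24  # assumed ~3-month observation window

def inter_event_times_by_hour(records):
    """Per-user inter-event times, grouped by the hour of day of the first event."""
    per_user = defaultdict(list)
    for r in records:
        per_user[r["user_id"]].append(r["timestamp"])
    by_hour = defaultdict(list)
    for times in per_user.values():
        times.sort()
        for t0, t1 in zip(times, times[1:]):
            by_hour[(t0 // 3600) % 24].append(t1 - t0)  # hour of day (UTC)
    return by_hour

def split_rare_frequent(records, threshold=0.5, hours=OBSERVATION_HOURS):
    """'Rare' users generate fewer than `threshold` events per hour on average."""
    counts = defaultdict(int)
    for r in records:
        counts[r["user_id"]] += 1
    rare = {u for u, c in counts.items() if c / hours < threshold}
    return rare, set(counts) - rare

print(dict(inter_event_times_by_hour(cdr)))
print(split_rare_frequent(cdr))
```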
MACACO fine-grained dataset This dataset is obtained through an Android mobile phone application, MACACOApp 2 , developed in the context of the EU CHIST-ERA MACACO project [START_REF]EU CHIST-ERA Mobile context-Adaptive CAching for COntent-centric networking (MACACO) project[END_REF]. The application collects data related to the user's digital activities, such as used mobile services, generated uplink/downlink traffic, available network connectivity, and visited GPS locations. These activities are logged with a fixed periodicity of 5 minutes. We remark that this sampling approach differs from those employed by popular GPS tracking projects, such as MIT Reality Mining [START_REF] Eagle | Reality mining: Sensing complex social systems[END_REF] or GeoLife [START_REF] Zheng | Mining interesting locations and travel sequences from gps trajectories[END_REF], where users' positions are sometimes irregularly sampled. Geolife fine-grained dataset This is the latest version of the Geolife dataset [START_REF] Zheng | Mining interesting locations and travel sequences from gps trajectories[END_REF], which provides timestamped GPS locations of 182 individuals, mostly in Beijing [START_REF] Zheng | Mining interesting locations and travel sequences from gps trajectories[END_REF]. The dataset spans a three-year time period, from April 2007 to August 2012. Unfortunately, the Geolife dataset is often characterized by large temporal gaps between subsequent data records. As a result, not all users present a number of locations or a mobility level sufficient for our analysis. We thus select users based on the criterion that the entropy rate of each individual's data points falls below the theoretical maximum entropy rate, the same criterion used in [START_REF] Smith | A refined limit on the predictability of human mobility[END_REF] to select Geolife users for analyzing individual human mobility. Generating coarse-grained equivalents for MACACO and Geolife We do not have access to CDR datasets for the users in the MACACO or Geolife datasets. We thus generate CDR-equivalent coarse-grained datasets, by leveraging the experimental distributions of the inter-event time in the CDR dataset (shown in Fig. 1, cf. Sec. 3.1). Specifically, we downsample the MACACO and Geolife datasets so that the inter-event times match those in the experimental distributions. To this end, we first randomly choose one GPS record of the user as the seed CDR entry. We then randomly choose an inter-event time value from the distribution of the corresponding hour of the day, and use this interval to sample the second GPS record for the same user, mimicking a new CDR entry. We repeat this operation through the whole fine-grained trajectories of all users, and obtain datasets of downsampled GPS records that follow the actual inter-event time distributions of CDR. Note that tailoring the inter-event distribution to a specific hour of the day allows us to take into account the daily variability of CDR sampling. Also, upon downsampling, we filter out users who have an insufficient number of records, i.e., users with fewer than 30 records per day on average or less than 3 days of activity. The final CDR-like coarse-grained versions of the MACACO and Geolife datasets contain 32 and 42 users, respectively. Summary By matching or downsampling the original data, we obtain three combinations of coarse-grained and fine-grained datasets for the same sets of users. Fig. 3 outlines them. An important remark is that, as already mentioned in Sec. 3.2, the Internet flow dataset only covers working hours, from 10 am to 6 pm. As a result, the first data combination is well suited to the investigation of CDR completion during daytime. The relevant analysis is presented in Sec. 7. The second and third data combinations, issued from the MACACO and Geolife datasets, cover instead all times. 
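The downsampling procedure described above can be sketched as follows. This is an illustrative, forward-only version: the seed record is drawn at random and the trace is then traversed forward; the hourly gap samples would be the empirical inter-event times extracted from the CDR dataset, and the data layout is an assumption of this example.

```python
import bisect
import random

def downsample_trace(gps_trace, inter_event_by_hour, rng=None):
    """Turn a fine-grained GPS trace into a CDR-like sequence of records.

    gps_trace: list of (timestamp, lat, lon), sorted by timestamp.
    inter_event_by_hour: dict mapping hour of day -> list of observed
        inter-event times (seconds), e.g. extracted from the real CDR data.
    """
    rng = rng or random.Random(0)
    if not gps_trace:
        return []
    times = [t for t, _, _ in gps_trace]
    i = rng.randrange(len(gps_trace))     # seed record, mimicking a first CDR entry
    sampled = [gps_trace[i]]
    t = times[i]
    while True:
        gaps = inter_event_by_hour.get((int(t) // 3600) % 24)
        if not gaps:
            break
        t += rng.choice(gaps)             # draw the next inter-event time
        if t > times[-1]:
            break
        j = bisect.bisect_left(times, t)  # GPS fix closest to the sampled instant
        if j > 0 and t - times[j - 1] <= times[j] - t:
            j -= 1
        sampled.append(gps_trace[j])
    return sampled
```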
We thus employ them to overcome the limitations of the CDR and Internet flow pair, and to study CDR completion during night hours. Details are provided in Sec. 6. Biases in CDR-based mobility analyses Before delving into CDR completion, we present an updated analysis of the suitability of CDR data for the characterization of human mobility. Indeed, as anticipated in Sec. 1, CDR are typically sparse in space and time, which may affect the validity of results obtained from CDR mining. Cell tower locations In most CDR datasets, the position information is actually represented by the location of the cell tower handling the corresponding communication. Hence, a shift from the user's actual location to the cell tower location exists in every CDR entry. Such a shift may impact the accuracy of individual mobility measurements. Usually, CDR are collected in metropolitan areas. In this case, the precision of the human locations provided by CDR is related to the local deployment of base stations. Fig. 4 shows the deployment of cell towers in the metropolitan area where our CDR dataset was collected. The presence of cell towers is far from uniform, with a higher density in downtown areas, where a cell tower covers an approximately 2 km² area on average: in these cases, the cell coverage grants a fair granularity in the localization of mobile network subscribers. The same may not be true for cells in the city outskirts, which cover areas of several tens of km². We evaluate how the cell deployment can bias human mobility studies. To this end, we first extract 718,987 GPS locations in mainland France from the MACACO dataset (the study focuses on the area within the latitude and longitude ranges (43.005, 49.554) and (-1.318, 5.999), respectively). Among these locations, 74% are collected from the major metropolitan areas in France, including Paris Region, Lyon, and Toulouse. We then extract the cell tower locations of the four major cellular network operators in France (i.e., Orange, SFR, Free, and Bouygues) from open government data [START_REF]France Open Data[END_REF]. Fig. 5(a) shows the CDF of the distance between each GPS location in the MACACO dataset and its nearest cell tower. We observe that most of the locations have a distance below 1 km to their nearest cells (i.e., 95% for Orange, 91% for SFR, 86% for Free, and 91% for Bouygues). Nevertheless, when we focus on the metropolitan areas, as shown in Fig. 5(b), almost all the shifts (i.e., over 99%) are below 1 km, and all the operators have their median shifts around 200-500 meters. This indicates that the shifts above 1 km are all observed in rural areas. Still, most of the shifts are higher than 100 meters, indicating that using cell tower locations does introduce some bias. We stress that these values provide an upper bound on the positioning error incurred by CDR, as mobile network subscribers may be associated with antennas that are not the nearest ones, due to signal propagation phenomena or load-balancing policies enacted by the operator. Still, the level of accuracy in Fig. 5, although far from that obtained from GPS logs, is largely sufficient for a variety of metropolitan-level or inter-city mobility analyses. For instance, it was shown that a spatial resolution of 2-7 km is sufficient to track the vast majority of mobility flows in a large dual-pole metropolitan region [START_REF] Coscia | Optimal spatial resolution for the analysis of human mobility[END_REF]. 
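The shift analysis above amounts to a nearest-neighbour search between GPS fixes and tower coordinates. A minimal, brute-force sketch with a haversine distance is shown below; the tower and fix coordinates are toy values, and a real run over hundreds of thousands of points would use a spatial index instead.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def shift_to_nearest_tower(gps_points, towers):
    """For each GPS fix, distance (km) to the closest cell tower of one operator."""
    return [min(haversine_km(lat, lon, tlat, tlon) for tlat, tlon in towers)
            for lat, lon in gps_points]

# Toy example: two fixes in Paris against two hypothetical tower positions.
towers = [(48.8566, 2.3522), (48.8738, 2.2950)]
gps_points = [(48.8606, 2.3376), (48.8529, 2.3500)]
print([round(d, 3) for d in shift_to_nearest_tower(gps_points, towers)])
```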
Human movement span We then examine whether mining CDR data is a suitable means for measuring the geographical span of movement of individuals. For that, we compute for each user u in the set of study U the radius of gyration, i.e., the deviation of the user's positions from their centroid. Formally, $R^u_g = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left\| r^u_i - r^u_{centroid} \right\|^2_{geo}}$, where $r^u_{centroid}$ is the center of mass of the locations of the user u, i.e., $r^u_{centroid} = \frac{1}{n}\sum_{i=1}^{n} r^u_i$. This metric reflects how widely the subscribers move and is a popular measure used in human mobility studies [START_REF] Paul | Understanding traffic dynamics in cellular data networks[END_REF][START_REF] González | Understanding individual human mobility patterns[END_REF][START_REF] Song | Limits of Predictability in Human Mobility[END_REF][START_REF] Hoteit | Estimating human trajectories and hotspots through mobile phone data[END_REF]. An individual who repeatedly moves among several fixed nearby locations still yields a small radius of gyration, even if she may total a large traveled distance. We are able to compute both the estimated radius of gyration (from the temporally sparse actual or equivalent CDR data) and the real one (from the finer-grained ground-truth provided by the Internet flow, MACACO, and Geolife datasets). The three distributions are quite similar, indicating that one can get a reliable distribution of $R^u_g$ from a certain number of users even if they are rare CDR users, i.e., have a limited number of mobile communication activities. When considering the error between the real and estimated radius of gyration, in Fig. 6(b) for the CDR and Internet flow datasets, and in Fig. 6(c) and 6(d) for the MACACO and Geolife datasets, respectively, we observe the following: • The distribution of large errors is similar in all cases, and outlines a decent accuracy of the coarse-grained CDR or CDR-like datasets. For approximately 90% of the Internet flow users, 95% of the MACACO users, and 70% of the Geolife users, the errors between the real and the estimated radius of gyration are less than 5 km. The higher errors obtained from the Geolife dataset may be explained by the irregular sampling in the original data and the presence of very large gaps between consecutive logs. • A more accurate radius of gyration can be obtained for the CDR users who are especially active: 92% of the frequent CDR users have errors lower than 5 km, while the percentage decreases to 86% for the rare CDR users. • When considering small errors, the distributions tend to differ, with far lower errors in the case of CDR than of MACACO or Geolife. This is in fact an artifact of considering cell tower locations as the ground-truth user positions in the fine-grained Internet flow dataset (cf. Sec. 4.1). In the more accurate GPS data of MACACO and Geolife, around 30% and 10% of the users, respectively, have errors lower than 100 meters, while around 35% of the users in the CDR dataset have errors below 1 meter. Overall, these results confirm previous findings on the limited suitability of CDR for the assessment of the spread of human mobility [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF]. They also unveil how different datasets can affect the data reliability at diverse scales. Missing locations Due to spatiotemporal sparsity, the mobility information provided by CDR is usually incomplete. We investigate this phenomenon for the users in the CDR dataset, and plot in Fig. 7(a) the ratio $r_{N_L}$ of the number of unique locations detected from CDR ($N^{CDR}_L$) to that found in the ground-truth Internet flow data ($N^{Flow}_L$), i.e., $r_{N_L} = N^{CDR}_L / N^{Flow}_L$. (1) 
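The two per-user quantities used in this section — the radius of gyration defined above and the location ratio of Eq. (1) — can be computed as in the sketch below. The crude equirectangular projection to kilometres is an assumption of this example, not the paper's exact geodesic distance.

```python
import math

def _to_xy_km(points):
    """Rough equirectangular projection of (lat, lon) degrees to kilometres."""
    lat0 = sum(p[0] for p in points) / len(points)
    k = 111.32  # km per degree of latitude
    return [(lon * k * math.cos(math.radians(lat0)), lat * k) for lat, lon in points]

def radius_of_gyration_km(points):
    """R_g: root mean squared distance of a user's positions to their centroid."""
    xy = _to_xy_km(points)
    cx = sum(x for x, _ in xy) / len(xy)
    cy = sum(y for _, y in xy) / len(xy)
    msd = sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in xy) / len(xy)
    return math.sqrt(msd)

def location_ratio(cdr_cells, flow_cells):
    """r_NL: unique locations seen in CDR over unique locations in the ground-truth."""
    return len(set(cdr_cells)) / len(set(flow_cells))

# Toy user: three distinct cells in the flows, only two of them captured by CDR.
print(location_ratio(["A", "B"], ["A", "B", "C", "B"]))
print(radius_of_gyration_km([(48.86, 2.35), (48.87, 2.36), (48.85, 2.33)]))
```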
We notice that 42% of the population of study (i.e., all users) have $r_{N_L}$ higher than 0.8. For these users, 80% of their unique visited locations appear in the CDR data. The percentage of users meeting this criterion is slightly higher for the frequent CDR users (50%) and lower for the rare CDR users (37%). These results confirm that using CDR to study very short-term mobility patterns is unreliable, due to the high temporal sparsity and the missing locations in CDR. Important locations The identification of significant places where people live and work is generally regarded as an important step in the characterization of human mobility. Here, we focus on home and work locations: we separate the period of study into two time windows, mapping to work time (9 am to 5 pm) and night time (10 pm to 7 am), for both the CDR-like and the ground-truth datasets. For each user, the places where the majority of work-time records occur are considered a proxy for work locations; the equivalent records at night time are considered a proxy for home locations [START_REF] Phithakkitnukoon | Socio-geography of human mobility: A study using longitudinal mobile phone data[END_REF]. It is worth noting that, as the Internet flow dataset covers only (10am, 6pm), we only infer work locations for this dataset. Formally, let us consider a user u from the user set. The visiting pattern of the user u is a sequence of samples $(\ell^u_i, t^u_i)$, where $\ell^u_i$ denotes the i-th recorded location and $t^u_i$ its timestamp. The home location $H^u$ of the user u is then defined as the most frequent location during night time: $H^u = \mathrm{mode}(\ell^u_i \mid t^u_i \in t_H)$, (2) where $t_H$ is the night time interval. The definition is equivalent for the work location $W^u$ of the user u, computed as $W^u = \mathrm{mode}(\ell^u_i \mid t^u_i \in t_W)$, (3) where $t_W$ is the work time interval. We use the definitions in (2) and (3) to determine home and work locations, and then evaluate the accuracy of the CDR-based significant locations by measuring the geographical distance that separates them from the equivalent locations estimated via the corresponding fine-grained ground-truth datasets. The results are shown in Fig. 7(b)-(f) as the CDF of the spatial error in the position of home and work places for different user groups of the three datasets. We observe the following: • The errors related to home locations are fairly small in the MACACO dataset, but are relatively higher in the Geolife dataset. For the MACACO users, the errors are always below 1 km, and 94% are within 100 meters. For the Geolife users, we observe that 17% of the errors are higher than 10 km. A possible interpretation is that some Geolife users are highly active and do not remain at a stable location during nighttime. • For both MACACO and Geolife users, the errors associated with work locations are noticeably higher than those measured for home locations. For instance, as shown in Fig. 7(d), while 75% of the MACACO users have an error of less than 300 meters, the work places of a significant portion of individuals (around 12%) are identified at a distance higher than 10 km from the positions extracted from the GPS data. A similar behavior can be observed for the Internet flow and Geolife users, as shown in Fig. 7(b) and Fig. 7(f). These large errors typically occur for users who do not seem to have a stable work location and may be working in different places depending on, e.g., the time of day. • The errors are significantly reduced when using cell tower locations, as in the Internet flow dataset, instead of actual GPS positions, as in the MACACO or Geolife datasets. 
For the Internet flow users in Fig. 7(b), the errors between the real and the estimated significant locations are null for approximately 85% of all users, indicating that the use of the coarsegrained dataset is fairly sufficient for inferring these significant locations. • The errors are non-null for the remaining Internet flow users (15%). Among them, 10% have relatively small errors (less than 5 km), while 5% have errors larger than 5 km. • There is only a slight difference in the distribution of the errors associated with work locations between the rare and the frequent CDR users as shown in Fig. 7(b). The reason is that, most of CDR are generated in significant locations, and hence the most frequent location obtained from CDR of a user is likely to be her actual work location during daytime. Still, it is relatively difficult to capture actual location frequencies if a user has only a few of CDR. Hence the rare CDR users have higher errors. Overall, these results confirm previous findings [START_REF] Ranjan | Are call detail records biased for sampling human mobility?[END_REF], and further prove that CDR yield enough details to detect significant locations in users' visiting patterns. Besides, the results reveal a small possibility of incorrect estimation in the ranking among such locations. Current approaches to CDR completion The previous results confirm the quality of mobility information inferred from CDR, regarding the span of user's movement and significant locations. They also indicate that some biases are present: specifically, although transient and less important places visited may be lost in CDR data, capturing most of one's historical locations is not impossible. The good news is that, even in those cases, the error induced by CDR is relatively small. A major issue remains that CDR only provide instantaneous information about user's locations at a few time instants over a whole day. Overcoming the problem would help the already significant efforts in mobility analyses with CDR [START_REF] Naboulsi | Large-scale Mobile Traffic Analysis: a Survey[END_REF], allowing the exploration of scales much larger than those enabled by GPS datasets. Temporal CDR completion aims at filling the time gaps in CDR, by estimating users' locations in between their mobile communication activities. Several strategies for CDR completion have been proposed to date. In this section, we introduce and discuss the two most popular solutions adopted in the literature. Baseline static solution A simple solution is to hypothesize that a user remains static at the same location where she is last seen in her CDR. This methodology is adopted, e.g., by Khodabandelou et al. [START_REF] Khodabandelou | Population estimation from mobile network traffic metadata[END_REF] to compute subscriber's presence in mobile traffic meta-data used for population density estimation. We will refer to this approach as the static solution and will use it as a basic benchmark for more advanced techniques. It is worth noting that this solution has no spatiotemporal flexibility; its performance only depends on the number of CDR a user generates in the period of study: i.e., the higher is the number of CDR, the lower will be the spatial error in the completed data by the static solution. In other words, there is no space (configurable setting or initial parameter) for customizing this solution to obtain better accuracy. 
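A minimal sketch of this static solution is given below: the cell of the most recent CDR entry is simply carried forward in time. Timestamps and cell identifiers are illustrative.

```python
def static_completion(cdr_events, query_times):
    """cdr_events: sorted list of (timestamp, cell_id) for one user.
    Returns the estimated cell at each query time: the cell of the most
    recent CDR entry, or None before the first record."""
    estimates = []
    i = -1
    for t in sorted(query_times):
        while i + 1 < len(cdr_events) and cdr_events[i + 1][0] <= t:
            i += 1
        estimates.append(cdr_events[i][1] if i >= 0 else None)
    return estimates

events = [(3600, "A"), (7200, "B")]                  # one CDR at 01:00, one at 02:00
print(static_completion(events, [0, 3700, 9000]))    # [None, 'A', 'B']
```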
Baseline stop-by solution Building on in-depth studies showing that individuals stay most of the time in the vicinity of the places where they make voice calls [START_REF] Ficek | Inter-call mobility model: A spatio-temporal refinement of call data records using a gaussian mixture model[END_REF], Jo et al. [START_REF] Jo | Spatiotemporal correlations of handset-based service usages[END_REF] assume that users can be found at the locations where they generate some digital activities for an hour-long interval centered at the time when each activity is recorded. If the time between consecutive activities is shorter than one hour, the inter-event interval is equally split between the two locations where the bounding events occur. This solution will be denoted as stop-by in the remaining sections. The drawback of stop-by is that it uses a constant hour-long interval for all calls and all users in the CDR, which may not always be suitable. This solution lacks flexibility in dealing with various human mobility behaviors. As exemplified in Fig. 8, a single CDR is observed at time t_CDR at cell C. Following the stop-by solution, the user is considered to be stable at cell C during the period d = (t_CDR - |d|/2, t_CDR + |d|/2), while in fact the user has moved to two other cell towers during this period. We call the period estimated from an instantaneous CDR entry a temporal cell boundary. In the example of Fig. 8, this temporal cell boundary is overestimated. Nevertheless, this solution has more flexibility than the static solution, i.e., the time interval |d| affects its performance and is configurable. Although a one-hour interval (|d| = 60 minutes) is usually adopted in the literature, we are interested in evaluating the performance of the stop-by solution over different intervals, which has never been explored before. Intuitively, a spatial error occurs if the user moves to other cells during the temporal cell boundary. To quantify such an error, we define the spatial error of a temporal cell boundary with a period d as follows: $\mathrm{error}(d) = \frac{1}{|d|} \int_{d} \left\| c^{(CDR)} - c^{(real)}_t \right\|_{geo} \, dt$. (4) This measure represents the average spatial distance between a user's real cell location over time, $c^{(real)}_t$, and her estimated cell location, $c^{(CDR)}$, during the time period d. The interpretation of the spatial error is straightforward: • When error(d) = 0, the user stays at the cell $c^{(CDR)}$ during the whole temporal cell boundary. Still, the estimation of d may be conservative, since a larger |d| could be more appropriate in this case. • When error(d) > 0, the temporal cell boundary is oversized: the user, in fact, moves to other cells in the corresponding time period. Thus, a smaller |d| should be used for the cell. Given the relevance of this parameter to the model performance, in the following we evaluate the impact of |d| on the spatial error. Impact of parametrization on stop-by accuracy We evaluate the performance of the stop-by approach by considering the CDR and ground-truth Internet flow datasets (cf. Sec. 3). CDR are used to generate temporal cell boundaries, while the locations in the fine-grained flow data are adopted as actual locations and are used to compute the spatial errors. Fig. 9(a) and 9(b) show the CDF of the spatial error of temporal cell boundaries on Monday and Sunday, respectively. We observe that error(d) = 0 for 80% of CDR on Monday (cf. 75% on Sunday) when |d| = 60 minutes, and for 60% of CDR on Monday (cf. 53% on Sunday) when |d| = 240 minutes. This result is a strong indicator that users tend to remain in cell coverage areas for long intervals around the instant locations recorded by their CDR. It is also true that many users are simply static, i.e., they only appear at one single location in their Internet flows and, consequently, have an associated radius of gyration $R^u_g = 0$: this behavior accounts for approximately 35% and 40% of the users on Monday and Sunday, respectively. The high percentage of temporal cell boundaries with error(d) = 0 in Fig. 9 may be due to these static users, since they will not entail any spatial error, under any |d|. To account for this aspect, we exclude the static users in the following, and only consider the mobile users, i.e., those having $R^u_g > 0$. 
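With the discrete, step-wise ground-truth available in practice, the integral in Eq. (4) reduces to a time-weighted average of the distance between the CDR cell and the cell actually occupied in each sub-interval of d. The sketch below makes that assumption explicit; the distance function and the segment representation are placeholders.

```python
def boundary_spatial_error(d_start, d_end, cdr_cell_xy, truth_segments, dist):
    """Discrete version of Eq. (4).

    truth_segments: list of (t_start, t_end, cell_xy) covering the boundary,
        i.e. the ground-truth cell occupied in each sub-interval.
    dist: distance function between two cell positions (e.g. haversine).
    """
    total, duration = 0.0, d_end - d_start
    for t0, t1, cell_xy in truth_segments:
        overlap = min(t1, d_end) - max(t0, d_start)
        if overlap > 0:
            total += overlap * dist(cdr_cell_xy, cell_xy)
    return total / duration

# Toy example with a 1-D "distance": the user stays at the CDR cell for the
# first half of the boundary and is 2 km away for the second half -> error 1 km.
dist = lambda a, b: abs(a - b)
print(boundary_spatial_error(0, 60, 0.0, [(0, 30, 0.0), (30, 60, 2.0)], dist))
```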
An interesting consideration is that the spatial error incurred by the stop-by approach is not uniform across cells. Intuitively, a cell tower covering a larger area is expected to determine longer user dwelling times, and hence better estimates with stop-by. We thus compute for each cell its coverage as the cell radius: specifically, we assume a homogeneous propagation environment and an isotropic radiation of power in all directions at each cell tower, and roughly estimate each cell radius as that of the smallest circle encompassing the Voronoi polygon of the cell tower. We remark that this approach yields overlapping coverage between adjacent cells, which reflects what happens in real-world deployments. In the target area under study, shown in Fig. 4, 70% of the cells have radii within 3 km, and the median radius is approximately 1 km. We can now evaluate the probability of having a temporal cell boundary with a null spatial error, as P_e0 = Pr{error(d) = 0}. Fig. 10(a) and 10(b) present the probabilities P_e0, grouped by the cell radius, when applying varying sizes of the temporal cell boundary on the days of study. We notice the following. • The probability P_e0 decreases with the increasing period marked by |d|, indicating that using a large period for the temporal cell boundary increases the chances of generating some spatial error. For instance, for |d| = 30 minutes, the probability of having a null spatial error is around 0.7, depending on the date and on the cell radius. When a larger |d| is used, the probability decreases significantly (e.g., for |d| = 60 minutes, the probability P_e0 reduces to around 0.6). • The probability P_e0 correlates positively with the cell radius r. This trend is seen on both Monday and Sunday (except in some cases), indicating that the cell size has an impact on the time interval during which the user stays within the cell coverage. Intuitively, handovers are frequent for users moving among small cells and less so for users traveling across large cells. The results support the idea that there is a strong correlation between the temporal cell boundary and the cell coverage. Nevertheless, since CDR are usually sparse in time, using a small temporal cell boundary covers only an insignificant amount of cell visiting time, while using a big temporal cell boundary increases the risk of having a non-null spatial error. To investigate this trade-off, we plot the variation of the statistical distribution of the spatial errors after excluding the null errors (i.e., keeping only cases with non-null error(d)) in Fig. 10(c) and 10(d). We observe that: • The spatial error varies widely: it goes from less than 1 km to very large values (up to 3.6 km on Monday and 7.5 km on Sunday). Hence, for some users, the stop-by solution is unsuitable for reconstructing visiting patterns due to the presence of such high spatial errors. • The spatial error grows with the cell radius: when the cell size increases, the variation of the error becomes wider, while the mean value also increases. This is reasonable because the larger the cell radius is, the farther the cell is from its neighbors. Hence, when a spatial error occurs, it means that the user is actually in a far cell that has a larger distance to $c^{(CDR)}$. 
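One way to approximate the cell radius used above — the radius of the smallest circle encompassing a tower's Voronoi polygon — is to take the largest distance from the tower to the finite vertices of its Voronoi region, as in the SciPy-based sketch below. Border towers with unbounded regions are simply skipped here; that, and the planar coordinates, are assumptions of this illustration.

```python
import numpy as np
from scipy.spatial import Voronoi

def cell_radii(tower_xy):
    """tower_xy: (N, 2) array of planar tower coordinates (e.g. km).
    Returns {tower_index: radius} for towers whose Voronoi region is bounded."""
    vor = Voronoi(tower_xy)
    radii = {}
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if not region or -1 in region:   # unbounded region: skip
            continue
        vertices = vor.vertices[region]
        d = np.linalg.norm(vertices - tower_xy[i], axis=1)
        radii[i] = float(d.max())
    return radii

towers = np.array([[0, 0], [2, 0], [0, 2], [2, 2], [1, 1]])
print(cell_radii(towers))   # only the central tower has a bounded cell here
```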
The accuracy reduces significantly, giving rise to spatial errors, when increasing |d|. Hence, the trade-off between the completion and the accuracy should be carefully considered when completing CDR using temporal cell boundaries. Using a constant |d| over all users as in the stop-by solution is unlikely to be an appropriate approach. Building on these considerations, we propose enhancements to the stop-by and static solutions in the remainder of the paper. The data completion strategies introduced in the following leverage common trends in human mobility, in terms of (1) attachment to a specific location during night periods, and (2) a tendency to stay for some time in the vicinity of locations where digital activities take place. In particular, we tell apart strategies for CDR completion at night time and daytime: Sec. 6 presents nighttime completion strategies inferring the home location of users; Sec. 7 introduces our adaptive temporal cell boundary strategies leveraging human mobility regularity during daytime. Identifying temporal home boundaries The main goal of our strategies for CDR completion during nighttime is to infer temporal boundaries where users are located, with a high probability, at their home locations. We refer to this problem as the identification of the user's temporal home boundary. Gaps in CDR occurring within the home boundary of each user are then filled with the identified home location. The rationale for this approach stems from our previous observations that CDR allow identifying the home location of individuals with high accuracy. Proposed solutions We extend the stop-by solution (cf. Sec. 5.2) in the following ways. Note that all techniques below assume that the home location is the user's most active location during some night time interval h, and that CDR not in h are completed via legacy stop-by. • The stop-by-home strategy adds fixed temporal home boundaries to the stop-by technique. If a user's location is unknown during h = (10pm, 7am) due to the absence of CDR in that period, the user will be considered at her home location during h. • The stop-by-flexhome strategy refines the previous approach by exploiting the diversity in the habits of individuals. The fixed night time temporal home boundaries are relaxed and become flexible, which allows adapting them on a per-user basis. Specifically, instead of considering h = (10pm, 7am) as the fixed home boundaries for all users, we compute for each user u the most probable interval of time h flex ⊆ h during which the user is at her home location. Then, as for stop-by-home, the user will be considered at her home location to fill gaps in her CDR data during h (u) flex . • The stop-by-spothome strategy augments the previous technique by accounting for positioning errors that can derive (1) from users who are far from home during some nights, or (2) from ping-pong effects in the association to base stations when the user is within their overlapping coverage region. In this approach, if a user's location during h (u) flex is not identified, and if she is last seen at no more than 1 km from her home location, she is considered to be at her home location. We compare the above strategies with the static and the legacy stop-by solution introduced in Sec. 5, assuming |d| = 60 min. Our evaluation considers dual perspectives. The first is accuracy, i.e., the spatial error between mobility metrics computed from ground-truth GPS data and from CDR completed with the different techniques above. 
The second is completion, i.e., the percent of the time during which the position of a user is determined. Note that the static solution (cf. Sec. 5) provides user locations at all times, but this is not true for stop-by or the derived techniques above. In this case, the CDR is completed only for a portion of the total period of study, and the users' whereabouts remain unknown in the remaining time. Accuracy and completion results We first compute the geographical distance between the positions in the GPS records in MACACO and Geolife and those in their equivalent CDR-like coarse-grained datasets. These strategies are not designed to provide positioning information at all times expect the static solution, hence distances are only measured for GPS samples whose timestamps fall in the time periods for which completed data is available. • The static approach provides the worst accuracy in both datasets. • The stop-by-flexhome technique largely improves the data precision, with an error that is lower than 100 meters in 90 -92% of cases for the MACACO users and with a median error around 250 meters for the Geolife users. • The stop-by-spothome technique provides the best performance for both datasets. For instance, about 95% of samples lie within 100 meters of the ground-truth locations in the MACACO dataset, while the median error is 250 meters (the lowest result) in the Geolife dataset. These results confirm that a model where the user remains static for a limited temporal interval around each measurement timestamp is fairly reliable when it comes to accuracy of the completed data. They also support previous observations on the quite static behavior of mobile network subscribers [START_REF] Ficek | Inter-call mobility model: A spatio-temporal refinement of call data records using a gaussian mixture model[END_REF]. More importantly, the information of home locations can be successfully included in such models, by accounting for the specificity of each user's habits overnight. The stop-by and derived solutions do not provide full completion by design. Overall, the combination of the results in Fig. 11 indicates the stop-by-spothome solution as that achieving the best combination of high accuracy and fair completion, among the different completion techniques considered. Identifying temporal cell boundaries We now consider the possibility of completing CDR during daytime. Our strategy is based again on inferring temporal boundaries of users. However, unlike what has been done with nighttime periods in Sec. 6, here we leverage the communication context of human mobility habits and extend the time span of the position associated with each communication activity to so-called temporal cell boundaries. Factors impacting temporal cell boundaries Hereafter, we aim to answer the following question: how to choose a proper and adaptive period for a temporal cell boundary instead of a static fixed-toall period? To answer the question, we need to understand the correlation between the routine behavior of users in terms of mobile communications and their movement patterns. For this, we first study how human behavior factors that can be extracted from CDR may affect daytime temporal cell boundaries. We categorize factors in three classes, i.e., event-related, long-term behavior, and location-related, as detailed next. Then, we leverage them to design novel approaches to estimate temporal cell boundaries. 
7.1.1. Event-related factors We include in this class the meta-data contained in the records of common CDR datasets. They include the activity time, type (i.e., voice call or text message), and duration 4 . Intuitively, these factors have direct effects on temporal cell boundaries. For instance, in terms of time, a user may stay within a fixed cell during her whole working period. In terms of type and duration, a long phone call may imply that the user is static, while a single text message may indicate that the user is on the move. Besides, these factors are commonly found in, and easily extracted from, any common CDR entries. 7.1.2. Long-term behavior factors This class includes factors describing users' activities over extended time intervals. They are the radius of gyration (URg), the number of unique visited locations (ULoc), and the number of active days during which at least one event is recorded (UDAY). These factors characterize a user by giving indications of (i) her long-term mobility and (ii) her habits in generating calls and text messages, which may be indirectly related to her temporal cell boundaries. For each user, these factors are computed from our CDR dataset (cf. Sec. 3.1) by aggregating data over the whole 3-month period of study. 7.1.3. Location-related factors Factors in this class relate to positioning information. The first factor is the cell radius (CR), which we have already shown to affect the reliability of CDR completion schemes in Sec. 5. The other location-related factors account for the relevance that different places have for each user's activities. The intuition is that individuals spend long time periods at their important places. Specifically, we capture this by applying the algorithm presented by Isaacman et al. [START_REF] Isaacman | Identifying important places in peopleś lives from cellular network data[END_REF], which determines prominent locations where the user usually spends a large amount of time or that she visits frequently. The algorithm applies Hartigan's clustering [START_REF] Hartigan | Clustering[END_REF] on the visited cell locations of users in the CDR, and uses logistic regression to estimate a location's importance to the user from factors extracted from the cluster that the location belongs to. To start with, the clustering approach takes the cell tower of the first CDR and makes it the first cluster. Then, it iteratively checks all cell towers in the remaining CDR. If a cell tower is within a distance threshold (we use 1 km) of the centroid of a certain cluster, the cell tower is added to that cluster, and the centroid of the cluster is moved to the weighted average of the locations of all the cell towers in the cluster; otherwise, the cell tower starts a new cluster. The weights assigned to locations are the fractions of days in which they are visited over the whole observation period. The clustering process finishes once all cell towers are assigned to clusters. Once clusters are defined, the importance of each cluster is identified according to the following observable factors: (i) the number of days on which any cell tower in the cluster was contacted (CDay); (ii) the number of days that elapse between the first and the last contact with any location in the cluster (CDur); (iii) the sum of the numbers of days on which the cell towers in the cluster were contacted (CTDay); (iv) the number of cell towers inside the cluster (CTower); (v) the distance from the registered location of the activity to the centroid of the cluster (CDist). These factors derived from a cluster correlate with the time that the user spends at the cluster's locations, as shown by Isaacman et al. via their logistic regression model [START_REF] Isaacman | Identifying important places in peopleś lives from cellular network data[END_REF]. It is worth noting that we cannot reproduce the exact model in [START_REF] Isaacman | Identifying important places in peopleś lives from cellular network data[END_REF], since the used ground-truth is not publicly available. However, we can still use the same factors for our objective, i.e., identifying temporal cell boundaries. Supervised temporal cell boundary estimation So far, we have introduced human behavior factors that might be directly or indirectly related to temporal cell boundaries. In order to use them for our purpose, we need a reliable model linking them to actual temporal cell boundaries. In the following, we introduce two approaches to do so, both based on supervised machine learning. Symmetric and asymmetric temporal cell boundaries We define two kinds of temporal cell boundaries: symmetric and asymmetric. Given a CDR entry at time t, determining its temporal cell boundary means expanding the instantaneous time t to a time interval d, during which the user is assumed to remain within the coverage of the same cell. For a symmetric temporal cell boundary, this period is generated from a single CDR-based parameter $d_\pm$ as $d = (t - d_\pm,\ t + d_\pm)$, i.e., it is symmetric with respect to the CDR time t. Instead, the period of an asymmetric temporal cell boundary is generated from two independent parameters $d_+$ and $d_-$ as $d = (t - d_-,\ t + d_+)$. We design sym-adaptive and asym-adaptive approaches, both of which receive a CDR entry as input and return an estimate of its associated temporal cell boundary. 
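A sketch of this greedy clustering and of the cluster-level factors (CDay, CDur, CTDay, CTower, CDist) follows. The planar distance, the day bookkeeping, and the data layout are assumptions of the example, and the logistic-regression step of Isaacman et al. is not reproduced.

```python
import math

def _dist_km(a, b):
    # Planar placeholder; a real implementation would use a geodesic distance.
    return math.dist(a, b)

def cluster_towers(towers, visit_days, threshold_km=1.0):
    """Greedy clustering of visited cell towers, in the spirit of Sec. 7.1.3.

    towers: list of (tower_id, (x, y)) in order of appearance in the CDR.
    visit_days: {tower_id: set of day indices on which the tower was contacted}.
    """
    clusters = []
    for tid, xy in towers:
        target = next((c for c in clusters
                       if _dist_km(xy, c["centroid"]) <= threshold_km), None)
        if target is None:
            target = {"towers": [], "centroid": xy}
            clusters.append(target)
        target["towers"].append((tid, xy))
        # Weighted centroid; weights proportional to the number of visit days.
        ws = [len(visit_days[t]) for t, _ in target["towers"]]
        xs = [p[0] for _, p in target["towers"]]
        ys = [p[1] for _, p in target["towers"]]
        target["centroid"] = (sum(w * x for w, x in zip(ws, xs)) / sum(ws),
                              sum(w * y for w, y in zip(ws, ys)) / sum(ws))
    return clusters

def cluster_factors(cluster, visit_days, activity_xy):
    """CDay, CDur, CTDay, CTower and CDist for one cluster and one CDR activity."""
    per_tower = [visit_days[t] for t, _ in cluster["towers"]]
    all_days = set().union(*per_tower)
    return {"CDay": len(all_days),
            "CDur": max(all_days) - min(all_days),
            "CTDay": sum(len(d) for d in per_tower),
            "CTower": len(cluster["towers"]),
            "CDist": _dist_km(activity_xy, cluster["centroid"])}

visits = {"A": {1, 2, 5}, "B": {2, 3}}
cl = cluster_towers([("A", (0.0, 0.0)), ("B", (0.5, 0.0))], visits)
print(cluster_factors(cl[0], visits, activity_xy=(0.4, 0.1)))
```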
More precisely, the factors discussed in Sec. 7.1 are extracted for each user and CDR record, and converted to an input vector x, under the following rules: (i) the categorical factor type is converted to two binary features by one-hot encoding 5 ; (ii) the time is converted to the distances (in seconds) separating it from 10am and from 6pm 6 ; (iii) the other factors are used as plain scalar values. Given a CDR entry and its input vector x, we have the following approaches: • The sym-adaptive approach contains one model that accepts the input vector and predicts the parameter $d_\pm$ as a symmetric estimation of the corresponding temporal cell boundary, i.e., $d_\pm = F_{sym}(x)$. • The asym-adaptive approach contains two models that separately predict the parameters $d_+$ and $d_-$ as a joint asymmetric estimation of the corresponding temporal cell boundary, i.e., $d_+ = F^+_{asym}(x)$ and $d_- = F^-_{asym}(x)$. We use supervised machine learning techniques to build the models. It is worth noting that the user identifier is not part of the input vector x, because we do not want to train models that bind themselves to any particular user. This gives our models better flexibility and a higher potential for applying the trained models to other mobile phone datasets where the same factors can be derived. Estimating temporal cell boundaries via supervised learning We detail our methodology and results by (i) formalizing the optimization problems that capture our goal, (ii) discussing how they can be addressed via supervised machine learning, and (iii) presenting a complete experimental evaluation. Optimization problems. All the models are generalized from a training set X consisting of CDR entries (as input vectors) and their real temporal cell boundaries (which are originally asymmetric), i.e., $X = \{(x_i, d^+_i, d^-_i)\}$. To build the asym-adaptive approach, the objective is to find two separate approximations, $F^+_{asym}(x)$ and $F^-_{asym}(x)$, to functions $F^+(x)$ and $F^-(x)$ that respectively minimize the expected values of the two losses $L(d^+, F^+(x))$ and $L(d^-, F^-(x))$, i.e., $F^+_{asym}(x) = \arg\min_{F^+} \mathbb{E}_{d^+,x}[L(d^+, F^+(x))]$, (5) $F^-_{asym}(x) = \arg\min_{F^-} \mathbb{E}_{d^-,x}[L(d^-, F^-(x))]$, (6) where L is the squared error loss function, i.e., $L(x, y) = \frac{1}{2}(x - y)^2$. To build the sym-adaptive approach, a modified training set $X_\pm = \{(x_i, d^\pm_i)\}$ is first generated from the original X by applying $d^\pm_i = \min\{d^+_i, d^-_i\}$ to each real asymmetric temporal cell boundary. Then, as our objective, we need to find an approximation $F_{sym}(x)$ to a function $F_\pm(x)$ that minimizes the expected value of the loss $L(d^\pm, F_\pm(x))$, i.e., $F_{sym}(x) = \arg\min_{F_\pm} \mathbb{E}_{d^\pm,x}[L(d^\pm, F_\pm(x))]$. Learning technique. In order to compute the approximations, we utilize a typical supervised machine learning technique, i.e., Gradient Boosted Regression Trees (GBRT) [START_REF] Friedman | Greedy function approximation: a gradient boosting machine[END_REF][START_REF] Friedman | The elements of statistical learning[END_REF]. Although several supervised learning techniques could be adopted, we pick the GBRT technique because (i) it is a well-understood approach with thoroughly-tested implementations, (ii) it has advantages over alternative techniques in terms of predictive power, training speed, and flexibility to accommodate heterogeneous input (which is our case) [START_REF]Ensemble methods[END_REF], and (iii) it returns quantitative measures of the contribution of each factor to the overall approximation [START_REF] Friedman | Greedy function approximation: a gradient boosting machine[END_REF]. In the GBRT technique, an approximation function is the weighted sum of an ensemble of regression trees. For the experimental evaluation, we randomly select 50% of the users; from their CDR and Internet flow datasets, we first extract, for each CDR entry of these selected users, its corresponding input vector x as well as the parameters $d^+$ and $d^-$ of its real temporal cell boundary. We then build the two training sets X and $X_\pm$. The second step is to build the approximation functions (i.e., $F^+_{asym}$, $F^-_{asym}$, and $F_{sym}$) from the training sets. For that, we first have to tune the M and ν parameters of Alg. 1 for each approximation function. To this end, we use a three-fold cross-validation, splitting the training data into three equal-sized subsets. For each combination of M and ν, we train the model corresponding to each approximation function on one subset and validate it on the other two subsets. We repeat this operation three times, with each of the subsets used in turn as training data. We select as our actual parameters the M and ν values that achieve the lowest loss in the cross-validation. 
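The tuning step just described can be sketched with scikit-learn's gradient-boosted trees as below. Here M is read as the number of trees and ν as the learning rate — the usual GBRT shrinkage parameters, an assumption since Alg. 1 is not reproduced in this text — and, for brevity, the sketch uses standard 3-fold cross-validation (the paper trains on one subset and validates on the other two) on synthetic placeholder data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

def fit_boundary_model(X, y):
    """Fit one approximation (F_sym, F+_asym or F-_asym) with 3-fold CV
    over M (number of trees) and nu (learning rate)."""
    grid = {"n_estimators": [100, 300, 500],      # candidate M values
            "learning_rate": [0.01, 0.05, 0.1]}   # candidate nu values
    search = GridSearchCV(GradientBoostingRegressor(), grid,
                          cv=3, scoring="neg_mean_squared_error")
    search.fit(X, y)
    return search.best_estimator_, search.best_params_

# Synthetic stand-in for the input vectors x and the boundary targets (seconds).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
d_plus = np.abs(rng.normal(scale=1800, size=400))

model, params = fit_boundary_model(X, d_plus)
print(params)
print("feature importances:", np.round(model.feature_importances_, 3))
```

Under this sketch, the fitted model's feature_importances_ attribute plays the role of the per-factor contributions summarized in Fig. 12.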
Finally, we use the training sets X and $X_\pm$, together with the tuning parameters that we selected, to build the functions $F^+_{asym}$, $F^-_{asym}$, and $F_{sym}$ corresponding to the asym-adaptive and sym-adaptive approaches. Fig. 12 shows the relative importance of the factors with respect to the estimation of a temporal cell boundary in the training procedure of the GBRT technique. For each factor, its importance is computed as the relative value of the sum of its importance scores across the three approximations. The importance indicates the degree to which a feature contributes to the construction of the regression trees. This figure allows us to draw the following main conclusions, valid for both approaches. • The three most important factors are the timestamp of the activity, the cell radius, and the radius of gyration. This indicates that the time spent by a user within the coverage of the same cell mainly depends on the cell size, the precise time when the activity occurred, and the user's long-term mobility. • Surprisingly, the activity's type is the least relevant factor, indicating that knowing whether a user generates a call or a message is useless in determining a temporal cell boundary. Accuracy and completion results We compare our two trained approaches with the stop-by and static approaches, using the CDR from the remaining 50% of the randomly-selected users. For the sym-adaptive and asym-adaptive approaches, we build two testing sets from the CDR entries of the remaining users, and let them generate adaptive symmetric and asymmetric temporal cell boundaries using the input vectors in the testing sets. Besides, we let the stop-by approach generate temporal cell boundaries using |d| = {10, 60, 180} minutes. As in Sec. 6, we make a comparative study by evaluating the solutions in terms of accuracy and completion, where the accuracy is measured by evaluating the spatial error in (4) (cf. Sec. 5). Recall that a good data completion approach should cover the observation period as extensively and as precisely as possible, i.e., achieve high accuracy and high completion simultaneously. Fig. 13(a) and 13(b) display the distribution of the spatial errors over all temporal cell boundaries. Our results confirm that the spatial error increases as |d| becomes larger when using the stop-by approach. More importantly, the two adaptive approaches perform slightly better, in terms of the spatial error, than the stop-by approach does with its most common setting (|d| = 60 minutes). As expected, the static solution has the worst performance, similarly to what was observed in the case of home boundaries using the MACACO and Geolife datasets. Fig. 13(c) and 13(d) plot the distribution of the completion per user over all approaches except static (whose completed data always covers the whole period). The x-axis of the figures spans 8 hours because the Internet flow dataset only covers eight daytime hours, i.e., (10am, 6pm). We remark that both our adaptive approaches score a significant performance improvement in terms of completion: the amount of time during which users' locations remain unidentified is substantially reduced with respect to the legacy stop-by approach. On average, only approximately 2 hours (25% of the period of study) of a user's daytime remain unidentified after applying the asym-adaptive approach, while 3 hours remain unidentified after using the sym-adaptive and stop-by (|d| = 180 minutes) approaches. 
The stop-by approach with its most common setting (|d| = 60 minutes) has the same degree of accuracy as the adaptive approaches, but a far lower degree of completion (i.e., a median of 6 unidentified hours). Overall, these results highlight a clear advantage of adaptive, supervised-learning-based approaches for CDR completion. The adaptive approaches achieve a slightly better performance in terms of accuracy, and a far better performance in terms of completion. The asym-adaptive approach has a clear advantage over its competitors: it completes 75% of the daytime hours with a fairly good accuracy. Conclusion In this paper, we leveraged real-world CDR and GPS datasets to characterize the bias induced by the use of CDR for the study of human mobility, and evaluated CDR completion techniques to reduce some of the emerging limitations of this type of data. Our results confirm previous findings on the sparsity of CDR and, more importantly, provide a first comprehensive investigation of techniques for CDR completion. In this context, we propose solutions that (i) dynamically extend the time intervals spent by users at the locations where they are pinpointed by the CDR data during daytime, and (ii) sensibly place users at their home locations during nighttime. Extensive tests with heterogeneous real-world datasets prove that our approaches can achieve excellent combinations of accuracy and completion. On average, for daytime hours, our approaches can complete 75% of the time, with errors below 1 km for 95% of the completed period; for nighttime hours, our refinements of the legacy solution gain 4-5 and 3-7 hours of completion on the two datasets, and up to 10% in accuracy. In particular, compared with the most common proposal in the literature, our best adaptive approach improves accuracy by 5% and completion by 50%. Figure 1: Distributions of the inter-event time in the CDR dataset at different day times. Figure 2: (a) CDF of the inter-event time in the Internet flow fine-grained dataset; (b) CDF of the number of records (flows or CDR) per user in a weekend and a weekday. Figure 3: Combinations of corresponding coarse- and fine-grained datasets. Figure 4: Deployment of cell towers in the target metropolitan area. Purple dots represent the base stations, whose coverage is approximated by a Voronoi tessellation. Figure 5: Distributions of the distances to the nearest cell tower (shifts), for 718,987 GPS locations in the MACACO data of users in (a) the whole area and (b) major metropolitan areas (Paris Region, Lyon, Toulouse) in France. Figure 6: (a) CDF of the radius of gyration of two categories (Rare and Frequent) of CDR users in the Internet flow dataset. (b)(c)(d) CDF of the distance between the real and the estimated radius of gyration from CDR over the users of the (b) Internet flow, (c) MACACO, and (d) Geolife datasets. 
(a) the ratio r NL of unique locations detected from CDR (N CDR L) to those from the ground-truth (N Flow L), i.e., Internet flow data, as
Figure 7: (a) CDF of the ratio r NL of the number of locations in each user's coarse-grained trajectory to the one in her fine-grained trajectory. (b)(c)(d)(e)(f) CDF of the distances between each user's real and estimated important locations located by her CDR and ground-truth: (b) work locations over the Internet flow users; (c) home and (d) work locations over the MACACO users; (e) home and (f) work locations over the Geolife users.
Figure 8: An example of a temporal cell boundary in the stop-by approach: A period (t CDR -|d|/2, t CDR + |d|/2) is given as a temporal cell boundary at the cell C attached with a CDR entry at time t CDR. In this temporal cell boundary, the user is assumed to be at the cell C, while actually she moves from the cell B to D: this leads to a spatial error.
Fig. 9(a) and 9(b) show the CDF of the spatial error of temporal cell boundaries on Monday and Sunday, respectively. We observe that error(d) = 0 for 80% of CDR on Monday (cf. 75% on Sunday) when |d| = 60 minutes, and for 60% of CDR on Monday (cf. 53% on Sunday) when |d| = 240 minutes. This result is a strong indicator that users tend to remain in cell coverage areas for long intervals around their instant locations recorded by CDR. It is also true that many users are simply static, i.e., only appear at one single location in their Internet flows, and consequently have an associated radius of gyration R u g = 0: this behavior accounts for approximately 35% and 40% on Monday and Sunday, respectively. The high percentage of temporal cell boundaries with error(d) = 0 in Fig. 9 may be due to these static users, since they will not entail any spatial error.
Figure 10: Spatial errors of temporal cell boundaries of CDR generated by the stop-by solution over users with their Rg > 0: (a)(b) the probability (P e0) of having a non-error temporal cell boundary (-|d|, |d|), where |d| ∈ {10, 30, 60, 120, 180, 240} minutes, under several groups of cell radius on (a) Monday and (b) Sunday; (c)(d) Box plot of non-zero spatial errors, grouped by the cell radius and the time period of temporal cell boundary on (c) Monday and (d) Sunday. Each box denotes the median and 25th-75th percentiles and the whiskers denote 5th-95th percentiles.
Fig. 11(a) and 11(b) summarize the results of our comparative evaluation of accuracy, and allow us to draw the following main conclusions:
Fig. 11(c) and 11(d) show the CDF of the hours per day during which a user
7.1.1. Event-related factors
We include in this class the meta-data contained in records of common CDR datasets. They include the activity time, type (i.e., voice call or text message), and duration 4 . Intuitively, these factors have direct effects on temporal cell boundaries. For instance, in terms of time, a user may stay within a fixed cell during her whole working period. In terms of type and duration, a long phone call may imply that the user is static, while a single text message may indicate that the user is on the move. Besides, these factors are commonly found in and easily extracted from any common CDR entries.
7.1.2. Long-term behavior factors
This class includes factors describing users' activities over extended time intervals.
They are the radius of gyration (URg), the number of unique visited locations (ULoc), and the number of active days during which at least one event is recorded (UDAY). These factors characterize a user by giving indications of (i) her long-term mobility and (ii) her habits in generating calls and text messages, which may be indirectly related to her temporal cell boundaries. For each user, these factors are computed from our CDR dataset (cf. Sec. 3.1) by aggregating data during the whole 3-month period of study.
7.1.3. Location-related factors
Factors in this class relate to positioning information. The first factor is the cell radius (CR), which we already showed to affect the reliability of CDR completion schemes in Sec. 5. The other location-related factors account for the relevance that different places have for each user's activities. The intuition is that individuals spend long time periods at their important places. Specifically, we explore it by applying the algorithm presented by Isaacman et al. [START_REF] Isaacman | Identifying important places in people's lives from cellular network data[END_REF], which determines prominent locations where the user usually spends a large amount of time or visits frequently.
Figure 12: Relative importance of features in determining accurate temporal cell boundaries.
Figure 13: CDF of the spatial errors of temporal cell boundaries computed on (a) Sunday and (b) Monday; CDF of the completion of completed data on (c) Sunday and (d) Monday, across the stop-by, static, sym-adaptive, and asym-adaptive approaches.
Table 1: Overview of the Internet Flow Dataset
Day of the week   Users    Rare CDR users   Frequent CDR users
Sunday            10,856   6,154            4,702
Monday            14,353   7,215            7,138
Due to a non-disclosure agreement with the data owner, we cannot reveal the geographical area or the exact collecting period of this dataset.
Available at https://macaco.inria.fr/MACACOApp/.
We set the duration of text messages to 0 seconds.
Used to deal with the unbalanced occurrence of the types.
Daytime interval covered by the used dataset (cf. Sec. 3.2).
ing sets from the CDR entries of the remaining users. We then let them generate
Acknowledgment
This work is supported by the EU FP7 ERANET program under grant CHIST-ERA-2012 MACACO and is performed in the context of the EMBRACE Associated Team of Inria.
value of the loss L(d ±, F ± (x)), i.e., F sym (x) = arg min
Learning technique. In order to compute the approximations, we utilize a typical supervised machine learning technique, i.e., Gradient Boosted Regression Trees (GBRT) [START_REF] Friedman | Greedy function approximation: a gradient boosting machine[END_REF][START_REF] Friedman | The elements of statistical learning[END_REF]. Although several supervised learning techniques can be adopted, we pick the GBRT technique because (i) it is a well-understood approach with thoroughly tested implementations, (ii) it has advantages over alternative techniques, in terms of predictive power, training speed, and flexibility to accommodate heterogeneous input (which is our case) [START_REF]Ensemble methods[END_REF], and (iii) it returns quantitative measures about the contribution of each factor to the overall approximation [START_REF] Friedman | Greedy function approximation: a gradient boosting machine[END_REF]. In the GBRT technique, an approximation function is the weighted sum of an ensemble of regression trees.
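In practice, such an ensemble can be built with off-the-shelf tooling. The sketch below is only an illustration of the approach, not the code used in this work: it assumes scikit-learn's GradientBoostingRegressor, hypothetical placeholder files for the factor matrix and the boundary targets, and a parameter grid for the number of trees M and the learning rate ν explored by cross validation, as discussed in the following.

    # Illustrative sketch (not this paper's code): fitting a GBRT approximation of
    # the boundary duration from the factor vector x, as in the sym-adaptive case.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV

    # Hypothetical training set: one row per CDR entry, one column per factor
    # (activity time, type, duration, URg, ULoc, UDAY, cell radius, ...).
    X_train = np.load("factors.npy")        # placeholder file name
    d_train = np.load("boundaries.npy")     # symmetric boundary durations, placeholder

    # M (number of trees) and the learning rate nu are selected by cross validation.
    grid = {"n_estimators": [100, 300, 500], "learning_rate": [0.01, 0.05, 0.1]}
    search = GridSearchCV(GradientBoostingRegressor(), grid, cv=5)
    search.fit(X_train, d_train)
    F_sym = search.best_estimator_

    # Relative contribution of each factor to the ensemble (cf. Fig. 12).
    print(F_sym.feature_importances_)

The asym-adaptive variant would train two such models, one for each of the asymmetric targets behind F + asym and F -asym.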
Each tree divides the input space (i.e., the vector x of factors) into disjoint regions and predicts a constant value in each region. The GBRT technique combines the predictive power of many regression trees, each with weak predictive performance, into a joint predictor: it is proved that the performance of such a joint predictor is better than that of each single regression tree [START_REF] Friedman | The elements of statistical learning[END_REF]. The ensemble is initialized with a single-leaf tree (i.e., a constant value). During each iteration, a new regression tree is added to the ensemble by minimizing the loss function via gradient descent. An algorithm of the GBRT technique for building the approximation of the function F sym in the sym-adaptive approach is given in Alg. 1. In the algorithm, the function FitRegrTree is used to build a regression tree based on the input and the gradients of the function in the last iteration; we refer the reader to [31, Chapter 9.2.2] for details. The algorithm has two important tuning parameters, i.e., the number of iterations M (i.e., the number of regression trees to be added to the ensemble) and the learning rate ν (i.e., the level of contribution expected from a new regression tree), which we determine via cross validation and discuss later. In the asym-adaptive approach, the same algorithm is used except that the training set X ± is replaced by X.
Experiments. The first step is to build the training sets. For that, we randomly select 50% of the users from the two available days (i.e., a Monday and a Sunday) in the Internet flow dataset (cf. Sec. 3.2). In particular, from the CDR
71,038
[ "6477", "176356", "948034" ]
[ "300340", "1289", "267245", "47325", "531448" ]
01756934
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756934/file/MMSJ.2018.pdf
Chiheb Ben Ameur Emmanuel Mory Bernard Cousin Eugen Dedu
Performance Evaluation of TcpHas: TCP for HTTP Adaptive Streaming
Keywords: HTTP Adaptive Streaming, TCP Congestion Control, Cross-layer Optimization, Traffic Shaping, Quality of Experience, Quality of Service
HTTP Adaptive Streaming (HAS) is a widely used video streaming technology that suffers from a degradation of the user's Quality of Experience (QoE) and the network's Quality of Service (QoS) when many HAS players share the same bottleneck link and compete for bandwidth. The two major factors of this degradation are: the large OFF period of HAS, which causes false bandwidth estimations, and the TCP congestion control, which is not suitable for HAS given that it does not consider the different video encoding bitrates of HAS. This paper proposes a HAS-based TCP congestion control, TcpHas, that minimizes the impact of the two aforementioned issues. It does this by using traffic shaping on the server. Simulations indicate that TcpHas improves both QoE, mainly by reducing instability and improving convergence speed, and QoS, mainly by reducing queuing delay and packet drop rate.
Introduction
Video streaming is a widely used service. According to the 2016 Sandvine report [START_REF]Global internet phenomena report[END_REF], in North America, video and audio streaming in fixed access networks accounts for over 70% of the downstream bandwidth in evening hours. Given this high usage, it is of extreme importance to optimize its use. This is usually done by adapting the video to the available bandwidth. Numerous adaptation methods have been proposed in the literature and by major companies, and their differences mainly rely on the entity that does the adaptation (client or server), the variable used for adaptation (the network or sender or client buffers), and the protocols used; the major companies having finally opted for HTTP [START_REF] Dedu | A taxonomy of the parameters used by decision methods for adaptive video transmission[END_REF]. HTTP Adaptive Streaming (HAS) is a streaming technology where video contents are encoded and stored at different qualities at the server and where players (clients) can periodically choose the quality according to the available resources. Popular HAS-based methods are Microsoft Smooth Streaming, Apple HTTP Live Streaming, and MPEG DASH (Dynamic Adaptive Streaming over HTTP). Still, this technology is not optimal for video streaming, mainly because its HTTP data is transported using the TCP protocol. Indeed, video data is encoded at distinct bitrates, and TCP does not increase the throughput sufficiently quickly when the bitrate changes. TCP variants (such as Cubic, Illinois, and Westwood+) specific to high bandwidth-delay product networks achieve high bandwidth more quickly and seem to give better performance for HAS service than classical TCP variants such as NewReno and Vegas [START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF], but the improvement is limited. Another reason for this suboptimality is the highly periodic ON-OFF activity pattern specific to HAS [START_REF] Akhshabi | What happens when HTTP adaptive streaming players compete for bandwidth?[END_REF].
Currently, a HAS player estimates the available bandwidth by computing the download bitrate for each chunk at the end of the download (for the majority of players, this estimation is done by dividing the chunk size by its download duration). As such, it is impossible for a player to estimate the available bandwidth when no data is being received, i.e. during OFF periods. Moreover, when several HAS stream compete in the same home network, bandwidth estimation becomes more difficult. For example, if the ON period of a player coincides with the OFF period of a second player, the first player will overestimate its available bandwidth, and makes it select for the next chunk a quality level higher than in reality. This, in turn, could lead to a congestion event if the sum of the downloading bitrates of the two players exceeds the available bandwidth of the bottleneck. An example is given in [START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF] (table 4): the congestion rate for two competing HAS clients is considerably reduced when using a traffic shaping. Finally, unstable quality levels are harmful to user's Quality of Experience (QoE) [START_REF] Seufert | A survey on quality of experience of HTTP adaptive streaming[END_REF]. Traffic shaping, which expands the ON periods and shrinks the OFF periods, can considerably limit the drawbacks mentioned above [START_REF] Abdallah | Cross layer optimization architecture for video streaming in WiMAX networks[END_REF][START_REF] Houdaille | Shaping HTTP adaptive streams for a better user experience[END_REF][START_REF] Ameur | Shaping HTTP adaptive streams using receive window tuning method in home gateway[END_REF][START_REF] Villa | Group based traffic shaping for adaptive HTTP video streaming by segment duration control[END_REF][START_REF] Akhshabi | Server-based traffic shaping for stabilizing oscillating adaptive streaming players[END_REF][START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF]. One method to reduce occurrences of ON-OFF patterns is to use server-based shaping at application layer [START_REF] Akhshabi | Server-based traffic shaping for stabilizing oscillating adaptive streaming players[END_REF]. This approach is cross-layer because it interacts with the TCP layer and its parameters such as the congestion window, cwnd, and the round-trip time estimation, RTT. Hence, implementing HAS traffic shaping at the TCP level is naturally more practical and easier to manage; in addition, this should offer better bandwidth share among HAS streams, reduce congestion events and improve the QoE of HAS users. Despite the advantages of using a transport layer-based method for HAS, and in contrast with other types of streaming, where methods at the transport layer have already been proposed (RTP, Real-time Transport Protocol, and TFRC, TCP Friendly Rate Control [START_REF] Floyd | TCP Friendly Rate Control (TFRC): Protocol specification[END_REF]), to the best of our knowledge, there is no proposition at the transport level specifically designed for HAS. For commercial video providers YouTube, Dailymotion, Vimeo and Netflix, according to [START_REF] Hoquea | Mobile multimedia streaming techniques: QoE and energy saving perspective[END_REF], "The quality switching algorithms are implemented in the client players. A player estimates the bandwidth continuously and transitions to a lower or to a higher quality stream if the bandwidth permits." 
The streaming depends on many parameters, such as player, video quality, device and video service provider etc., and uses various techniques such as several TCP connections, variable chunk sizes, different processing for audio and video flows, different throttling factors etc. To conclude, all these providers use numerous techniques, all of them based on client. Therefore, in this paper, we extend our previous work [START_REF] Ameur | TcpHas: TCP for HTTP adaptive streaming[END_REF] by proposing a HAS-oriented TCP congestion control variant, TcpHas, that aims to minimize the aforementioned issues (TCP throughput insufficient increase and ON-OFF pattern) and to unify all the techniques given in the previous paragraph. It uses four sub-modules: bandwidth estimator, optimal quality level estimator, ssthresh adjusting, and cwnd adjusting to the shaping rate. Simulation results show that TcpHas considerably improves both QoS (queuing delay, packet drop rate) and QoE (stability, convergence speed), performs well with several concurrent clients, and does not cause stalling events. The remainder of this paper is organized as follows: Section 2 presents server-based shaping methods and describes possible optimizations at TCP level. Then, Section 3 describes TcpHas congestion control and Section 4 evaluates it. Section 5 concludes the article. Background and related works Our article aims to increase QoE and QoS by fixing the ON-OFF pattern. Many serverbased shaping methods have been proposed in the literature to improve QoE and QoS of HAS. Their functioning is usually separated into two modules: 1. Estimation of the optimal quality level, based on network conditions, such as bandwidth, delay, and/or history of selected quality levels, and available encoding bitrates of the video. 2. The shaping function of the sending rate, which should be suitable to the encoding bitrate of the estimated optimal quality level. The next two subsections describe constraints and proposed solutions for each module. The last subsection presents some possible ways of optimization, which provides the basis for the TcpHas design. Optimal Quality Level Estimation A major constraint of optimal quality level estimation is that the server has no visibility on the set of flows that share the bottleneck link. Ramadan et al. [START_REF] Ramadan | Avoiding quality oscillations during adaptive streaming of video[END_REF] propose an algorithm to reduce the oscillations of quality during video adaptation. During streaming, it marks each quality as unsuccessful or successful, depending on whether it has led to lost packets or not. A successfulness value is thus attached to each quality, and is updated regularly using an EWMA (Exponential Weighted Moving Average) algorithm. The next quality increase is allowed if and only if its successfulness value does not exceed some threshold. We note that, to discover the available bandwidth, this method increases throughput and pushes to packet drop, which is different from our proposed method, where the available bandwidth is computed using an algorithm. Akhshabi et al. [START_REF] Akhshabi | Server-based traffic shaping for stabilizing oscillating adaptive streaming players[END_REF] propose a server-based shaping method that aims to stabilize the quality level sent by the server by detecting oscillation events. The shaping function is activated only when oscillations are detected. The optimal quality level is based on the history of quality level oscillations. 
Then, the server shapes its sending rate based on the encoding bitrate of the estimated optimal quality level. However, when the end-to-end available bandwidth increases, the HAS player cannot increase its quality level when the shaper is activated. This is because the sending bitrate is limited on the server side and when the endto-end available bandwidth increases, the player is still stable on the same quality level that matches the shaping rate. To cope with that, the method deactivates the shaping function for some chunks and uses two TCP parameters, RTT and cwnd, to compute the connection throughput that corresponds to the end-to-end available bandwidth ( cwnd RT T ). If the estimated bandwidth is higher than the shaping rate, the optimal quality level is increased to the next higher quality level and the shaping rate is increased to follow its encoding bitrate. We note that this method is implemented in the application layer. It takes as inputs the encoding bitrates of delivered chunks and two TCP parameters (RTT and cwnd). The authors indicate that their method stabilizes the competing players inside the same home network without significant bandwidth utilization loss. Accordingly, the optimal quality estimation process is based on two different techniques: quality level oscillation detection and bandwidth estimation using the throughput measurement. The former is based on the application layer information (i.e., the encoding bitrate of the actual and previous sent chunks) and is sufficient to activate the shaping function (i.e., the shaper). However, to verify whether the optimal quality level has been increased or not, the server is obliged to deactivate the shaper to let the TCP congestion control algorithm occupy the remaining capacity available for the HAS stream. Although this proposed method offers performance improvements on both QoE and QoS, the concept of activating and deactivating the shaper is not sufficiently solid, especially against unstable network conditions, and raises a number of open questions about the duration of deactivation of the traffic shaping and its impact on increasing the OFF period duration. In addition, this method is not proactive and the shaper is activated only in the case of quality level oscillation. What is missing in this proposed method is a good estimation of the available bandwidth for the HAS flow. This method relies on the throughput measurement during non-shaped phases. If a bandwidth estimation less dependent on cwnd could be given, we could keep the shaper activated during the whole HAS stream and adapt the estimation of optimal quality level to the estimation of available bandwidth. Traffic Shaping Methods Ghobadi et al. propose a shaping method on the server side called Trickle [START_REF] Ghobadi | Trickle: Rate limiting youtube video streaming[END_REF]. It was proposed for YouTube in 2011, when it adopted progressive download technology. Its key idea is to place a dynamic upper bound on cwnd such that TCP itself limits the overall data rate. The server application periodically computes the cwnd bound from the product between the round-trip time (RTT) and the target streaming bitrate. Then it uses a socket option to apply it to the TCP socket. Their results show that Trickle reduces the average RTT by up to 28% and the average TCP loss rate by up to 43%. However, HAS differs from progressive download by the change of encoding bitrate during streaming. 
Nevertheless, Trickle can also be used with HAS by adapting the cwnd bound to the encoding bitrate of each chunk. We note that the selection of the shaping rate by the server-based shaping methods does not mean that the player will automatically start requesting that next higher quality level [START_REF] Akhshabi | Server-based traffic shaping for stabilizing oscillating adaptive streaming players[END_REF]. The transition to another shaping rate may take place several chunks later, depending on the player's bitrate controller and the server-based shaping method efficiency. Furthermore, it was reported [START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF] that ssthresh has a predominant effect on the convergence speed of the HAS client to select the desired optimal quality level. Indeed, when ssthresh is set higher than the product of shaping rate and RTT, the server becomes aggressive and causes congestions and a reduction of quality level selection on the player side. In contrast, when ssthresh is set lower than this product, cwnd takes several RTTs to reach the value of this product, because in the congestion avoidance phase the increase of cwnd is relatively slow (one MSS, Maximum Segment Size, each RTT). Consequently, the server becomes conservative and needs a long time to occupy its selected shaping rate. Hence, the player would have difficulties reaching its optimal quality level. Accordingly, shaping the sending rate by limiting cwnd, as described in Trickle, has a good effect on improving the QoE of HAS. However, it is still insufficient to increase the reactivity of the HAS player and consequently to accelerate the convergence speed. Hence, to improve the performance of the shaper, ssthresh needs to be modified too. The value of ssthresh should be set at the right value that allows the server to quickly reach the desired shaping rate. Optimization of Current Solutions What can be noted from the different proposed methods for estimating the optimal quality level is that an efficient end-to-end estimator of available bandwidth can improve their performance, as shown in Subsection 2.1. In addition, the only parameter from the application layer needed for shaping the HAS traffic is the encoding bitrate of each available quality level of the corresponding HAS stream. As explained in Subsection 2.2, the remaining parameters are found in the TCP layer: the congestion window cwnd, the slow-start threshold ssthresh, and the round-trip time RTT. We are particularly interested in adjusting ssthresh to accelerate the convergence speed. This is summed up in figure 1. Naturally, what is missing here is an efficient TCP-based method for end-to-end bandwidth estimation. We also need a mechanism that adjusts ssthresh based on the output of the bandwidth estimator scheme. Both this mechanism and estimation schemes used by various TCP variants are introduced in the following. Adaptive Decrease Mechanism In the literature, we found a specific category of TCP variants that set ssthresh using bandwidth estimation. Even if the estimation is updated over time, TCP uses it only when a congestion event is detected. 
The usefulness of this mechanism, known as adaptive decrease mechanism, is described in [START_REF] Mascolo | Testing TCP Westwood+ over transatlantic links at 10 gigabit/second rate[END_REF] as follows: "the adaptive window setting provides a congestion window that is decreased more in the presence of heavy congestion and less in the presence of light congestion or losses that are not due to congestion, such as in the case of losses due to unreliable links". This low frequency of ssthresh updating (only after congestion detection) is justified in [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF] by the fact that, in contrast, a frequent updating of ssthresh tends to force TCP into congestion avoidance phase, preventing it from following the variations in the available bandwidth. Hence, the unique difference of this category from the classical TCP congestion variant is only the adaptive decrease mechanism when detecting a congestion, i.e., when receiving three duplicated ACKs or when the retransmission timeout expires. This mechanism is described in Algorithm 1. Algorithm 1 TCP adaptive decrease mechanism. We remark that the algorithm uses the estimated bandwidth, Bwe, multiplied by RT T min to update the ssthresh value. The use of RT T min instead of the actual RTT is justified by the fact that RT T min can be considered as an estimation of RTT of the connection when the network is not congested. Bandwidth Estimation Schemes The most common TCP variant that uses bandwidth estimation to set ssthresh is Westwood. Other newer variants have been proposed, such as Westwood+ and TIBET (Time Intervalsbased Bandwidth Estimation Technique). The only difference between them is the bandwidth estimation scheme used. In the following, we introduce the different schemes and describe their performance. Westwood estimation scheme [START_REF] Mascolo | TCP Westwood: Bandwidth estimation for enhanced transport over wireless links[END_REF]: The key idea of Westwood is that the source performs an end-to-end estimation of the bandwidth available along a TCP connection by measuring the rate of returning acknowledgments [START_REF] Mascolo | TCP Westwood: Bandwidth estimation for enhanced transport over wireless links[END_REF]. It consists of estimating this bandwidth by properly filtering the flow of returning ACKs. A sample of available bandwidth Bwe k is computed each time t k the sender receives an ACK: Bwe k = d k t k -t k-1 (1) where d k is the amount of data acknowledged by the ACK that is received at time t k . d k is determined by an accurate counting procedure by taking into consideration delayed ACKs, duplicate ACKs and selective ACKs. Then, the bandwidth samples Bwe k are low-pass filtered by using a discrete-time low-pass filter to obtain the bandwidth estimation Bwe k . The low-pass filter employed is generally the exponentially-weighted moving average function: Bwe k = γ × Bwe k-1 + (1 -γ) × Bwe k (2) where 0 ≤ γ ≤ 1. Low-pass filtering is necessary because congestion is due to low-frequency components of the available bandwidth, and because of the delayed ACK option [START_REF] Li | Link capacity allocation and network control by filtered input rate in high-speed networks[END_REF][START_REF] Mascolo | Additive increase early adaptive decrease mechanism for TCP congestion control[END_REF]. However, this estimation scheme is affected by the ACK compression phenomenon. 
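To make the mechanism concrete, the following sketch (our own illustration, not the original Westwood code) combines the per-ACK sample of Equation (1), the low-pass filter of Equation (2), and the adaptive decrease that sets ssthresh to Bwe × RT T min when a congestion is detected; the reaction to a retransmission timeout follows the usual Westwood-style formulation, which is not reproduced in the excerpt above and is therefore an assumption here.

    # Sketch (ours) of Westwood-style bandwidth sampling and adaptive decrease.
    class WestwoodLikeEstimator:
        def __init__(self, gamma=0.9, mss=1460):
            self.gamma = gamma            # pole of the low-pass filter of Eq. (2)
            self.mss = mss
            self.bwe = 0.0                # smoothed bandwidth estimate (bytes/s)
            self.last_ack_time = None

        def on_ack(self, acked_bytes, now):
            # Bandwidth sample of Eq. (1), filtered as in Eq. (2).
            if self.last_ack_time is not None and now > self.last_ack_time:
                sample = acked_bytes / (now - self.last_ack_time)
                self.bwe = self.gamma * self.bwe + (1.0 - self.gamma) * sample
            self.last_ack_time = now

        def on_congestion(self, rtt_min, timeout):
            # Adaptive decrease: ssthresh follows the estimate, not a blind halving.
            ssthresh = max(2 * self.mss, self.bwe * rtt_min)
            cwnd = self.mss if timeout else ssthresh   # assumed timeout behavior
            return ssthresh, cwnd

As noted above, this per-ACK sampling is exactly what ACK compression perturbs.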
This phenomenon occurs when the time spacing between the received ACKs is altered by the congestion of the routers on the return path [START_REF] Zhang | Observations on the dynamics of a congestion control algorithm: The effects of two-way traffic[END_REF]. In fact, when ACKs pass through one congested router, which generates additional queuing delay, they lose their original time spacing because during forwarding they are spaced by the short ACK transmission time [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF]. The result is ACK compression that can lead to bandwidth overestimation when computing the bandwidth sample Bwe k . Moreover, the low-pass filtering process is also affected by ACK compression because it cannot filter bandwidth samples that contain a high-frequency component [START_REF] Mascolo | Additive increase early adaptive decrease mechanism for TCP congestion control[END_REF]. Accordingly, the ACK compression causes a systematic bandwidth overestimation when using the Westwood bandwidth estimation scheme. ACK compression is commonly observed in real network operation [START_REF] Mogul | Observing TCP dynamics in real networks[END_REF] and thus should not be neglected in the estimation scheme. Another phenomenon that distorts the Westwood estimation scheme is clustering: As already noted [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF][START_REF] Zhang | Observations on the dynamics of a congestion control algorithm: The effects of two-way traffic[END_REF], the packets belonging to different TCP connections that share the same link do not intermingle. As a consequence, many consecutive packets of the same connection can be observed on a single channel. This means that each connection uses the full bandwidth of the link for the time needed to transmit its cluster of packets. Hence, a problem of fairness between TCP connections is experienced when the estimation scheme does not take the clustering phenomenon into consideration and continues to estimate the bandwidth of the whole shared bottleneck link instead of their available bandwidth. Westwood+ estimation scheme [START_REF] Mascolo | Performance evaluation of Westwood+ TCP congestion control[END_REF]: To estimate correctly the bandwidth and alleviate the effect of ACK compression and clustering, a TCP source should observe its own link utilization for a time longer than the time needed for entire cluster transmission. For this purpose, Westwood+ modifies the bandwidth estimation (Bwe) mechanism to perform the sampling every RTT instead of every ACK reception as follows: Bwe = d RT T RT T (3) where d RT T is the amount of data acknowledged during one RT T . As indicated in [START_REF] Mascolo | Performance evaluation of Westwood+ TCP congestion control[END_REF], the result is a more accurate bandwidth measurement that ensures better performance when compared with NewReno and it is still fair when sharing the network with other TCP connections. Bwe is updated once per RT T . The bandwidth estimation samples are low-pass filtered to give a better smoothed estimation of Bwe. However, the amount of acknowledged data during one RT T (d RT T ) is bounded by the sender's window size, min(cwnd, rwnd), which is defined by the congestion control algorithm. In fact, min(cwnd, rwnd) defines the maximum amount of data to be transmitted during one RT T . 
Consequently, the bandwidth estimation of Westwood+, given by each sample Bwe, is still always lower than the sender sending rate (Bwe ≤ min(cwnd,rwnd) RT T ). Hence, although the Westwood+ estimation scheme reduces the side effects of ACK compression and clustering, it is still dependent on the sender sending rate rather than the available bandwidth of the corresponding TCP connection. TIBET estimation scheme [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF][START_REF] Capone | Bandwidth estimates in the TCP congestion control scheme[END_REF]: TIBET (Time Interval-based Bandwidth Estimation Technique) is another technique that gives a good estimation of bandwidth even in the presence of packet clustering and ACK compression. The basic idea of TIBET is to perform a run-time sender-side estimate of the average packet length and the average inter-arrival separately. The bandwidth estimation scheme is applied to the stream of the received ACKs and is described in Algorithm 2 [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF], where acked is the number of segments acknowledged by the last ACK, packet size is the average segment size in bytes, now is the current time and last ack time is the time of the previous ACK reception. Average packet length and Average interval are the low-pass filtered measures of the packet length and the interval between sending times. Algorithm 2 Bandwidth estimation scheme. Al pha (0 ≤ al pha ≤ 1) is the pole of the two low-pass filters. The value of al pha is critical to TIBET performance: If al pha is set to a low value, TIBET is highly responsive to changes in the available bandwidth, but the oscillations of Bwe are quite large. In contrast, if al pha approaches 1, TIBET produces more stable estimates, but is less responsive to network changes. Here, we note that if al pha is set to zero we have the Westwood bandwidth estimation scheme, where the sample Bwe varies between 0 and the bottleneck bandwidth. TIBET estimation scheme uses a second low-pass filtering, with parameter γ, on the estimated available bandwidth Bwe to give a better smoothed estimation Bwe, as described in Equation 2. γ is a variable parameter, equal to e -T k , where T k = t k -t k-1 is the time interval between the two last received ACKs. This means that bandwidth estimation samples Bwe with high T k values are given more importance than those with low T k values. Simulations [START_REF] Capone | Bandwidth estimation schemes for TCP over wireless networks[END_REF] indicate that TIBET gives bandwidth estimations very close to the correct values, even in the presence of other UDP flows with variable rates or other TCP flows. TcpHas Description As shown in the previous section, a protocol specific to HAS needs to modify several TCP parameters and consist of several algorithms. Our HAS-based TCP congestion control, TcpHas, is based on the two modules of server-based shaping solution: optimal quality level estimation and sending traffic shaping itself, both with two submodules. The first module uses a bandwidth estimator submodule inspired by the TIBET scheme and adapted to HAS context, and an optimal quality level estimator submodule to define the quality level, QLevel, based on the estimated bandwidth. The second module uses QLevel in two submodules that update respectively the values of ssthresh and cwnd over time. 
This section progressively presents TcpHas by describing the four submodules, i.e., the bandwidth estimator, the optimal quality level estimator, ssthresh updating process, and cwnd value adaptation to the shaping rate. Bandwidth Estimator of TcpHas As described in Section 2, TIBET performs better than other proposed schemes. It reduces the effect of ACK compression and packet clustering and is less dependent on the congestion window than Westwood+. The parameter γ used by TIBET to smooth Bwe estimations (see Equation 2) is variable and equal to e -T k . However, this variability is not suited to HAS. Indeed, when the HAS stream has a large OFF period, the HTTP GET request packet sent from client to server to ask for a new chunk is considered by the server as a new ACK. As a consequence, the new bandwidth estimation sample, Bwe, will have an underestimated value and γ will be reduced. Hence, this filter gives higher importance to an underestimated value to the detriment of the previous better estimations. For example, if the OFF period is equal to 1 second, γ will be equal to 0.36, which means that a factor of 0.64 is given to the new underestimated value in the filtering process. Consequently, the smoothed bandwidth estimation, Bwe, will be reduced at each high OFF period. However, the objective is rather to maintain a good estimation of available bandwidth, even in the presence of large OFF periods. For this purpose, we propose to make parameter γ constant. Hence, the bandwidth estimator of TcpHas is the same as in the TIBET bandwidth estimation scheme, except for the low-pass filtering process: we use a constant value of γ instead of e -T k as defined by TIBET. Optimal Quality Level Estimator of TcpHas TcpHas' optimal quality level estimator is based on the estimated bandwidth, Bwe, described in Subsection 3.1. This estimator is a function that adapts HAS features to TCP congestion control and replaces Bwe value by the encoding bitrate of the estimated optimal quality level QLevel. One piece of information from the application layer is needed: the available video encoding bitrates, which are specified in the index file of the HAS stream. In TcpHas they are specified, in ascending order, in the EncodingRate vector. TcpHas' estimator is defined by the function QLevelEstimator, described in Algorithm 3, which selects the highest quality level whose encoding bitrate is equal to or lower than the estimated bandwidth, Bwe. Algorithm 3 QLevelEstimator function. 1: for i = length(EncodingRate) -1 downto 0 do 2: if EncodingRate[i] ≤ Bwe then 3: QLevel = i 4: return 5: end if 6: end for 7: QLevel = 0 QLevel parameter is updated only by this function. However, the time and frequency of its updating is a delicate issue: -We need to use the adaptive decrease mechanism (see Algorithm 1), because when a congestion occurs QLevel needs to be updated to the new network conditions. Hence, this function is called after each congestion detection. -Given that TcpHas performs a shaping rate that reduces OFF occupancy, when TcpHas detects an OFF period, it may mean that some network conditions have changed (e.g. an incorrect increase of the shaping rate). Accordingly, to better estimate the optimal quality level, this function is called after each OFF period. 
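A compact sketch of these two estimation submodules is given below. The QLevelEstimator function transcribes Algorithm 3 directly; the bandwidth estimator follows our reading of the TIBET description (the body of Algorithm 2 is not reproduced above), with a constant smoothing factor γ in the final filter as just explained. Names, default values, and units are ours; the encoding rates and the estimate must simply use the same unit.

    # Sketch (ours) of the TcpHas estimation submodules described above.
    def qlevel_estimator(encoding_rates, bwe):
        # Algorithm 3: highest quality level whose encoding bitrate is <= Bwe.
        for i in range(len(encoding_rates) - 1, -1, -1):
            if encoding_rates[i] <= bwe:
                return i
        return 0

    class TcpHasBandwidthEstimator:
        # TIBET-style estimator with a constant pole gamma instead of e^(-T_k),
        # so that a large OFF period does not drag the estimate down.
        def __init__(self, alpha=0.8, gamma=0.99, initial_bwe=0.0):
            self.alpha = alpha            # pole of the two TIBET low-pass filters
            self.gamma = gamma            # constant pole of the final filter
            self.avg_packet_len = 0.0
            self.avg_interval = 0.0
            self.bwe = initial_bwe        # smoothed estimate Bwe
            self.last_ack_time = None

        def on_ack(self, acked_segments, packet_size, now):
            if self.last_ack_time is not None:
                self.avg_packet_len = (self.alpha * self.avg_packet_len
                                       + (1 - self.alpha) * acked_segments * packet_size)
                self.avg_interval = (self.alpha * self.avg_interval
                                     + (1 - self.alpha) * (now - self.last_ack_time))
                if self.avg_interval > 0:
                    sample = self.avg_packet_len / self.avg_interval
                    self.bwe = self.gamma * self.bwe + (1 - self.gamma) * sample
            self.last_ack_time = now

The resulting QLevel then drives the ssthresh and cwnd adjustments of the next two subsections.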
The EncodingRate vector is also used by TcpHas during application initialization to differentiate between a HAS application and a normal one: when the application returns an empty vector, it is a normal application, and TcpHas just makes this application be processed by classical TCP, without being involved at all. Ssthresh Modification of TcpHas The TCP variants that use the TCP decrease mechanism use RT T min multiplied by the estimated bandwidth, Bwe, to update ssthresh. However, given that the value of ssthresh affects the convergence speed, it should correspond to the desired shaping rate instead of Bwe. Also, the shaping rate is defined in Trickle [START_REF] Ghobadi | Trickle: Rate limiting youtube video streaming[END_REF] to be 20% higher than the encoding bitrate, which allows the server to deal better with transient network congestion. Hence, for TcpHas we decided to replace Bwe by EncodingRate[ QLevel] × 1.2, which represents its shaping rate: ssthresh = EncodingRate[ QLevel] × RT T min × 1.2 (4) The timing of ssthresh updating is the same as that of QLevel: when detecting a congestion event and just after an idle OFF period. Moreover, the initial value of ssthresh should be modified to correspond to the context of HAS. These three points are presented in the following. Congestion Events Inspired by Algorithm 1, the TcpHas algorithm when detecting a congestion event is described in Algorithm 4. It includes the two cases of congestion events: three duplicated ACKs, and retransmission timeout. In both cases, Qlevel is updated from Bwe using the QLevelEstimator function. Then, ssthresh is updated according to Equation 4. The update of cwnd is as in Algorithm 1. Algorithm 4 TcpHas algorithm when congestion occurs. Idle Periods As explained in [START_REF] Allman | TCP congestion control[END_REF][START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF], the congestion window is reduced when the idle period exceeds the retransmission timeout RTO, and ssthresh is updated to max(ssthresh, 3/4 × cwnd). In HAS context, the idle period coincides with the OFF period. In addition, we denote by OFF the OFF period whose duration exceeds RTO. Accordingly, reducing cwnd after an OFF period will force cwnd to switch to slow-start phase although the server is asked to deliver the video content with the optimal shaping rate. To avoid this, we propose to remove the cwnd reduction after the OFF period. Instead, as presented in Algorithm 5, TcpHas updates Qlevel and ssthresh, then sets cwnd to ssthresh. This modification is very useful in the context of HAS. On the one hand, it eliminates the sending rate reduction after each OFF period, which adds additional delay to deliver the next chunk and may cause a reduction of quality level selection on the player side. On the other hand, the update of ssthresh each OFF period allows the server to adjust its sending rate more correctly, especially when the client generates a high OFF period between two consecutive chunks. Algorithm 5 TcpHas algorithm after an OFF period. Initialization By default, TCP congestion control uses an initial value of ssthresh, initial ssthresh, of 65535 bytes. The justification comes from the TCP classical goal to occupy quickly (exponentially) the whole available end-to-end bandwidth. However, in HAS context, initial ssthresh is better to match an encoding bitrate. 
We decided to set it to the highest quality level at the beginning of streaming for two reasons: 1) to give a similar initial aggressiveness as classical TCP and 2) to avoid setting it higher than the highest encoding bitrate to maintain the HAS traffic shaping concept. This initialization should be done in conformity with Equation 4, hence the computation of RTT is needed. Consequently, TcpHas just updates the ssthresh when the first RTT is computed. In this case, our updated ssthresh serves the same purpose as initial ssthresh. TcpHas initialization is presented in Algorithm 6. Cwnd Modification of TcpHas for Traffic Shaping As shown in Subsection 2.2, Trickle does traffic shaping on the server-side by setting a maximum threshold for cwnd, equal to the shaping rate multiplied by the current RTT. However, during congestion avoidance phase (i.e., when cwnd > ssthresh), cwnd is increased very slowly by one MSS each RTT. Consequently, when cwnd is lower than this threshold, it takes several RTTs to reach it, i.e. a slow reactivity. Algorithm 6 TcpHas initialization. 1: QLevel = length(EncodingRate) -1 the highest quality level 2: cwnd = initial cwnd 3: ssthresh = initial ssthresh i.e. 65535 bytes 4: RT T = 0 5: if new ACK is received then 6: if RT T = 0 then i.e. when the first RTT is computed 7: ssthresh = EncodingRate[ QLevel] × RT T × 1.2 8: end if 9: end if To increase its reactivity, we modify TCP congestion avoidance algorithm by directly tuning cwnd to match the shaping rate. Given that TcpHas does not change its shaping rate (EncodingRate[ QLevel] × 1.2) during the congestion avoidance phase (see Subsection 3.2), we update cwnd according to the RTT variation. However, in this case, we are faced to the following dilemma related to RTT variation: -On the one hand, the increase of RTT means that queuing delay increases and could cause congestion when the congestion window is still increasing. Worse, if the standard deviation of RTT is important (e.g., in the case of a wireless home network, or unstable network conditions), an important jitter of RTT would force cwnd to increase suddenly and cause heavy congestion. -On the other hand, the increase of RTT over time should be taken into account by the server in its cwnd updating process. In fact, during the ON period of a HAS stream, the RTT value is increasing [START_REF] Mansy | Sabre: A client based technique for mitigating the buffer bloat effect of adaptive video flows[END_REF]. Consequently, using a constant value of RT T (such as RT T min ) does not take into consideration this increase of RTT and may result in a shaping rate lower than the desirable rate. One way to mitigate RTT fluctuation and to take into account the increase of RTT during the ON period is to use smoothed RTT computations. We propose to employ a low-pass filter for this purpose. The smoothed RTT that is updated at each ACK reception is: RT T k = ψ × RT T k-1 + (1 -ψ) × RT T k (5) where 0 ≤ ψ ≤ 1. TcpHas algorithm during the congestion avoidance phase is described in Algorithm 7, where EncodingRate[ QLevel] × 1.2 is the shaping rate. Algorithm 7 TcpHas algorithm in congestion avoidance phase. 1: if new ACK is received and cwnd ≥ ssthresh then 2: cwnd = EncodingRate[ QLevel] × RT T × 1.2 3: end if To sum up, TcpHas is a congestion control optimized for video streaming of type HAS. It is implemented in the server, at transport layer, no other modifications are needed. 
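Putting Sections 3.3 and 3.4 together, the server-side reactions of TcpHas can be summarized as follows. This is a sketch of the logic of Equations (4)-(5) and Algorithms 4-7 only, not the ns-3 implementation; the congestion-window value after a retransmission timeout is assumed to restart from one MSS, as in the adaptive decrease mechanism, and helper names are ours.

    # Compact sketch (ours) of the TcpHas reactions; 'est' is a bandwidth estimator
    # and qlevel_estimator() a quality-level selector as sketched earlier.
    SHAPING_MARGIN = 1.2          # shaping rate = EncodingRate[QLevel] x 1.2

    class TcpHasShaper:
        def __init__(self, encoding_rates, est, psi=0.99, mss=1460):
            self.rates = encoding_rates
            self.est = est
            self.psi = psi                         # pole of the smoothed-RTT filter, Eq. (5)
            self.mss = mss
            self.qlevel = len(encoding_rates) - 1  # highest level at start (Algorithm 6)
            self.srtt = None
            self.rtt_min = None
            self.ssthresh = 65535                  # replaced at the first RTT measurement
            self.cwnd = 2 * mss

        def on_rtt_sample(self, rtt):
            if self.srtt is None:                  # first RTT: initialize ssthresh (Algorithm 6)
                self.srtt, self.rtt_min = rtt, rtt
                self.ssthresh = self.rates[self.qlevel] * rtt * SHAPING_MARGIN
            else:
                self.rtt_min = min(self.rtt_min, rtt)
                self.srtt = self.psi * self.srtt + (1 - self.psi) * rtt     # Eq. (5)

        def _refresh(self):
            self.qlevel = qlevel_estimator(self.rates, self.est.bwe)        # Algorithm 3
            self.ssthresh = self.rates[self.qlevel] * self.rtt_min * SHAPING_MARGIN  # Eq. (4)

        def on_congestion(self, timeout):          # Algorithm 4
            self._refresh()
            self.cwnd = self.mss if timeout else self.ssthresh

        def after_off_period(self):                # Algorithm 5: no cwnd reduction after OFF
            self._refresh()
            self.cwnd = self.ssthresh

        def on_ack_in_congestion_avoidance(self):  # Algorithm 7
            if self.cwnd >= self.ssthresh:
                self.cwnd = self.rates[self.qlevel] * self.srtt * SHAPING_MARGIN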
TcpHas needs only one information from the application layer: the encoding bitrates of the selected video level. It coexists gracefully with TCP on server, the transport layer simply checking whether the EncodingRate vector returned by application is empty or not, as explained in section 3.2. It is compatible with all TCP clients. There is no direct interaction between the client and the server to make the adaptation decision. When TcpHas performs bandwidth estimation (at the server), it is independent of the estimation made in the HAS player on the client side. The only objective of the bandwidth estimation of the server is to shape the sending bitrate in order to prevent the HAS player to select a quality level higher than the optimal one. Hence, the bandwidth estimation in the server provides a proactive optimization: it limits the sending bitrate before the client may select an inappropriate quality level. TcpHas Evaluation The final goal of our work is to implement our idea in real software. However, at this stage of the work, we preferred instead to use a simulated player because of the classical advantages of simulation over experimentation, such as reproducibility of results, and measurement of individual parameters for better parameter tuning. More precisely, we preferred to use a simulated player instead of a commercial player for the following main reasons: -First, the commercial players have complex implementation with many parameters. Besides, the bitrate controller is different among commercial players and even between different versions of the same player. Accordingly, using our own well-controlled player allows better evaluations than a "black box" commercial player that could give incomprehensible behaviors. -Second, some image-oriented perceptual factors used in a real player (e.g. video spatial resolution, frame rate or type of video codec) are of no interest for HAS evaluation. -Third, objective metrics are increasingly employed for HAS evaluation. In reliable flows, such as those using TCP, objective metrics lead to the same results no matter the file content. Hence, with a fully controlled simulated player we can easily get the variation of its parameters during time and use them for objective QoE metric computation. -Fourth, for our simulations, we need to automatize events such as triggering the beginning and the end of HAS flows at precise moments. Using a simulated player offers easier manipulation than a real player, especially when many players need to be launched simultaneously. In this section, we evaluate TcpHas using the classical ns-3 simulator, version 3.17. In our scenario, several identical HAS players share the same bottleneck link and compete for bandwidth inside a home network. We first describe the network setup used in all of the simulations. Then, we describe the parameter settings of TcpHas. Afterwards, we show the behavior of TcpHas compared to the other methods. Finally, after describing the QoE and QoS metrics used, we analyze results for 1 to 9 competing HAS flows in the home network and with background traffic. Simulation setup Fig. 2 presents the architecture we used, which is compliant with the fixed broadband access network architecture used by Cisco to present its products [START_REF]Broadband network gateway overview[END_REF]. The HAS clients are located inside the home network, a local network with 100 Mbps bandwidth. The Home Gateway (HG) is connected to the DSLAM. The bottleneck link is located between HG and DSLAM and has 8 Mbps. 
The queue of the DSLAM uses Drop Tail discipline with a length that corresponds to the bandwidth-delay product. Nodes BNG (Broadband Network Gateway) and IR (Internet Router), and links AggLine (that simulates the aggregate line), ISPLink (that simulates the Internet Service Provider core network) and NetLink (that simulates the route between the IR and the HAS server) are configured so that their queues are large enough (1000 packets) to support a large bandwidth of 100 Mbps and high delay of 100 ms without causing significant packet losses. We generate Internet traffic that crosses ISPLink and AggLine, because the two simulated links are supposed to support a heavy traffic from ISP networks. For Internet traffic, we use the Poisson Pareto Burst Process (PPBP) model [START_REF] Zukerman | Internet traffic modeling and future technology implications[END_REF], considered as a simple and accurate traffic model that matches statistical properties of real-life IP networks (such as their bursty behavior). PPBP is a process based on the overlapping of multiple bursts with heavy-tailed distributed lengths. Events in this process represent points of time at which one of an infinite population of users begins or stops transmitting a traffic burst. PPBP is closely related to the M/G/∞ queue model [START_REF] Zukerman | Internet traffic modeling and future technology implications[END_REF]. We use the PPBP implementation in ns-3 [START_REF] Ammar | PPBP in ns-3[END_REF][START_REF] Ammar | A new tool for generating realistic internet traffic in ns-3[END_REF]. In our configuration, the overall rate of PPBP traffic is 40 Mbps, which corresponds to 40% of ISPLink capacity. HAS module Emulated The ns-3 simulated TcpHas player we use is similar to the emulated player described in [START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF], with a chunk duration of 2 seconds and a playback buffer of 30 seconds (maximum video size the buffer can hold). Note that [START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF] compares four TCP variants and two routerbased traffic shaping methods, whereas the current article proposes a new congestion control to be executed on server. Our player is classified as Rate and Buffer based (RBB) player, following classification proposed in [START_REF] Yin | A control-theoretic approach for dynamic adaptive video streaming over HTTP[END_REF][START_REF] Yin | Toward a principled framework to design dynamic adaptive streaming algorithms over HTTP[END_REF]. Using buffer occupancy information is increasingly proposed and used due to its advantages for reducing stalling events. In addition, the bandwidth estimator we used consists in dividing the size of received chunk by its download duration. The buffer occupancy information is used only to define an aggressiveness level of the player, which allows the player to ask a quality level higher than the estimated bandwidth. The player uses HTTP GET requests to ask for each chunk. It has two phases: buffering and steady state. During buffering phase it fills up its playback buffer by asking for chunks of the lowest video quality level, each chunk immediately after the other. When the playback buffer fills up, the player switches to the steady state phase. In this phase, it asks for the next chunk of the estimated quality level each time the playback buffer occupancy drops for more than 2 seconds (i.e. 
it remains less than 28 seconds of video in the buffer). When the playback buffer is empty, the player re-enters the buffering phase. All tests use five video quality levels with constant encoding rates presented in Table 1, and correspond to the quality levels usually used by many video service providers. Since objective metrics are used, cf. section 4, and given that TCP use ensures that all packets arrive to the destination, the exact video type used does not influence results (in our case we used random bits for the video file). We also use the HTTP traffic generator module given in [START_REF] Cheng | HTTP traffic generator[END_REF][START_REF] Cheng | Transactional traffic generator implementation in ns-3[END_REF]. This module allows communication between two nodes using HTTP protocol, and includes all features that generate and control HTTP GET Request and HTTP response messages. We wrote additional code into this HTTP module by integrating the simulated HAS player. We call this implementation the HAS module, as presented in Fig. 2. Streaming is done from S f to C f , where 0 ≤ f < N. The round-trip propagation delay between S f and C f is 100 ms. We show results for TcpHas, Westwood+, TIBET and Trickle. We do not consider Westwood because Westwood+ is supposed to replace Westwood since it performs better in case of ACK compression and clustering. Concerning Trickle, it is a traffic shaping method that was proposed in the context of progressive download, as described in SubSection 2.2. In order to adapt it to HAS, we added to it the estimator of optimal quality level of TcpHas, the adaptive decrease mechanism of Westwood+ (the same as TIBET), and applied the Trickle traffic shaping based on the estimated optimal quality level. This HAS adaptation of Trickle is simply denoted by "Trickle" in the reminder of this article. For all evaluations, we use competing players that are playing simultaneously during K seconds. We set K = 180 seconds, which allows the HAS players to reach stationary behavior when they are competing for bandwidth [START_REF] Houdaille | Shaping HTTP adaptive streams for a better user experience[END_REF]. TcpHas Parameter Settings The parameter γ of the Bwe low-pass filter is constant, in conformity with Subsection 3.1. We set γ = 0.99 to reduce the oscillation of bandwidth estimations, Bwe, over time. We set initial cwnd = 2 × MSS. The initial bandwidth estimation value of Bwe is set to the highest encoding bitrate. If the first value was set to zero or to the first estimation sample, the low-pass filtering process with parameter γ = 0.99 would be too slow to reach the correct estimation. In addition, we want TcpHas to quickly reach the highest quality level at the beginning of the stream, as explained in Subsection 3.3. The parameter al pha of the TIBET estimation scheme (see Algorithm 2) is chosen empirically in our simulations and is set to 0.8. A higher value produces more stable estimations but is less responsive to network changes, whereas a lower value makes TcpHas more aggressive with a tendency to select the quality level that corresponds to the whole bandwidth (unfairness). The parameter ψ used for low-pass filtering the RTT measurements in Subsection 3.4 is set to 0.99. The justification for this value is that it reduces better the RTT fluctuations, and consequently reduces cwnd fluctuation during the congestion avoidance phase. TcpHas. Fig. 3 Quality level selection over time for the four methods compared. 
TcpHas Behavior Compared to the Other Methods. We present results for a scenario with 8 competing identical clients. Figure 3 shows the quality level selection over time of one of the competing players for the above methods. The optimal quality level that should be selected by the competing players is n°2. During the buffering phase, all players select the lowest quality level, as allowed by the slow start phase. However, during the steady-state phase the results diverge: the Westwood+ player frequently changes the quality level between n°0 and n°3, which means that the player not only produces an unstable HAS stream, but also runs a high risk of generating stalling events. The TIBET player is more stable and presents less risk of stalling events. The Trickle player has an improved performance and becomes more stable around the optimal quality level n°2, with some oscillations between quality levels n°1 and n°3. In contrast, the TcpHas player is stable at the optimal quality level during the steady-state phase, hence it performs the best among all the methods. Given that the congestion control algorithms of these four methods use the bandwidth estimation Bwe to set ssthresh, it is interesting to present the Bwe variation over time, shown in Figure 4. The optimal Bwe estimation should be equal to the bottleneck capacity (8 Mbps) divided by the number of competing HAS clients (8), i.e., 1 Mbps. For Westwood+, Bwe varies between 500 kbps and 2 Mbps. For TIBET, Bwe is more stable but varies between 1.5 Mbps and 2 Mbps, which is greater than the average of 1 Mbps; this means that an unfairness in bandwidth sharing occurred because this player is more aggressive than the other 7 competing players. For TcpHas, Bwe begins with the initial estimation that corresponds to the encoding bitrate of the highest quality level (4256 kbps), as described in Algorithm 6, then Bwe converges rapidly to the optimal estimation value of 1 Mbps. Both Trickle and TcpHas present a similar Bwe shape because they use the same bandwidth estimator. The dissimilarity between the four algorithms is more visible in Figure 5, which presents the variation of cwnd and ssthresh. Westwood+ and TIBET yield unstable ssthresh and even more unstable cwnd. In contrast, Trickle and TcpHas provide stable ssthresh values. For Trickle, cwnd is able to increase to high values during the congestion avoidance phase because Trickle limits the congestion window by setting an upper bound in order to have a sending bitrate close to the encoding bitrate of the optimal quality level. For TcpHas, ssthresh is stable at around 14 kB, which corresponds to the result of Equation 4 when QLevel = 2 and RTTmin = 100 ms. Besides, cwnd is almost in the same range as ssthresh and increases during ON periods because it takes into account the increase of RTT, as presented in Algorithm 7. QoE and QoS Metrics. This subsection describes the specific QoE and QoS metrics we selected to evaluate TcpHas objectively, and justifies their importance in our evaluation.
QoE Metrics. We use three QoE metrics described by formulas in [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF]: the instability of the video quality level [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF][START_REF] Jiang | Improving fairness, efficiency, and stability in HTTPbased adaptive video streaming with festive[END_REF] (0% means the same quality, 100% means the quality changes each period), the infidelity to the optimal quality level [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF] (percentage of time during which the optimal quality level is not used), and the convergence speed to the optimal quality level [8, 9, 21] (time to stabilize on the optimal quality and remain stable over at least 1 minute). The optimal quality level L_{C-S,opt} used in our evaluation is given by the highest encoding bitrate among the quality levels that is lower than or equal to the ratio between the bottleneck bandwidth, avail_bw, and the number of competing HAS players, N, as follows:

L_{C-S,opt} = \max_{0 \le L_{C-S} \le 4} \left\{ L_{C-S} \;\middle|\; \mathrm{EncodingRate}(L_{C-S}) \times 1.2 \le \frac{\mathrm{avail\_bw}}{N} \right\}  (6)

This formula applies to all the flows, i.e., we attach the same optimal quality level to all flows; this is because our focus is on fairness among flows. We acknowledge that this does not use the maximum achievable bandwidth in some cases; for example, for six clients sharing an 8 Mbps bottleneck link, the above formula gives 928 kbps for each client, and not 928 kbps for five clients and 1632 kbps for the sixth client (see Table 1 for the bitrates). We however noticed that TcpHas does maximize the bandwidth use in some cases, as presented in the next section. The fourth metric is the initial delay, a metric adopted by many authors [START_REF] Krogfoss | Analytical method for objective scoring of HTTP adaptive streaming (HAS)[END_REF][START_REF] Shuai | Olac: An open-loop controller for low-latency adaptive video streaming[END_REF], which accounts for the fact that the user dislikes waiting a long time before the beginning of the video display. The fifth metric is the stalling event rate; the user is highly disturbed when the video display is interrupted while concentrating on watching [START_REF] Hoßfeld | Initial delay vs. interruptions: Between the devil and the deep blue sea[END_REF]. We define the stalling event rate, StallingRate(K), as the number of stalling events during a K-second test duration, divided by K and multiplied by 100:

StallingRate(K) = \frac{\text{number of stalling events during } K \text{ seconds}}{K} \times 100  (7)

The greater the StallingRate(K), the greater the dissatisfaction of the user. A streaming technology must try as much as possible to have a zero stalling event rate.
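As a small illustration of Equations (6) and (7), the Python sketch below selects the optimal quality level from the Table 1 bitrates and computes the stalling event rate; it is our own toy code, with function names chosen for clarity.

```python
ENCODING_KBPS = [248, 456, 928, 1632, 4256]          # Table 1 quality levels 0..4

def optimal_quality_level(avail_bw_kbps, n_players, overhead=1.2):
    """Equation (6): highest level whose encoding rate (with 20% overhead)
    fits in the per-player share of the bottleneck bandwidth."""
    share = avail_bw_kbps / n_players
    feasible = [lvl for lvl, rate in enumerate(ENCODING_KBPS) if rate * overhead <= share]
    return max(feasible) if feasible else 0

def stalling_rate(n_stalls, K):
    """Equation (7): stalling events over a K-second test, scaled by 100."""
    return 100.0 * n_stalls / K

# Example from the text: 6 clients on an 8 Mbps bottleneck -> level 2 (928 kbps)
print(optimal_quality_level(8000, 6))   # -> 2
```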
QoS Metrics. We use four QoS metrics, described in the following. The first metric is the frequency of long OFF periods [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF], where a long OFF period is an OFF period whose duration exceeds the TCP retransmission timeout duration (RTO); such periods lead to a reduction of bitrate and potentially to a degradation of performance [START_REF] Ameur | Evaluation of gateway-based shaping methods for HTTP adaptive streaming[END_REF][START_REF] Ameur | Combining traffic shaping methods with congestion control variants for HTTP adaptive streaming[END_REF]. This metric is defined as the total number of such OFF periods divided by the total number of downloaded chunks of one HAS flow:

fr_{OFF} = \frac{\text{number of long OFF periods}}{\text{number of chunks}}  (8)

A high queuing delay is harmful to HAS and to real-time applications [START_REF] Yang | Opportunities and challenges of HTTP adaptive streaming[END_REF]. We noticed in our tests that this delay, and hence the RTT of the HAS flow, could vary considerably, so we use as the second metric the average queuing delay, defined as:

Delay_{C-S}(K) = RTT_{C-S,mean}(K) - RTT^{0}_{C-S}  (9)

where RTT_{C-S,mean}(K) is the average among all RTT_{C-S} samples of the whole HAS session between client C and server S for a K-second test duration, and RTT^{0}_{C-S} is the initial round-trip propagation delay between the client C and the server S. The congestion detection events greatly influence both the QoS and the QoE of HAS because the server decreases its sending rate at each such event. This event is always accompanied by a ssthresh reduction. Hence, we use a third metric, the congestion rate, which we define as the rate of congestion events detected on the server side:

CNG_{C-S}(K) = \frac{D^{ssthresh}_{C-S}(K)}{K} \times 100  (10)

where D^{ssthresh}_{C-S}(K) is the number of times ssthresh has been decreased for the C-S HAS session during the K-second test duration. The fourth metric we use is the average packet drop rate. The rationale is that the number of dropped packets at the bottleneck gives an idea of the congestion severity of the bottleneck link. We define this metric as the average packet drop rate at the bottleneck during a K-second test duration:

DropPkt(K) = \frac{\text{number of dropped packets during } K \text{ seconds}}{K} \times 100  (11)

Note that this metric is different from the congestion rate described above, because the TCP protocol at the server could detect a congestion event whereas there is no packet loss. Performance Evaluation. In this subsection we evaluate objectively the performance of TcpHas compared to Westwood+, TIBET and Trickle. For this, we give and comment on the results of the evaluation in two scenarios: when increasing the number of competing HAS flows in the home network and when increasing the background traffic in the access network. We use 16 runs for each simulation. We present the mean value for QoE and QoS among the competing players and among the number of runs for each simulation. We present the performance unfairness measurements among HAS clients with vertical error bars. We chose 16 runs because the relative difference between the mean values of instability and infidelity over 16 and 64 runs is less than 4%.
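Before presenting the results, the following toy Python function computes the four QoS metrics of Equations (8)-(11) from logged events; argument names are illustrative assumptions, not the simulator's variables.

```python
def qos_metrics(off_durations_s, rto_s, n_chunks, rtt_samples_ms, rtt0_ms,
                n_ssthresh_drops, n_dropped_pkts, K):
    """Toy computation of the four QoS metrics (Eqs. 8-11) from logged data."""
    long_off = sum(1 for d in off_durations_s if d > rto_s)
    fr_off = long_off / n_chunks                                        # Eq. (8)
    queuing_delay = sum(rtt_samples_ms) / len(rtt_samples_ms) - rtt0_ms  # Eq. (9)
    congestion_rate = 100.0 * n_ssthresh_drops / K                      # Eq. (10)
    drop_rate = 100.0 * n_dropped_pkts / K                              # Eq. (11)
    return fr_off, queuing_delay, congestion_rate, drop_rate
```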
Effect of Increasing the Number of HAS Flows. Here, we vary the number of competing players from 1 to 9. We select a maximum of 9 competing HAS clients because in practice the number of users inside a home network does not exceed 9. QoE results are given in Figure 6. In this Figure, the lowest instability rate is that of TcpHas (less than 4%), with a negligible instability unfairness between players. Trickle shows a similar instability rate when the number of competing players is between 4 and 7, but in the other cases it has a high instability rate; the cause is that Trickle does not take into consideration the reduction of cwnd during OFF periods, which causes a low sending rate after each OFF period. Hence, Trickle is sensitive to long OFF periods: we can see in the Figure a correlation between instability and the frequency of long OFF periods. In contrast, the instability of Westwood+ and TIBET is much greater and increases with the number of competing players. The infidelity and convergence speed of TcpHas are satisfactory, as presented in the Figure: the infidelity rate is less than 30% and the convergence speed is smaller than 50 seconds in all but two cases. When there are 5 or 9 competing HAS clients, TcpHas selects a quality level higher than the optimal quality level that we defined (Equation 6); TcpHas is thus able to select a higher quality level, converge to it, and be stable on it for the whole duration of the simulation. This result is rather positive, because TcpHas is able to maximize the occupancy of the home bandwidth to almost 100% in these two particular cases. In contrast, Westwood+ and TIBET present high infidelity to the optimal quality level and have difficulties converging to it. For Trickle, due to its traffic shaping algorithm, the infidelity rate is lower than 45%, and lower than 25% when the frequency of long OFF periods (hence the instability rate) is low. Fig. 6 Values of QoE metrics when increasing the number of competing HAS clients for the four methods compared. The initial delay of the four methods increases with the number of competing HAS clients, as presented in Figure 6. The reason is that during the buffering phase the HAS player asks for chunks successively, and when the number of competing HAS clients increases, the bandwidth share of each flow decreases, thus generating additional delay. We also notice that Westwood+, TIBET and TcpHas present initial delays in the same range of values. However, Trickle has a lower delay; the reason is that, as shown in Figures 4 and 5, during the buffering state Trickle is able to maintain the initial bandwidth estimation that corresponds to the encoding bitrate of the highest quality level and does not provoke congestions. In other words, Trickle is able to send video chunks with a high sending bitrate without causing congestions. This leads to the reduction of the initial delay. Finally, Figure 6 shows that TcpHas and Trickle generate no stalling events, whereas Westwood+ and TIBET do starting from 7 competing HAS clients. The result for TcpHas and Trickle comes from their high stability rate even for a high number of competing HAS clients. QoS results are given in Figure 7. It shows that the frequency of long OFF periods of TcpHas is kept near zero and much lower than for Westwood+, TIBET and Trickle, except in the case where the home network has only one HAS client. In this case, the optimal quality level is n°4, whose encoding rate is 4.256 Mbps. Hence, the chunk size is equal to 8.512 Mbits. Consequently, when TcpHas shapes the sending rate according to this encoding rate while delivering chunks of large size, it is difficult to reduce OFF periods below the retransmission timeout duration, RTO. Note that we have taken this case into account when proposing TcpHas by eliminating the re-initialization of cwnd after idle periods, as explained in Algorithm 5, to preserve high QoE.
As presented in Figure 7, although the queuing delay of the four methods increases with the number of competing HAS clients, TcpHas and Trickle present a lower queuing delay than Westwood+ and TIBET. The reason is that both TcpHas and Trickle shape the HAS flows by reducing the sending rate of the server, which reduces queue overflow in the bottleneck. Additionally, we observe that TcpHas reduces the queuing delay better than Trickle; TcpHas has roughly half the queuing delay of Westwood+ and TIBET. Besides, TcpHas does not increase its queuing delay by more than 25 ms even for 9 competing players, while Trickle increases it to about 50 ms. This result is mainly due to the high stability of the HAS quality level generated by TcpHas, which offers better fluidity of HAS flows inside the bottleneck. The same reason applies to the very low congestion detection rate and packet drop rate at the bottleneck of TcpHas, given in Figure 7. Furthermore, the congestion rate of Trickle is correlated to its frequency of long OFF periods; this means that the ssthresh reduction of Trickle is principally caused by the detection of OFF periods. In addition, due to its traffic shaping method that reduces the sending rate of the HAS server, the packet drop rate of Trickle is quite similar to that of TcpHas, as shown in Figure 7. To summarize, TcpHas is not affected by the increase of the number of competing HAS clients in the same home network. From the QoE point of view, it preserves high stability and high fidelity to the optimal quality level, and has a tendency to increase the occupancy of the home bandwidth. From the QoS point of view, it maintains a low OFF period duration, a low queuing delay, and a low packet drop rate. Background Traffic Effect. Here we vary the burst arrival rate λp of the Poisson Pareto Burst Process (PPBP) that simulates the traffic crossing the Internet Router (IR) and the DSLAM from 10 to 50. Table 2 shows the percentage of occupancy of the 100 Mbps ISP network (ISPLink in our network) for each selected λp value (without counting HAS traffic). Hence, we simulate a background traffic in the ISP network ranging from 20% to 100% of the network capacity. In our simulations, we used two competing HAS clients inside the same home network. QoE results are given in Figure 8. It shows that the instability, infidelity, convergence speed and stalling event rate curves of TcpHas present two different regimes. When λp < 35, TcpHas keeps practically the same satisfying values, much better than Westwood+, TIBET and Trickle. However, when λp > 35, the four measurements degrade suddenly and stabilize around the same high values; for infidelity and stalling rate, TcpHas even yields worse values than Westwood+, TIBET and Trickle. We deduce that TcpHas is sensitive to the additional load of the ISP network, and could be more harmful than the three other methods. Trickle presents relatively better performance than Westwood+ and TIBET in terms of average values. However, it presents higher unfairness between clients, as shown by its large vertical error bars. In addition, we observe that TcpHas presents the same initial delay as the other methods, which is around 3 seconds, does not exceed 5 seconds and does not disturb the user's QoE. QoS results are presented in Figure 9. Westwood+, TIBET and Trickle present a high frequency of long OFF periods, which decreases when increasing the ISP network load, whereas TcpHas presents a low OFF frequency.
The average queuing delay generated by TcpHas is lower than that of Westwood+, TIBET and Trickle for λp < 40. The reason for this is visible in the Figure: the congestion detection rate increases with λp (especially above 40), while the packet drop rate at the bottleneck remains null for TcpHas. Hence, we deduce that the bottleneck is no longer located in the link between the DSLAM and the IR, but has rather moved inside the loaded ISP link. To summarize, TcpHas yields very good QoE and QoS results when the ISP link is not too loaded. Beyond 70% load of the ISP link, the congestion rate increases, which degrades the QoS and forces TcpHas to frequently update its estimated optimal quality level, which in turn degrades the QoE. Conclusion. This paper presents and analyses server-based shaping methods that aim to stabilize the video quality level and improve the QoE of HAS users. Based on this analysis, we propose and describe TcpHas, a HAS-based TCP congestion control that acts like a server-based HAS traffic shaping method. It is inspired by the TCP adaptive decrease mechanism and uses the end-to-end bandwidth estimation of TIBET to estimate the optimal quality level. Then, it shapes the sending rate to match the encoding bitrate of the estimated optimal quality level. The traffic shaping process is based on updating ssthresh when detecting a congestion event or after an idle period, and on modifying cwnd during the congestion avoidance phase. We evaluate TcpHas in the case of HAS clients that share the bottleneck link and are competing in the same home network under various conditions. Simulation results indicate that TcpHas considerably improves both HAS QoE and network QoS. Concerning QoE, it offers high stability, high fidelity to the optimal quality level, a rapid convergence speed, and an acceptable initial delay. Concerning QoS, it reduces the frequency of large OFF periods that exceed the TCP retransmission timeout, reduces queuing delay, and reduces considerably the packet drop rate in the shared bottleneck queue. TcpHas performs well when increasing the number of competing HAS clients and does not cause stalling events. It shows excellent performance for lightly and moderately loaded ISP networks. As future work, we plan to implement TcpHas in real DASH servers of a video content provider, and to offer a large-scale evaluation during a long duration of tests in real and variable network conditions, when hundreds of DASH players located in different access networks are asking for video content.
Fig. 1 TCP parameter setting process for quality level selection.
Algorithm 2 (TIBET bandwidth estimation):
1: if ACK is received then
2:   sample_length = acked × packet_size × 8
3:   sample_interval = now - last_ack_time
4:   Average_packet_length = alpha × Average_packet_length + (1 - alpha) × sample_length
5:   Average_interval = alpha × Average_interval + (1 - alpha) × sample_interval
6:   Bwe = Average_packet_length / Average_interval
7: end if
Fig. 2 Network architecture used in ns-3 for the evaluation.
Fig. 4 Estimated bandwidth Bwe over time for the four methods compared.
Fig. 5 cwnd and ssthresh values over time for the four methods compared.
Fig. 7 Values of QoS metrics when increasing the number of competing HAS clients for the four methods compared.
Fig. 8 Values of QoE metrics when increasing the burst arrival rate for the four methods compared.
Fig. 9 Values of QoS metrics when increasing the burst arrival rate for the four methods compared.
Table 1 Available encoding bitrates for the video file used in simulations.
Video quality level L_{C-S}:   0   | 1   | 2   | 3    | 4
Encoding bitrate (kbps):       248 | 456 | 928 | 1632 | 4256
Table 2 ISP network load when varying the burst arrival rate λp.
λp:                    10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50
ISP network load (%):  20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100
(Figure 8 panels: instability IS (%) and infidelity IF (%) versus the PPBP burst arrival rate λp, for Westwood+, TIBET, TcpHas and Trickle.)
69,816
[ "974245", "3628" ]
[ "531459", "491312", "105160", "866" ]
01625993
en
[ "sdv", "info" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01625993/file/article.pdf
Benjamin Béouche-Hélias David Helbert email: david.helbert@univ-poitiers.fr Cynthia De Malézieu Nicolas Leveziel Christine Fernandez-Maloigne Neovascularization Detection in Diabetic Retinopathy from Fluorescein Angiograms Keywords: Diabetic retinopathy, neovascularization, classification, anti-VEGF, diabetes Even if a lot of work has been done on Optical Coherence Tomography (OCT) and color images in order to detect and quantify diseases such as diabetic retinopathy, exudates or neovascularizations, none of these works is able to evaluate the diffusion of the neovascularizations in retinas. Our work has been to develop a tool able to quantify a neovascularization and the fluorescein leakage during an angiography. The proposed method has been developed following a clinical trial protocol; images are taken by a Spectralis (Heidelberg Engineering). Detections are done using a supervised classification with specific features. Images and their detected neovascularizations are then spatially matched by an image registration. We compute the expansion speed of the liquid, which we call the diffusion index. The latter characterizes the state of the disease, indicates the activity of the neovascularizations and allows a follow-up of patients. The method proposed in this paper has been built to be robust, even in the presence of laser impacts, in order to compute a diffusion index. Introduction The detection and follow-up of diabetic retinopathy, an increasingly important cause of blindness, is a public health issue. Indeed, loss of vision can be prevented by early detection of diabetic retinopathy and increased monitoring by regular examination. There are now many algorithms for the automatic detection of common anomalies of the retina (microaneurysms, haemorrhages, exudates, spots, ...). However, very little research has been done on the detection of a major pathology, which is neovascularization, corresponding to the growth of new blood vessels due to a large lack of oxygen in the retinal capillaries. Our work has not been to substitute the manual detections of experts but to help them by suggesting which areas of the retina could or could not be considered as having neovascularizations (NVs) and by providing quantitative and qualitative information on proliferative diabetic retinopathy, such as the area of the NV, its location with respect to the optic nerve, and the activity of the NV (diffusion index). The main goal has been to provide a diffusion index of the injected fluorescent liquid, which indicates the severity of the pathology, to follow the patient over the years. Diabetic retinopathy is one of the leading causes of visual impairment worldwide, due to the increasing incidence of diabetes. Proliferative diabetic retinopathy (PDR) is defined by the outgrowth of preretinal vessels leading to retinal complications, i.e. intravitreous hemorrhages and retinal detachments. Today, laser photocoagulation is the standard-of-care treatment of proliferative diabetic retinopathy, leading to a decrease of growth factor secretion in photocoagulated areas of the retina. Vascular endothelial growth factor (VEGF) is responsible for the growth of healthy vessels, but also of the NVs due to diabetes. Research is active on finding a specific type of anti-VEGF that could stop the growth of the NVs specifically. A clinical trial (ClinicalTrials.gov Identifier: NCT02151695), called "Safety and Efficacy of Aflibercept in Proliferative Diabetic Retinopathy", is in progress at the CHU of Poitiers, testing the effects of a specific anti-VEGF: Aflibercept.
This drug has been approved by the European Medicines Agency (EMA) and the United States Food and Drug Administration (FDA) for the treatment of exudative age-related macular degeneration, another retinal disease characterized by choroidal new vessels. The aim of this pilot study is to evaluate the efficacy and the safety of Aflibercept intravitreal injections compared to panretinal photocoagulation for proliferative diabetic retinopathy. In ophthalmology, the majority of the works on retinal diseases concerns the detection of exudates, [START_REF] Zhang | Exudate detection in color retinal images for mass screening of diabetic retinopathy[END_REF][START_REF] Niemeijer | Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for early diagnosis of diabetic retinopathy[END_REF][START_REF] Abramoff | Automated detection of diabetic retinopathy: barriers to translation into clinical practice[END_REF][START_REF] Osareh | Automated identification of diabetic retinal exudates in digital colour images[END_REF][START_REF] Imani | Fully automated diabetic retinopathy screening using morphological component analysis[END_REF] the segmentation of healthy vessels, [START_REF] Staal | Ridgebased vessel segmentation in color images of the retina[END_REF][START_REF] Mendona | Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction[END_REF][START_REF] Nguyen | An effective retinal blood vessel segmentation method using multi-scale line detection[END_REF] and the detection of the optic disc, [START_REF] Sinthanayothin | Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images[END_REF][START_REF] Duanggate | Parameter-free optic disc detection[END_REF] but none of them addresses the detection of proliferative diabetic retinopathy within angiograms. Some works have also been done on image registration for retinal images. Can et al. [START_REF] Can | A featurebased, robust, hierarchical algorithm for registering pairs of images of the curved human retina[END_REF] have proposed the registration of a pair of retinal images by using branching points and cross-over points in the vasculature. Zheng et al. have developed in [START_REF] Zheng | Salient feature region: a new method for retinal image registration[END_REF] a registration algorithm using salient feature region description. Legg et al. have illustrated in [START_REF] Legg | Improving accuracy and efficiency of mutual information for multi-modal retinal image registration using adaptive probability density estimation[END_REF] the efficiency of mutual information for the registration of fundus colour photographs and colour scanning laser ophthalmoscope images. A few steps are needed to compute the diffusion index. It is a growth in time, which means that we have to detect and quantify the pathology at both injection times and compare the corresponding areas. As it is nearly impossible to have exactly the same conditions during the acquisition (eye movements, focus of the camera, angle), we need an image registration to estimate the deformations and to correctly spatially correlate NVs. For the segmentation, we used a supervised classification by Random Forests [START_REF] Breiman | Random forests[END_REF] using intensity, textural and contextual features, and a database of trained images from the clinical trials. These steps are shown in Fig. 1. The paper is organized as follows.
In section 2 we present the microscope and the acquisition protocol. The image registration on both injection times is proposed in section 3. We then propose a novel neovascularization detection method in section 4 and an automatic diffusion index computation in section 5. Materials Our database is made of images taken by the Spectralis (Heidelberg Engineering) microscope with an ultra-widefield lens covering a 102° field of view. It delivers undistorted images of a great part of the retina, making the detection easier and allowing abnormal peripheral changes to be monitored. Images are in gray levels; the areas that are bright are mainly 1/ those leaking during fluorescein angiography due to NV, 2/ normal retinal vessels or 3/ laser impacts. Some images were taken from patients who had been treated by laser photocoagulation, visible on Fig. 2. These impacts are troublesome because some of them are also very bright. The blood still spreads through some impacts, which can be big enough to be wrongly assimilated to a NV. To qualify the PDR by the leakage index, different acquisition times during fluorescein angiography were used. The protocol presented below is the clinical trial's protocol, which is composed of two image acquisitions: 1. fluorescein injection into the patient's arm; 2. acquisition at the early injection time (t 0 ); 3. acquisition at the late injection time (t f ). The few minutes left between the different acquisition times allow visualization of the fluorescein leakage, defined as a progressive increase of the NV area with blurring edges of the NV. No leakage is observed on normal retinal vessels. On Fig. 3 we can see pictures of the same eye with acquisitions at t 0 and t f , where we can see the fluorescein spreading first into arteries and then leaking into neovascularizations. As images are taken three minutes apart, some spatial differences occur and we need to spatially correlate both images with an image registration, which is presented in the next part. 15 diabetic patients were included in the analysis and ophthalmologists have identified 60 different NVs from fluorescein angiographies on wide field imaging. Image registration The image registration does not aim to be perfect but to allow spatial comparison between NVs taken in both images to compute quantitative data. The best registration model would be a local method but, for the reason just explained, a global method is widely sufficient for the comparison. Some methods are very popular and have been tested by many experts, like Scale Invariant Feature Transform (SIFT), [START_REF] Lowe | Distinctive image features from scaleinvariant keypoints[END_REF] Maximally Stable Extremal Regions (MSER) [START_REF] Forssen | Shape descriptors for maximally stable extremal regions[END_REF] or Speeded Up Robust Features (SURF). [START_REF] Bay | Surf : speeded up robust features[END_REF] We found that SIFT was robust and fast enough for the deformations we have on images. Constraints Images are taken with a manually movable camera with no spatial landmarks to help. Moreover the eye of the patient slightly moves between each capture, even when focusing in a specific direction, which means that the images for the two injection times can be geometrically different, with translations (x and y), scaling (z) and some small rotations (θ). Furthermore, tissues on the retina can be slightly different over time, depending on several biological factors, like the heat, the light or the blood flow. We then have global and local geometrical deformations.
The brightness of the images mainly depends on the diffusion of the fluorescent liquid injected in the patient. Some tissues will appear more or less bright between both images and sometimes will simply be present or not on them. For example, healthy arteries will appear darker on the late injection time because the liquid first floods into them (t 0 ) and then spreads into different tissues like neovessels (t f ). That is why NVs appear brighter and are easier to detect on t f . We finally have global colorimetry changes, which impact the general contrast of the image, and very local changes. Deformation computation The first steps are the extraction and the description of keypoints on the image. These keypoints need to be invariant to image scaling and rotation, and mostly invariant to changes in illumination; they also need to be highly distinctive by their description to be matched further. To match the extracted keypoints, we use the brute force method. For each keypoint we take the two nearest points in terms of Euclidean distance and we only keep those for which the distance to the first nearest point is less than 0.8 times the distance to the second nearest neighbor (as proposed in [START_REF] Lowe | Distinctive image features from scaleinvariant keypoints[END_REF] ). The deformation matrix is finally computed by the RANSAC algorithm (Random Sample Consensus). [START_REF] Fischler | Random sample consensus : A paradigm model fitting with applications to image analysis and automated cartography[END_REF] Results and discussion We know that the deformations we have between both images are relatively small. Even with small movements from the eye or the camera, the lens used takes images wide enough to avoid big deformations, because it makes the margin of movement very small, so we removed matching points that obviously are too far from each other. In accordance with the experts and the visualization of the different images, we set the distance threshold to a ratio (r) of the diagonal of the image, where r is a constant that can be adjusted depending on the strength of the deformations. For example, you can set r to 0.5 if you want to set the threshold at half the length of the diagonal. We can see on Fig. 4 that the registration process works well. It is still a global image registration that could be more precise with a specific local non-rigid algorithm, but the aim is to pair NVs and be spatially correct when comparing the leakage areas, so we do not need to have a perfect registration. Once the image registration is done, we can process both images. The segmentation method we used is explained in the next part.
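Before moving to the segmentation, here is a minimal Python/OpenCV sketch of the registration pipeline just described (SIFT keypoints, Lowe's 0.8 ratio test, distance filtering based on a fraction of the image diagonal, and a RANSAC-estimated transform). It is our own illustration, not the authors' code; in particular the choice of a partial affine (similarity) model is an assumption, since the text only specifies a global deformation matrix estimated by RANSAC.

```python
import cv2
import numpy as np

def register(img_tf, img_t0, ratio=0.8, max_dist_ratio=0.5):
    """Estimate a global transform mapping the t_f image onto the t_0 image."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_tf, None)
    k2, d2 = sift.detectAndCompute(img_t0, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)              # brute-force matching
    good = []
    diag = np.hypot(*img_t0.shape[:2])
    for m, n in matcher.knnMatch(d1, d2, k=2):        # two nearest neighbors
        if m.distance < ratio * n.distance:           # Lowe's ratio test (0.8)
            p1, p2 = k1[m.queryIdx].pt, k2[m.trainIdx].pt
            if np.hypot(p1[0] - p2[0], p1[1] - p2[1]) < max_dist_ratio * diag:
                good.append((p1, p2))                 # reject implausibly large motions
    src = np.float32([p for p, _ in good]).reshape(-1, 1, 2)
    dst = np.float32([q for _, q in good]).reshape(-1, 1, 2)
    # RANSAC-estimated similarity transform (translation, scale, small rotation)
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M

# warped_tf = cv2.warpAffine(img_tf, register(img_tf, img_t0), img_t0.shape[::-1])
```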
Neovascularizations detection Principle The aim of a supervised classification is to set the rules that will let the algorithm classify objects into classes from features describing these objects. The algorithm is first trained with a portion of the available data to learn the classification rules (see Fig. 5). As supervised classification tends to give better results when it is possible to have a good training database, we chose to use Random Forests of decision trees [START_REF] Breiman | Random forests[END_REF] (RF), a supervised classification algorithm that gives good results even with a small database. The noise is very high in most images, notably the laser impacts that some patients can have (see Fig. 2). These impacts share some properties with NVs, like a very high brightness, and sometimes have the same shape and size. Some noise is also due to the acquisition itself: eyelashes can blur a part of the images and the automated settings of the camera can lead to more or less blur, just as examples. Classification Algorithm 4.2.1 Features Supervised classification can be used with as many features as we want, but can be poor if too many bad features are used. To prune the features, it is possible to use multiple features and try to see which are the best by some tests. Once the most important features are found, one can decide whether to get rid of the other features or not, depending on the needs in terms of accuracy and computation time. Note that images are only in gray levels. We chose to take just enough features to prune our selection, because our database is not big enough to take on many features and still be a good predictor. NVs being very bright, we chose to have several features based on the intensity; we also add textural features and one contextual feature, as listed below. Intensity Because leakages are bright, we put a lot of weight onto the features based on the intensity: mean, maximum and minimum intensity in the neighborhood. We also take into account the single value of the classified pixel. The values are normalized. Mean, maximum and minimum values are computed in a 5 × 5 and a 9 × 9 neighborhood, which leads to six features. Texture Texture can be a discriminator between laser impacts and NVs because laser impacts are more likely to be heterogeneous than NVs. For that, we calculate the variance on a 5 × 5 and a 9 × 9 neighborhood. We compute an isotropic gradient with a 3×3 and a 5×5 Sobel operator. We add some of Haralick's texture features: angular second moment, contrast and correlation. [START_REF] Robert | Textural features for image classification[END_REF] Contextual Contextual features are very important because the intensity is often not enough and is very sensitive to the noise. We add a vessel segmentation to our process, which we translate into a feature. Healthy vessels could sometimes be classified as NVs if we only took into account the intensity features, because they are very similar. We base our method on the one proposed in [START_REF] Nguyen | An effective retinal blood vessel segmentation method using multi-scale line detection[END_REF]. It is a morphological segmentation based on the width and the homogeneity of the vessels, weighted by the luminance. See figure 6. With our dataset, the importance of the proposed features is listed in Fig. 7 (values have been rounded for visibility). The most important features are the minimum intensity and the mean intensity in the 9 × 9 neighborhood. As expected, the intensity of the classified pixel alone is a poor feature because several noise sources are also bright (e.g. laser impacts and healthy vessels). Fig. 8 is an example of a classification with the Random Forests algorithm. Compared to the ground truth, the true positives are in green, false positives in red, false negatives in blue and true negatives in white.
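The following Python sketch illustrates the per-pixel feature extraction and Random Forest training described above; it is a simplified stand-in (only a subset of the features, and no Haralick or vessel-map feature), with names chosen by us.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter, sobel
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Subset of the per-pixel features described above (intensity + texture)."""
    img = img.astype(np.float32) / 255.0
    feats = [img]
    for w in (5, 9):
        mean = uniform_filter(img, w)
        feats += [mean, maximum_filter(img, w), minimum_filter(img, w),
                  uniform_filter(img ** 2, w) - mean ** 2]   # local variance
    feats.append(np.hypot(sobel(img, 0), sobel(img, 1)))     # gradient magnitude
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_rf(images, masks, n_trees=100):
    X = np.vstack([pixel_features(im) for im in images])
    y = np.concatenate([m.ravel().astype(int) for m in masks])   # 1 = NV pixel
    return RandomForestClassifier(n_estimators=n_trees, n_jobs=-1).fit(X, y)

# proba = rf.predict_proba(pixel_features(test_img))[:, 1].reshape(test_img.shape)
# nv_mask = proba > 0.9   # probability threshold lambda, cf. the evaluation below
```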
Post processing Because it is a pixel-wise classification, it is not sufficient by itself to obtain compact and filled regions, so we added a few post-processing steps. As the leakage is almost isotropic in the vitreous, it is correct to compare the leakage with a cloud that is more or less dense but mostly filled (i.e. without holes). Classification sometimes gives regions with little holes that can easily be filled with a closing operation by mathematical morphology. Moreover, some thin line detections can happen on laser impact edges or healthy vessels, for example; we can remove them with an opening operation. Thus, after the classification, a morphological closing and a morphological opening are directly applied to fill the holes of the detected NVs and to remove thin false detections. Results and discussion The Random Forests algorithm gives a probability for each pixel to belong to the class "NV" or to the class "other". Because the results may vary depending on the probabilities, we tried the algorithm with different probability thresholds (λ). Results are obtained using a cross-validation process on our database. For each image, the training set is composed of all the data except those extracted from the current image. In this way the data of the image are not taken into account for the training and the statistical model is not wrongly influenced. As results, we compare expert manual and automated segmentations to classify the resulting pixels into four classes: true positive (TP), false positive (FP), true negative (TN) and false negative (FN). Given these classes, we can calculate the sensibility (S), the specificity (Sp) and the pixel prediction value (PPV) as follows:

S = \frac{TP}{TP + FN}  (1)

Sp = \frac{TN}{FP + TN}  (2)

PPV = \frac{TP}{TP + FP}  (3)

However, NVs are mainly small compared to the size of the image, which results in a big disparity between the numbers of positive and negative pixels. The specificity is then always very close to 1 because the number of pixels belonging to the background is too large compared to the positives, so we do not report this metric in our results. Detection at t f Results of the detection for the t f images are given in figure 9. We can see that the pixel prediction value is strongly influenced by the probability threshold λ, whereas the sensibility is less influenced. A λ under 0.8 gives a good detection of the NVs (high S, but very poor PPV). When λ is above 0.8, S decreases a bit but stays very high, whereas PPV becomes more reliable around a λ of 0.8 and becomes > 90% for a λ superior to 0.9. Detection at t 0 Results of the detection for the t 0 images are given in figure 10. They are not as high as for the t f images, as expected, because it is not easy to distinguish NVs from healthy vessels before most of the leakage has spread. As for the t f images, PPV is very poor below a λ of 0.8 and becomes very high above. The problem is that above this threshold, S decreases more than expected, down to 60% for a λ of 0.99.
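A toy Python function computing S and PPV (Eqs. 1 and 3) for a given probability threshold λ, as swept in Figures 9 and 10, could look like the sketch below; array names are ours.

```python
import numpy as np

def s_ppv(prob_map, ground_truth, lam):
    """Sensibility and pixel prediction value for one probability threshold."""
    pred = prob_map > lam
    gt = ground_truth.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    s = tp / (tp + fn) if tp + fn else 0.0      # Eq. (1)
    ppv = tp / (tp + fp) if tp + fp else 0.0    # Eq. (3)
    return s, ppv

# for lam in np.arange(0.5, 1.0, 0.05):
#     print(lam, s_ppv(proba, gt_mask, lam))
```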
5 Diffusion index Methodology and Results The diffusion index has to give an indication about the severity of the diabetic retinopathy, which means that it has to compare two volumes of spread liquid. As we only work with two-dimensional images, we can only assume that the spreading is isotropic and that an index computed only from the surface is enough to tell the strength of the leakage. Figure 11 recalls the processing: we detect the NV surfaces at time t f and, inside these surfaces, we detect the NV surfaces at time t 0 . The diffusion index is then computed by the differentiation of the NV areas at t 0 and at t f . Results and discussion The detection of NVs in the t 0 and t f images is quite complex and really depends on many parameters. The parameters are linked to the fact that the eye of the patient moves between each capture and the images between two injections can be geometrically different. Computed diffusion indices are close to the ground truth (cf. Tab. 1); indeed the mean error is only 0.01, i.e. about 0.5%. Moreover the retina can be slightly different over time, depending on biological factors, and the healthy arteries appear darker or brighter according to the time of the capture. In our experience, for neovascularization of diabetic retinopathy, the algorithm shows sensibility and pixel prediction values that are effective to describe lesions. The detection of NVs in the t 0 images is quite complex and really depends on many parameters. We obtain a low Mean Square Error for a probability λ equal to 0.8. Conclusion We propose to compute diffusion indices after detecting neovascularizations in noisy angiogram images at initial time t 0 and at final time t f . First we extract the NV areas at time t f and we use these areas to detect the NV areas at time t 0 . We also need to register images between the two acquisitions; we choose to detect interest points using SIFT and we estimate the geometrical transformation for each neovascularization. To detect neovascularizations, we learn features which characterize the NV. We chose a random forest of trees; this approach gives good detection results and the computed diffusion index is close to the ground truth. A clinical study comparing this algorithm and the manual method is now necessary to permit the evaluation of clinical effectiveness and to propose a software solution for the ophthalmologists.
Fig 1: Example of a full detection by our method.
Fig 2: Image of an angiogram taken with the Heidelberg Spectralis. Laser impacts are present all over the image, some examples are highlighted in red.
Fig 3: Retina acquired at initial time t 0 and final time t f . (a) Image of a retina at time t0. (b) Zoom on a NV at time t0. (c) Image of a retina at time t f . (d) Zoom on a NV at time t f .
Fig 4: Registration of the NVs from a t f image to a t 0 image. (a) Detected NVs on t f . (b) NVs from t f after image registration on t0. (c) Example of NVs with their bounding boxes. Yellow box is the original box, the green box is the deformed box.
Fig 6: Detection of the healthy vessels.
Fig 7: List of the feature importance.
Fig 8: Result of the RF classification on an image. Green regions represent the true positives, red regions the false positives and blue the false negatives.
Fig 9: Sensibility (blue dots) and Pixel Prediction Value (red squares) results for the classification on t f depending on the λ used for the probabilities.
Fig 10: Sensibility (blue dots) and Pixel Prediction Value (red squares) results for the classification on t 0 depending on the λ used for the probabilities.
Fig 11: Methodology of diffusion index computation.
Fig 12: Mean Square Errors for each computed diffusion index according to the probability threshold λ used for the classification.
Table 1: Diffusion index results.
        Ground Truth | Automated | Difference
Mean    2.09         | 2.10      | 0.01
σ       0.33         | 0.54      | 0.52
Acknowledgments This research is financially supported by the clinical trial (ClinicalTrials.gov Identifier: NCT02151695) called "Safety and Efficacy of Aflibercept in Proliferative Diabetic Retinopathy" in progress at the CHU of Poitiers, France. The authors state no conflict of interest and have nothing to disclose.
24,610
[ "5497", "6283" ]
[ "300170", "444300", "54493", "300170", "300170", "444300", "54493" ]
01756957
en
[ "phys", "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756957/file/1-hal-preprints.pdf
Laurent Perrier Gilles Pijaudier-Cabot David Grégoire email: david.gregoire@univ-pau.fr G Pijaudier Extended poromechanics for adsorption-induced swelling prediction in double porosity media: modeling and experimental validation on activated carbon Keywords: Adsorption, swelling, double porosity media, poromechanical modelling Natural and synthesised porous media are generally composed of a double porosity: a microporosity where the fluid is trapped as an adsorbed phase and a meso or a macro porosity required to ensure the transport of fluids to and from the smaller pores. Zeolites, activated carbon, tight rocks, coal rocks, source rocks, cement paste or construction materials are among these materials. In nanometer-scale pores, the molecules of fluid are confined. This effect, denoted as molecular packing, induces that fluid-fluid and fluid-solid interactions sum at the pore scale and have significant consequences at the macroscale, such as instantaneous deformation, which are not predicted by classical poromechanics. If adsorption in nanopores induces instantaneous deformation at a higher scale, the matrix swelling may close the transport porosity, reducing the global permeability of the porous system. This is important for applications in petroleum oil and gas recovery, gas storage, separation, catalysis or drug delivery. This study aims at characterizing the influence of an adsorbed phase on the instantaneous deformation of micro-tomacro porous media presenting distinct and well-separated porosities. A new incremental poromechanical framework with varying porosity is proposed allowing the prediction of the swelling induced by adsorption without any fitting parameters. This model is validated by experimental comparison performed on a high micro and macro porous activated carbon. It is shown also that a single porosity model cannot predict the adsorption-induced strain evolution observed during the experiment. After validation, the double porosity model is used to discuss the evolution of the poromechanical properties under free and constraint swelling. Introduction Following the IUPAC recommendation [START_REF] Sing | Reporting physisorption data for gas/solid systems with special reference to the deter-mination of surface area and porosity[END_REF][START_REF] Thommes | Physisorption of gases, with special reference to the evaluation of surface area and pore size distribution (IUPAC Technical Report)[END_REF], the pore space in porous materials is divided into three groups according to the pore size diameters: macropores of widths greater than 50 nm, mesopores of widths between 2 and 50 nm and micropores (or nanopores) of widths less than 2 nm. Zeolites, activated carbon, tight rocks, coal rocks, source rocks, cement paste or construction materials are among these materials. In recent years, a major attention has been paid on these microporous materials because the surface-to-volume ratio (i.e., the specific pore surface) increases with decreasing characteristic pore size. Consequently, these materials can trap an important quantity of fluid molecules as an adsorbed phase. This is important for applications in petroleum and oil recovery, gas storage, separation, catalysis or drug delivery. For these microporous materials, a deviation from standard poromechanics [START_REF] Biot | General theory of three-dimensional consolidation[END_REF][START_REF] Coussy | Poromechanics[END_REF], is expected. In nanometer-scale pores, the molecules of fluid are confined. 
This effect, denoted as molecular packing, induces that fluid-fluid and fluid-solid interactions sum at the pore scale and have significant consequences at the macroscale, such as instantaneous deformation. A lot of natural and synthesised porous media are composed of a double porosity: the microporosity where the fluid is trapped as an adsorbed phase and a meso or a macro porosity required to ensure the transport of fluids to and from the smaller pores. If adsorption in nanopores induces instantaneous deformation at a higher scale, the matrix swelling may close the transport porosity, reducing the global permeability of the porous system or annihilating the functionality of synthesised materials. In different contexts, this deformation may be critical. For instance, in situ adsorption-induced coal swelling has been identified [START_REF] Larsen | The effects of dissolved CO2 on coal structure and properties[END_REF][START_REF] Pan | A theoretical model for gas adsorption-induced coal swelling[END_REF][START_REF] Sampath | Ch4co2 gas exchange and supercritical co2 based hydraulic fracturing as cbm production-accelerating techniques: A review[END_REF] as the principal factor leading to a rapid decrease in CO 2 injectivity during coal bed methane production enhanced by CO 2 injection. Conversely, gas desorption can lead to matrix shrinkage and microcracking, which may help oil and gas recovery in the context of unconventional petroleum engineering [START_REF] Levine | Model study of the influence of matrix shrinkage on absolute permeability of coal bed reservoirs[END_REF]. The effects of adsorbent deformation on physical adsorption has also been identified by [START_REF] Thommes | Physical adsorption characterization of nanoporous materials: Progress and challenges[END_REF] as one of the next major challenges concerning gas porosimetry in nano-porous non-rigid materials (e.g. metal organic framework). In conclusion, there is now a consensus in the research community that major attention has to be focused on the coupled effects appearing at the nanoscale within microporous media because they may have significant consequences at the macroscale. Experimentally, different authors tried to combine gas adsorption results and volumetric swelling data (see e.g. [START_REF] Gor | Adsorption-induced deformation of nanoporous materials -A review[END_REF] for a review). The pioneering work of [START_REF] Meehan | The Expansion of Charcoal on Sorption of Carbon Dioxide[END_REF] showed the effect of carbon dioxyde sorption on the expansion of charcoal but only mechanical deformation was reported and adsorption quantities were not measured. 
Later on, different authors [START_REF] Briggs | Expansion and contraction of coal caused respectively by the sorption and discharge of gas[END_REF][START_REF] Levine | Model study of the influence of matrix shrinkage on absolute permeability of coal bed reservoirs[END_REF][START_REF] Day | Swelling of australian coals in supercritical co2[END_REF][START_REF] Ottiger | Competitive adsorption equilibria of co2 and ch4 on a dry coal[END_REF][START_REF] Pini | Role of adsorption and swelling on the dynamics of gas injection in coal[END_REF][START_REF] Hol | Competition between adsorption-induced swelling and elastic compression of coal at co 2 pressures up to 100mpa[END_REF][START_REF] Espinoza | Measurement and modeling of adsorptive-poromechanical properties of bituminous coal cores exposed to co2: Adsorption, swelling strains, swelling stresses and impact on fracture permeability[END_REF] performed tests on bituminous coal, because it is of utmost importance in the context of CO 2 geological sequestration and coal bed reservoirs exploitation. However, most results were not complete in a sense that adsorption and swelling experiments were not measured simultaneously [START_REF] Meehan | The Expansion of Charcoal on Sorption of Carbon Dioxide[END_REF][START_REF] Robertson | Measuring and modeling sorption-induced coal strain[END_REF] or performed on exactly the same coal samples [START_REF] Ottiger | Competitive adsorption equilibria of co2 and ch4 on a dry coal[END_REF]. Other authors presented simultaneous in situ adsorption and swelling results but the volumetric strain was extrapolated from a local measurement -using strain gauges [START_REF] Levine | Model study of the influence of matrix shrinkage on absolute permeability of coal bed reservoirs[END_REF][START_REF] Harpalani | Influence of matrix shrinkage and compressibility on gas production from coalbed methane reservoirs[END_REF][START_REF] Battistutta | Swelling and sorption experiments on methane, nitrogen and carbon dioxide on dry selar cornish coal[END_REF] or LVDT sensors [START_REF] Chen | Method for simultaneous measure of sorption and swelling of the block coal under high gas pressure[END_REF][START_REF] Espinoza | Measurement and modeling of adsorptive-poromechanical properties of bituminous coal cores exposed to co2: Adsorption, swelling strains, swelling stresses and impact on fracture permeability[END_REF] -or by monitoring the silhouette expansion [START_REF] Day | Swelling of australian coals in supercritical co2[END_REF]. [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF] presented an experimental setup providing simultaneous in situ measurements of both adsorption and deformation for the same sample in the exact same conditions, which can be directly used for model validation. Gas adsorption measurements are performed using a custom-built manometric apparatus and deformation measurements are performed using a digital image correlation set-up. This set-up allows full-field displacement measurements, which may be crucial for heterogeneous, anisotropic or cracked samples. As far as modeling is concerned, molecular simulations are the classical tools at the nanoscale. 
Important efforts have been involved in molecular simulations in order to characterise adsorption-induced deformation in nanoporous materials [START_REF] Vandamme | Adsorption and strain: The CO2-induced swelling of coal[END_REF]Brochard et al., 2012a;[START_REF] Hoang | Couplings between swelling and shear in saturated slit nanopores : A molecular simulation study[END_REF] and these investigations showed on few configurations that pressures applied on the pore surfaces may be very high (few hundred of MPa), depending on the thermodynamic conditions and on the pore sizes. Note that an alternative approach based on a non-local density functional theory can be used to obtain highly resolved evolutions of pore pressure versus pore widths and bulk pressure in slit-shaped pores for a large spectrum of thermodynamic conditions on the whole range of micropore widths, even for complex fluids [START_REF] Grégoire | Estimation of adsorption-induced pore pressure and confinement in a nanoscopic slit pore by a density functional theory[END_REF]. However, if macroscopic adsorption isotherms may be reconstructed in a consistent way from molecular simulations through the material pore size distribution [START_REF] Khaddour | A fully consistent experimental and molecular simulation study of methane adsorption on activated carbon[END_REF], molecular simulation tools are not tractable to predict resulting deformation at a macroscale due to the fluid confinement in nanopores (pore sizes below 2 nm). Note that [START_REF] Kulasinski | Impact of hydration on the micromechanical properties of the polymer composite structure of wood investigated with atomistic simulations[END_REF] proposed a molecular dynamic study where macroscopic swelling may be reconstructed from water adsorption in mesoporous wood (pore sizes in [4 -10] nm). If adsorption is essentially controlled by the amount and size of the pores, the mechanical effect of the pressure build up inside the pores due to fluid confinement requires some additional description about the topology and spatial organization of the porous network which is not easy to characterize, for sub-nanometric pores especially. Such a result motivates the fact that swelling is usually related to the adsorption isotherms instead of the pore pressure directly, the mechanical effect of the pore pressure being hidden in the poromechanical description. In this context, different enhanced thermodynamical or poromechanical frameworks have been proposed within the last ten years to link adsorption, induced deformation and permeability changes (e.g [START_REF] Pan | A theoretical model for gas adsorption-induced coal swelling[END_REF] 2014)). For instance, Brochard et al. (2012b) (resp. Vermorel and[START_REF] Vermorel | Enhanced continuum poromechanics to account for adsorption induced swelling of saturated isotropic microporous materials[END_REF]) proposed enhanced poromechanical frameworks where swelling volumetric deformation may be estimated as a function of the bulk pressure and a coupling (resp. a confinement) coefficient, which may be deduced from adsorption measurements. However, if these models are consistent with experimental results from the literature, they cannot be considered as truly predictive because the model parameters have to be identified to recover the experimental loading path. 
An incremental poromechanical framework with varying porosity has been proposed by [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF], allowing the full prediction of the swelling induced by adsorption for isotropic nano-porous solids saturated with a single-phase fluid, in reversible and isothermal conditions. This single porosity model has been compared with experimental data obtained by [START_REF] Ottiger | Competitive adsorption equilibria of co2 and ch4 on a dry coal[END_REF] on bituminous coal samples filled with pure CH4 and pure CO2 at T = 45 °C, and a fair agreement was observed for these low porosity coals, but these types of models have to be enhanced to take into account the intrinsic double porosity features of such materials. This study aims at characterizing the influence of an adsorbed phase on the instantaneous deformation of micro-to-macro porous media presenting distinct and well separated porosities in terms of pore size distribution. A model accounting for double porosity is proposed and validated by in-situ and simultaneous experimental comparisons. The novelty of the approach is to propose an extended poromechanical framework taking into account the intrinsic double porosity features of such materials, capable of predicting adsorption-induced swelling for highly porous materials without any fitting parameters.

1. An incremental poromechanical framework with varying porosity for double porosity media

In this section, an incremental poromechanical framework with varying porosity proposed by [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF] for single porosity media is extended to double porosity media. We consider here a double porosity medium with distinct and separated porosities. The small porosity is called adsorption porosity ($\phi_\mu$) and the larger one transport porosity ($\phi_M$). This medium is considered isotropic with a linear poro-elastic behaviour and it is immersed and saturated by a surrounding fluid at bulk pressure $P_b$ under isothermal conditions. Confinement effects may change the thermodynamic properties of the interstitial fluids in both porosities. The adsorption porosity is saturated by an interstitial fluid of density $\rho_\mu$ at pressure $P_\mu$. The transport porosity is fully saturated by an interstitial (single-phase) fluid of density $\rho_M$ at pressure $P_M$ (see Fig. 1). For saturated isotropic porous solids, in reversible and isothermal conditions and under small displacement-gradient assumptions, classical poromechanics may be rewritten for double porosity media [START_REF] Coussy | Poromechanics[END_REF]:

$$dG_s = d\Psi_s + dW_s \qquad (1)$$
$$\phantom{dG_s} = \underbrace{\sigma_{ij} : d\varepsilon_{ij} + P_M\,d\phi_M + P_\mu\,d\phi_\mu}_{d\Psi_s} + \underbrace{d(-P_M\phi_M - P_\mu\phi_\mu)}_{dW_s} \qquad (2)$$
$$\phantom{dG_s} = \sigma_{ij} : d\varepsilon_{ij} - \phi_M\,dP_M - \phi_\mu\,dP_\mu. \qquad (3)$$

In Eqs. (1)-(3), $G_s$, $\Psi_s$ and $W_s$ are energy potentials of the skeleton. The state variables $(\varepsilon_{ij}, \phi_M, \phi_\mu)$ are respectively the infinitesimal strain tensor, the transport porosity and the adsorption porosity. The associated thermodynamical forces $(\sigma_{ij}, P_M, P_\mu)$ are respectively the Cauchy stress tensor and the fluid pore pressures in both porosities.
For an isotropic linear poro-elastic medium, the state equations are then given by:

$$\sigma_{ij} = \frac{\partial G_s}{\partial \varepsilon_{ij}}, \qquad \phi_M = -\frac{\partial G_s}{\partial P_M}, \qquad \phi_\mu = -\frac{\partial G_s}{\partial P_\mu},$$

and then:

$$\begin{cases} d\sigma = K(\phi_M,\phi_\mu)\,d\varepsilon - b_M(\phi_M,\phi_\mu)\,dP_M - b_\mu(\phi_M,\phi_\mu)\,dP_\mu \\ d\phi_M = b_M(\phi_M,\phi_\mu)\,d\varepsilon + \dfrac{dP_M}{N_{MM}(\phi_M,\phi_\mu)} - \dfrac{dP_\mu}{N_{M\mu}(\phi_M,\phi_\mu)} \\ d\phi_\mu = b_\mu(\phi_M,\phi_\mu)\,d\varepsilon - \dfrac{dP_M}{N_{\mu M}(\phi_M,\phi_\mu)} + \dfrac{dP_\mu}{N_{\mu\mu}(\phi_M,\phi_\mu)} \end{cases} \qquad (4)$$

In Eq. 4, $\sigma = \sigma_{kk}/3$ and $\varepsilon = \varepsilon_{kk}$ are respectively the total mean stress and the volumetric strain. $(K, b_M, b_\mu, N_{MM}, N_{M\mu}, N_{\mu M}, N_{\mu\mu})$ are respectively the apparent modulus of incompressibility and six poromechanical properties, which depend on the two evolving porosities $\phi_M$ and $\phi_\mu$ and on the constant skeleton matrix modulus. Considering a single cylindrical porosity², homogenization models [START_REF] Halpin | The Halpin-Tsai Equations: A Review[END_REF] yield:

$$\begin{cases} K(\phi) = \dfrac{K_s G_s (1-\phi)}{G_s + K_s\,\phi}, \qquad G_s = \dfrac{3K_s(1-2\nu_s)}{2(1+\nu_s)} \\ b(\phi) = 1 - \dfrac{K(\phi)}{K_s}, \qquad N(\phi) = \dfrac{K_s}{b(\phi)-\phi}. \end{cases} \qquad (5)$$

In Eq. 5, $\phi$ is the porosity and $(G_s, \nu_s)$ are respectively the shear modulus and the Poisson ratio of the skeleton matrix. Practically, and for high porosity media, an iterative process of homogenization is chosen to avoid discrepancies in the estimation of the apparent properties, as noticed by [START_REF] Barboura | Modélisation micromécanique du comportement de milieux poreux non linéaires : Applications aux argiles compactées[END_REF]. The iterative process of homogenization for a cylindrical porosity is detailed in Appendix A. Full details on the iterative processes for both spherical and cylindrical porosities are presented in [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF]. Considering that the two porosities are distinct, well separated and both cylindrical, the iterative process can be used in two successive steps to determine the different moduli of incompressibility. Note that this two-step homogenization process may be reversed as well, to estimate the skeleton properties knowing the apparent ones:

$$(K, G) = F_n\big(F_n(K_s, G_s, \phi_\mu), \phi_M\big) \quad \text{and} \quad (K_s, G_s) = R_n\big(R_n(K, G, \phi_M), \phi_\mu\big). \qquad (6)$$

In Eq. 6, $(F_n, R_n)$ stand for the standard and the reverse iterative processes of homogenization defined in Eqs. A.1 and A.2 respectively. $K_s$ and $G_s$ (resp. $K$ and $G$) are the skeleton (resp. apparent) incompressibility and shear moduli. Based on stress/strain partitions [START_REF] Coussy | Poromechanics[END_REF] and on the response of the medium saturated by a non-adsorbable fluid [START_REF] Nikoosokhan | CO2 Storage in Coal Seams: Coupling Surface Adsorption and Strain[END_REF], the six poromechanical properties $(b_M, b_\mu, N_{MM}, N_{M\mu}, N_{\mu M}, N_{\mu\mu})$ may be identified:

$$\begin{cases} b_M = 1 - \dfrac{K}{K_\mu}, \qquad b_\mu = K\left(\dfrac{1}{K_\mu} - \dfrac{1}{K_s}\right) \\ \dfrac{1}{N_{MM}} = \dfrac{b_M - \phi_M}{K_\mu}, \qquad \dfrac{1}{N_{M\mu}} = \dfrac{1}{N_{\mu M}} = (b_M - \phi_M)\left(\dfrac{1}{K_\mu} - \dfrac{1}{K_s}\right) \\ \dfrac{1}{N_{\mu\mu}} = \dfrac{b_\mu - \phi_\mu}{K_s} + (b_M - \phi_M)\left(\dfrac{1}{K_\mu} - \dfrac{1}{K_s}\right) \end{cases} \quad \text{with } K_\mu = F_n(K_s, G_s, \phi_\mu). \qquad (7)$$

For a porous medium saturated by a fluid under isothermal conditions (isotropic surrounding/bulk pressure: $P_b$, density: $\rho_b$), $d\sigma = -dP_b$ and Eq. 4 yields:

$$d\varepsilon = -\frac{dP_b}{K_s}, \qquad d\phi_M = -\frac{\phi_M}{K_s}\,dP_b, \qquad d\phi_\mu = -\frac{\phi_\mu}{K_s}\,dP_b. \qquad (8)$$

Therefore, classical poromechanics predicts a shrinkage of the porous matrix and a decrease of the porosity under bulk pressure.
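For readers who prefer a computational view of Eqs. (5)-(7), the following Python sketch illustrates how the six poromechanical coefficients can be evaluated for given porosities. It is not from the original paper: the function names are ours, a single Halpin-Tsai-type step is used in place of the full iterative process F_n of Appendix A (which the authors recommend for such high porosities), and the numerical values are arbitrary.

```python
import numpy as np

def K_cyl(phi, Ks, Gs):
    """Apparent bulk modulus for a single cylindrical porosity, Eq. (5)."""
    return Ks * Gs * (1.0 - phi) / (Gs + Ks * phi)

def G_from_K(Ks, nu_s):
    """Shear modulus of the skeleton matrix from (K_s, nu_s), Eq. (5)."""
    return 3.0 * Ks * (1.0 - 2.0 * nu_s) / (2.0 * (1.0 + nu_s))

def biot_coefficients(phi_M, phi_mu, Ks, Gs):
    """Poromechanical coefficients of Eq. (7) for a double porosity medium.
    A single homogenization step crudely stands in for F_n (Appendix A)."""
    K_mu = K_cyl(phi_mu, Ks, Gs)      # matrix + adsorption porosity
    K = K_cyl(phi_M, K_mu, Gs)        # + transport porosity (simplified second step)
    b_M = 1.0 - K / K_mu
    b_mu = K * (1.0 / K_mu - 1.0 / Ks)
    inv_N_MM = (b_M - phi_M) / K_mu
    inv_N_muM = (b_M - phi_M) * (1.0 / K_mu - 1.0 / Ks)   # = 1/N_Mmu
    inv_N_mumu = (b_mu - phi_mu) / Ks + inv_N_muM
    return K, b_M, b_mu, inv_N_MM, inv_N_muM, inv_N_mumu

# Illustrative values only (loosely inspired by Table 1, not fitted)
Ks, nu_s = 7.0e9, 0.26
Gs = G_from_K(Ks, nu_s)
print(biot_coefficients(phi_M=0.41, phi_mu=0.32, Ks=Ks, Gs=Gs))
```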
This shrinkage has been confirmed by experimental measurements (e.g. [START_REF] Reucroft | Gas-induced swelling in coal[END_REF] on a natural coal with a non-adsorbable gas). Considering that the fluid is confined within both porosities, the thermodynamic properties (pressures: $P_M$, $P_\mu$; densities: $\rho_M$, $\rho_\mu$) of the interstitial fluid in the two porosities $(\phi_M, \phi_\mu)$ differ from the surrounding ones $(P_b, \rho_b)$, but the thermodynamic equilibrium imposes that the three fluids are chemically balanced (equality of the chemical potentials $\mu_b$, $\mu_M$ and $\mu_\mu$). Assuming that the Gibbs-Duhem equation ($dP = \rho\,d\mu$) still applies for both the surrounding fluid and the interstitial ones, a macroscopic relation between the interstitial pore pressures and the surrounding one may be derived, similarly to the relation initially proposed by [START_REF] Vermorel | Enhanced continuum poromechanics to account for adsorption induced swelling of saturated isotropic microporous materials[END_REF] and used in Perrier et al. (2015) for single porosity media:

$$dP_M = \frac{\rho_M\,dP_b}{\rho_b} = \frac{dP_b}{1-\chi_M}, \qquad dP_\mu = \frac{\rho_\mu\,dP_b}{\rho_b} = \frac{dP_b}{1-\chi_\mu}. \qquad (9)$$

In Eq. 9, $\chi_M = 1 - \rho_b/\rho_M$ and $\chi_\mu = 1 - \rho_b/\rho_\mu$ are the confinement degrees in the transport and in the adsorption porosities respectively, which characterize how confined the interstitial fluid is, through the numbers of adsorbate moles $n^{ex}_M$ and $n^{ex}_\mu$ that exceed the number of fluid moles at bulk conditions in porosities $\phi_M$ and $\phi_\mu$ respectively:

$$\chi_M = \frac{n^{ex}_M}{n^{tot}_M} \ \text{ with } \ n^{tot}_M = n^{ex}_M + \frac{\rho_b V_{\phi_M}}{M}, \qquad \chi_\mu = \frac{n^{ex}_\mu}{n^{tot}_\mu} \ \text{ with } \ n^{tot}_\mu = n^{ex}_\mu + \frac{\rho_b V_{\phi_\mu}}{M}. \qquad (10)$$

In Eq. 10, $(V_{\phi_M}, V_{\phi_\mu})$ are the connected porous volumes corresponding to the transport porosity $\phi_M$ and to the adsorption porosity $\phi_\mu$ respectively, $n^{ex}_M$ and $n^{ex}_\mu$ are the numbers of adsorbate moles that exceed the number of fluid moles at bulk conditions, and $n^{tot}_M$ and $n^{tot}_\mu$ are the total numbers of moles of interstitial fluid in porosities $\phi_M$ and $\phi_\mu$ respectively. Generally, there is no way to link separately the two confinement degrees $\chi_M$ and $\chi_\mu$ to quantities that can be measured experimentally, because the partition between the two porosities of the excess number of adsorbate moles $n^{ex}$, which can be measured experimentally, is unknown ($n^{ex} = n^{ex}_M + n^{ex}_\mu$). However, assuming that the two scales of porosities are well separated, one can consider that most of the adsorption phenomenon occurs in the adsorption porosity ($n^{ex}_\mu \gg n^{ex}_M$) and that the interstitial fluid is not confined in the transport porosity:

$$\chi_\mu \approx \frac{n^{ex}}{n^{tot}_\mu} \ \text{ and } \ dP_\mu = \frac{dP_b}{1-\chi_\mu}, \qquad \chi_M \approx 0 \ \text{ and } \ dP_M = dP_b. \qquad (11)$$

Finally, a new incremental poromechanical framework with varying porosities for double porosity media is proposed:

$$\begin{cases} d\varepsilon = \left(\dfrac{b_\mu}{1-\chi_\mu} + b_M - 1\right)\dfrac{dP_b}{K} \\[2mm] d\phi_\mu = \Bigg[\underbrace{\left(\dfrac{b_\mu}{1-\chi_\mu} + b_M - 1\right)\dfrac{b_\mu}{K}}_{T_1} + \underbrace{\dfrac{1}{N_{\mu\mu}\,(1-\chi_\mu)}}_{T_2} - \underbrace{\dfrac{1}{N_{\mu M}}}_{T_3}\Bigg]\,dP_b \\[2mm] d\phi_M = \Bigg[\underbrace{\left(\dfrac{b_\mu}{1-\chi_\mu} + b_M - 1\right)\dfrac{b_M}{K}}_{T_4} - \underbrace{\dfrac{1}{N_{\mu M}\,(1-\chi_\mu)}}_{T_5} + \underbrace{\dfrac{1}{N_{MM}}}_{T_6}\Bigg]\,dP_b \\[2mm] \chi_\mu = \dfrac{n^{ex}}{n^{tot}_\mu} \ \text{ with } \ n^{tot}_\mu = n^{ex} + \dfrac{\rho_b V_{\phi_\mu}}{M} = n^{ex} + \dfrac{m_s}{M}\dfrac{\rho_b}{\rho_s}\dfrac{\phi_\mu}{1-\phi_\mu-\phi_M} \end{cases} \qquad (12)$$

In Eq. 12, $K(\phi_M,\phi_\mu)$ is given by Eq. 6, $(b_M, b_\mu, N_{MM}, N_{\mu M}, N_{\mu\mu})$ all depend on $(\phi_M,\phi_\mu)$ and are given by Eq. 7, $(n^{ex}, P_b, \rho_b)$ are experimentally measurable, and $(m_s, M, \rho_s)$ are respectively the adsorbent sample mass, the molar mass of the adsorbed gas and the density of the material composing the solid matrix of the porous adsorbent.

2. Validation by experimental comparisons on a double porosity synthetic activated carbon

The experimental results obtained by [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF] on a double porosity synthetic activated carbon (Chemviron) are used in this study for validation purposes.
The main advantage of the proposed method is to provide simultaneous in situ measurements of both adsorption and deformation for the same sample in the exact same conditions. The material and the adsorption-induced strain measurements are briefly recalled in section 2.1. The model input parameters are identified in section 2.2 and finally comparisons between experimental and model results are performed in section 2.3.

2.1. Material description and adsorption-induced strain measurements

In this section, the experimental results obtained by [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF] on a double porosity synthetic activated carbon are briefly recalled. Full details may be found in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF]. An activated carbon (Chemviron) is used as adsorbent material. The sample is a cylinder and its main characteristics are collected in Table 1. The geometrical dimensions have been measured with a caliper, the mass has been measured with a Precisa scale (XT 2220 M-DR), and the specific pore surface has been measured with a gas porosimeter (Micromeretics ASAP 2020) according to the BET theory [START_REF] Brunauer | Adsorption of Gases in Multimolecular Layers[END_REF]. The specific micropore volume has been estimated according to the IUPAC classification [START_REF] Thommes | Physisorption of gases, with special reference to the evaluation of surface area and pore size distribution (IUPAC Technical Report)[END_REF] (pore diameter below 2 nm) based on a pore size distribution deduced from a low-pressure adsorption isotherm (N2 at 77 K, from 8·10⁻⁸ to 0.99 in relative pressure P/P0) measured with the same gas porosimeter according to the HK theory [START_REF] Horvath | Method for the calculation of effective pore size distribution in molecular sieve carbon[END_REF]. The specific macropore volume has been estimated according to the IUPAC classification [START_REF] Thommes | Physisorption of gases, with special reference to the evaluation of surface area and pore size distribution (IUPAC Technical Report)[END_REF] (pore diameter above 50 nm) based on a pore size distribution deduced from mercury intrusion porosimetry. Both porosimetry techniques show that there are almost no pores with diameters between 2 nm and 50 nm in this material. The two porosities are well separated in terms of pore size distribution. The adsorbates, CO2 and CH4, as well as the calibrating gas, He, are provided with a minimum purity of 99.995%, 99.995% and 99.999% respectively. Fig. 3.a presents the results in terms of excess adsorption/desorption isotherms. CO2 and CH4 gas sorption in activated carbon is a reversible phenomenon and no hysteresis is observed between adsorption and desorption paths, as previously reported in the literature in [START_REF] Khaddour | A fully consistent experimental and molecular simulation study of methane adsorption on activated carbon[END_REF]. Noting that the adsorbed quantity increases when temperature decreases, Fig. 3.a shows that CO2 is preferentially adsorbed in carbon compared to CH4, as previously reported in the literature [START_REF] Ottiger | Competitive adsorption equilibria of co2 and ch4 on a dry coal[END_REF][START_REF] Battistutta | Swelling and sorption experiments on methane, nitrogen and carbon dioxide on dry selar cornish coal[END_REF].
This is the reason why CO2 injection is used to increase CH4 recovery in Enhanced Coal Bed Methane production. Fig. 3.b presents the results in terms of adsorption-induced volumetric strain. CO2 and CH4 gas adsorption-induced deformation is a reversible phenomenon but a small hysteresis is observed between the adsorption and the desorption paths. This hysteresis is not linked to the adsorption-deformation couplings but is due to an elastic compaction of the carbon matrix grains [START_REF] Perrier | Coupling between adsorption and deformation in microporous media[END_REF]. Cycling effects and material compaction are detailed in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF]. For a given pressure, CO2 adsorption produces more volumetric deformation than CH4 adsorption, which is the source of the rapid decrease in CO2 injectivity during coal bed methane production enhanced by CO2 injection.

2.2. Identification of model input parameters

The input parameters of the incremental poromechanical framework with varying porosities for double porosity media presented in Eq. (12) are:
• The adsorbent sample mass ($m_s$), the molar mass of the adsorbed gas ($M_{CO_2}$, $M_{CH_4}$), the density of the material composing the solid matrix of the porous adsorbent ($\rho_s$), the initial transport porosity ($\phi^0_M$) and the initial adsorption porosity ($\phi^0_\mu$), which are all given in Table 1.
• The surrounding fluid bulk pressure ($P_b$), the excess adsorbed quantities ($n^{ex}$) and the bulk density ($\rho_b$), which are all experimentally measured or deduced. From the experimental measurements of the excess adsorption isotherm (Fig. 3.a), a power-law fit is identified and used as an input in the incremental estimation of Eq. (12). From the bulk pressure and the temperature, the bulk density ($\rho_b$) of the surrounding fluid is estimated by its state equation using the AGA8 software [START_REF] Starling | Compressibility and super compressibility for natural gas and other hydrocarbon gases[END_REF].
• The skeleton incompressibility ($K_s$) and shear ($G_s$) moduli are deduced from the apparent ones using the two-step reversed homogenization process presented in Eq. (6). The apparent properties are experimentally measured using an ultra-sonic technique where longitudinal and transverse waves are generated by a piezo-electric source and detected by a laser Doppler vibrometer [START_REF] Shen | Seismic wave propagation in heterogeneous limestone samples[END_REF]:

$$K = \rho_s\left(V_p^2 - \tfrac{4}{3}V_s^2\right), \qquad G = \rho_s V_s^2. \qquad (13)$$

In Eq. 13, $(V_p, V_s)$ are respectively the velocities of the longitudinal and the transverse waves. With $V_p = (302 \pm 2)$ m.s⁻¹ and $V_s = (176 \pm 1)$ m.s⁻¹, we get $K = (120 \pm 15)$ MPa and $G = (75 \pm 8)$ MPa, and then $K_s = (6.0 \pm 0.6)$ GPa and $G_s = (3.5 \pm 0.4)$ GPa. Note that the experimental technique developed by [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF] and allowing simultaneous measurements of adsorption-induced swelling may also be used to characterize $K_s$ directly, as previously reported by [START_REF] Hol | Competition between adsorption-induced swelling and elastic compression of coal at co 2 pressures up to 100mpa[END_REF]. Indeed, if a non-adsorbable gas (such as helium) is used, the skeleton incompressibility modulus may be deduced from bulk pressure and volumetric shrinkage strain measurements using Eq. 8 (a short numerical illustration of both identification routes is given below).
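The two identification routes just mentioned (ultrasonic velocities via Eq. (13), and the helium shrinkage slope via Eq. (8)) are straightforward to script. The sketch below uses the velocity values quoted in the text and purely synthetic helium data; it is illustrative only and does not reproduce the uncertainty analysis of the original study.

```python
import numpy as np

# Eq. (13): apparent moduli from ultrasonic velocities (values quoted in the text)
rho_s = 2400.0            # kg/m^3, solid matrix density from Table 1
Vp, Vs = 302.0, 176.0     # m/s
K = rho_s * (Vp**2 - 4.0 / 3.0 * Vs**2)
G = rho_s * Vs**2
print(f"K = {K/1e6:.0f} MPa, G = {G/1e6:.0f} MPa")   # ~120 MPa and ~74 MPa

# Eq. (8): with a non-adsorbable gas, eps = -P_b / K_s, so a linear fit of the
# measured volumetric strain against bulk pressure gives K_s = -1/slope.
rng = np.random.default_rng(0)
P_b = np.array([0.5e5, 1e5, 2e5, 3e5, 4e5, 5e5])               # Pa (synthetic)
eps = -P_b / 6.5e9 + rng.normal(0.0, 2e-6, P_b.size)           # synthetic "data"
slope, _ = np.polyfit(P_b, eps, 1)
print(f"K_s ~ {-1.0/slope/1e9:.1f} GPa")
```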
Figure 2 presents a typical result of the direct $K_s$ identification. An experimental value of $K_s = (6 \pm 1)$ GPa is then obtained, which is in good agreement with the value deduced from the ultrasonic measurements. Note that dynamic and static mechanical properties may differ for many materials, so a perfect match is not expected here. However, for this material of low rigidity, the difference between the static and the dynamic properties is relatively small and may lie within the measurement uncertainty. As discussed in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF], activated carbon is subjected to cycling effects and material compaction. The process to produce the activated carbon is composed of three phases: first the carbon is ground, then it is activated, and finally it is compacted to obtain a cylindrical sample. During the first cycle of gas adsorption, there is a competition between the grain compaction shrinkage and the adsorption-induced volumetric swelling, and a large hysteresis is observed because of the material compaction. This compaction is mostly irreversible and, after the first cycle, the second and the third cycles are reversible. Fig. 3 presents the results in terms of adsorption-induced volumetric strain obtained during the third cycle, when the activated carbon is fully compacted and the swelling strain is fully reversible. The two other cycles are presented in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF]. However, the ultra-sonic technique used to identify the apparent elastic properties was performed on the sample before compaction, and the skeleton elastic properties may differ after compaction. Therefore, a second direct $K_s$ identification has been performed after the adsorption-induced swelling test and the value $K_s = (7.0 \pm 0.8)$ GPa is obtained. Assuming that the shear skeleton modulus is affected by the compaction in the same proportion as the incompressibility one - i.e. assuming that the Poisson's ratio is not affected by the compaction - the following skeleton moduli are identified and further used in the comparisons with experimental data:

$$K_s = (7.0 \pm 0.8)\ \text{GPa}, \qquad G_s = (4.1 \pm 0.4)\ \text{GPa}. \qquad (14)$$

2.3. Comparisons between experimental and model results

Fig. 3 also presents the results obtained with the double porosity adsorption-induced deformation model presented in part 1. All the parameters being identified in section 2.2, the volumetric strain induced by gas adsorption is estimated step by step, as well as the evolutions of the transport and adsorption porosities and of the poromechanical properties, without any fitted parameters. Fig. 3.b shows that the double porosity adsorption-induced deformation model presented in part 1 is capable of predicting the swelling induced by both CH4 and CO2 gas adsorption without any additional fitting parameters. The input parameters of the model are those collected in Table 1, the skeleton elastic moduli corrected as explained in the previous section, and the adsorption isotherms. For this activated carbon, a swelling strain of ≈ 2% is recovered for a CO2 bulk pressure up to 46 bar and a swelling strain of ≈ 1.5% is recovered for a CH4 bulk pressure up to 107 bar. Fig. 4 shows the same results in terms of excess adsorption quantities versus adsorption-induced swelling.
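The step-by-step estimation mentioned above can be sketched numerically as follows. This is only a schematic integrator, not the authors' code: the excess-adsorption isotherm and the bulk density are idealized analytic stand-ins for the power-law fit and the AGA8 state equation, the poromechanical coefficients are obtained from a single-step (non-iterative) version of Eqs. (5)-(7), and all names and numerical values are our own assumptions. As a consequence, the predicted swelling is indicative only and underestimates the measured one.

```python
import numpy as np

Ks, Gs = 7.0e9, 4.1e9                              # skeleton moduli, Eq. (14)
M_gas, m_s, rho_s = 44.01e-3, 4.137e-3, 2400.0     # CO2 molar mass, sample mass, matrix density

def K_cyl(phi, K, G):                              # Eq. (5), single step
    return K * G * (1.0 - phi) / (G + K * phi)

def coeffs(phi_M, phi_mu):                         # Eq. (7), simplified (no iteration)
    K_mu = K_cyl(phi_mu, Ks, Gs)
    K = K_cyl(phi_M, K_mu, Gs)
    b_M, b_mu = 1 - K / K_mu, K * (1 / K_mu - 1 / Ks)
    iN_MM = (b_M - phi_M) / K_mu
    iN_muM = (b_M - phi_M) * (1 / K_mu - 1 / Ks)
    iN_mumu = (b_mu - phi_mu) / Ks + iN_muM
    return K, b_M, b_mu, iN_MM, iN_muM, iN_mumu

n_ex = lambda Pb: 8e-3 * Pb / (Pb + 1.0e6)         # mol, stand-in isotherm fit
rho_b = lambda Pb: Pb * M_gas / (8.314 * 318.15)   # kg/m3, ideal-gas stand-in for AGA8

phi_M, phi_mu, eps = 0.41, 0.32, 0.0
P = np.linspace(1e4, 4.6e6, 400)
for P0, P1 in zip(P[:-1], P[1:]):                  # incremental integration of Eq. (12)
    Pb, dPb = 0.5 * (P0 + P1), P1 - P0
    K, b_M, b_mu, iN_MM, iN_muM, iN_mumu = coeffs(phi_M, phi_mu)
    n_tot = n_ex(Pb) + (m_s / M_gas) * (rho_b(Pb) / rho_s) * phi_mu / (1 - phi_mu - phi_M)
    chi = n_ex(Pb) / n_tot                         # confinement degree chi_mu
    A = b_mu / (1 - chi) + b_M - 1.0
    eps    += A / K * dPb                          # volumetric strain increment
    phi_mu += (A * b_mu / K + iN_mumu / (1 - chi) - iN_muM) * dPb
    phi_M  += (A * b_M / K - iN_muM / (1 - chi) + iN_MM) * dPb
print(f"predicted swelling at {P[-1]/1e5:.0f} bar: {100 * eps:.2f} %")
```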
One can note that the relationship between excess adsorbed quantities and the resulting swelling is not linear. Moreover, the two evolutions for the two different gases are close together, showing that the volumetric swelling is directly linked to the excess adsorbed quantity. Fig. 5 shows that, for this challenging highly micro- and macro-porous activated carbon, a single porosity model such as the one presented in [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF] highly overestimates the swelling deformation induced by gas adsorption. The coupling appearing between the evolving adsorption porosity and the evolving transport porosity limits the macroscopic swelling of the material. This can only be captured with a double porosity model.

Evolution of the poromechanical properties under free swelling

The proposed double porosity model being validated by experimental comparisons in the previous section, we study here the evolution of the poromechanical properties under this free swelling. Fig. 6 presents the evolution of the confinement degree in the adsorption porosity under free swelling for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively. At the early adsorption stage, the confinement degree is high (≥ 0.9) for both CO2 and CH4. This is due to the fact that, at the onset of adsorption, the interstitial fluid density is much higher than the bulk density ($\rho_b \ll \rho_\mu$). Upon swelling, the confinement degree decreases for both CO2 and CH4. This may be due to two main reasons: firstly, the ratio of the bulk density over the interstitial fluid density is increasing; secondly, the adsorption porosity is increasing due to the activated carbon swelling, and therefore the confinement of the fluid decreases. In this way, the term $\phi_\mu/(1-\phi_\mu-\phi_M)$ in Eq. 12 increases even faster. Therefore the total number of adsorbed gas moles increases faster than the excess number of moles and the confinement degree decreases, the CO2 interstitial fluid being more confined than the CH4 one.

Figure 2: Activated carbon $K_s$ direct identification based on Eq. 8 thanks to helium bulk pressure and volumetric shrinkage strain measurements (the slope directly provides $K_s$).

Fig. 7 presents the evolution of the poromechanical properties in terms of relative variations under free swelling. Fig. 7.a shows that all porosities are increasing with increasing bulk pressure under free swelling and, for this activated carbon, a relative variation of the total porosity of ≈ 2% is recovered for a CO2 bulk pressure up to 46 bar and a relative variation of ≈ 1.5% is recovered for a CH4 bulk pressure up to 107 bar.
Even if it may be counter-intuitive, Fig. 7.a shows that, under free swelling, the transport porosity is not decreasing even if the adsorption porosity increases. It simply increases less than the adsorption porosity. Note that this cannot be generalised to other materials or thermodynamic conditions, because it results from the complex couplings appearing between the transport and the adsorption porosities in Eq. 12. For the conditions considered here, Fig. 8 shows that, whatever the bulk pressure, the different contributions of the terms ($T_1$ to $T_6$) in Eq. 12 lead to a positive derivative of both porosities with respect to the bulk pressure, and therefore both porosities increase upon swelling (see Eq. 12 for the expressions of the $T_1$ to $T_6$ terms).

$$\begin{cases} d\sigma = -\left(b_M + \dfrac{b_\mu}{1-\chi_\mu}\right) dP_b \\[2mm] d\phi_\mu = \Bigg[\underbrace{\dfrac{1}{N_{\mu\mu}\,(1-\chi_\mu)}}_{T_7} - \underbrace{\dfrac{1}{N_{\mu M}}}_{T_8}\Bigg]\,dP_b \\[2mm] d\phi_M = \Bigg[-\underbrace{\dfrac{1}{N_{\mu M}\,(1-\chi_\mu)}}_{T_9} + \underbrace{\dfrac{1}{N_{MM}}}_{T_{10}}\Bigg]\,dP_b \\[2mm] \chi_\mu = \dfrac{n^{ex}}{n^{tot}_\mu} \ \text{ with } \ n^{tot}_\mu = n^{ex} + \dfrac{\rho_b V_{\phi_\mu}}{M} = n^{ex} + \dfrac{m_s}{M}\dfrac{\rho_b}{\rho_s}\dfrac{\phi_\mu}{1-\phi_\mu-\phi_M} \end{cases} \qquad (15)$$

Fig. 9 presents the evolution of the poromechanical properties in terms of relative variations under constrained swelling. Fig. 9.a shows that the total porosity and the transport porosity are now decreasing, whereas the adsorption porosity still increases. Indeed, for the conditions considered here, Fig. 10 shows that, whatever the bulk pressure, $T_7$ and $T_8$ in Eq. 15 lead to a positive derivative of the adsorption porosity with respect to the bulk pressure, whereas $T_9$ and $T_{10}$ in Eq. 15 lead to a negative derivative of the transport porosity. Figs. 9.b to 9.d show the corresponding evolution of the other poromechanical properties with increasing bulk pressure. Fig. 11 shows the evolution of the total mean stress under constrained swelling. The volumetric strain being imposed equal to zero, the continuum is subjected to compressive total mean stresses.

Concluding remarks

• A new incremental poromechanical framework with varying porosity has been proposed, allowing the prediction of the swelling induced by adsorption. Within this framework, the adsorption-induced strains are incrementally estimated based on experimental adsorption isotherm measurements only. The evolution of the porosity and the evolutions of the poromechanical properties, such as the apparent incompressibility modulus, the apparent shear modulus, the Biot modulus and the Biot coefficient, are also predicted by the model.
• A double porosity model has been proposed, for which the adsorption porosity and the transport porosity are distinguished. These two scales of porosity are supposed to be well separated and a two-step homogenization process is used to estimate incrementally the evolution of the poromechanical properties, which couple the evolutions of both porosities.
• An existing custom-built experimental set-up has been used to test the relevance of this double porosity model. A challenging highly micro- and macro-porous activated carbon has been chosen for this purpose. An adsorption porosity of 32.2±0.2% and a transport porosity of 41.6±0.2% have been characterized, as well as its apparent and skeleton elastic properties. In situ adsorption-induced swelling has been measured for pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively, and the corresponding model responses have been estimated.
It has been shown that the double porosity model is capable of accurately predicting the swelling induced by both CH4 and CO2 gas adsorption without any fitting parameters. Conversely, it has been shown that a single porosity model highly overestimates the swelling deformation induced by gas adsorption for this highly micro- and macro-porous activated carbon. The coupling appearing between the evolving adsorption porosity and the evolving transport porosity limits the macroscopic swelling of the material. This can only be captured with a double porosity model.
• After validation, the double porosity model has been used to discuss the evolution of the poromechanical properties under free and constrained swelling. The case-study of constrained swelling consists here in assuming a global volumetric strain equal to zero. It has been shown that, for the considered material, all porosities are increasing with increasing bulk pressure under free swelling, whereas the total porosity and the transport porosity are decreasing while the adsorption porosity still increases under constrained swelling.

Figure 1: Schematic of a double porosity medium.

Fig. 3 presents the results of these simultaneous measurements for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively. Full-field deformation maps and the collected experimental data are reported in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF].

Figure 3: Simultaneous adsorption and induced swelling measurements for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively: (a) experimental excess adsorption isotherms; (b) comparison between experimental and modeling adsorption-induced swelling results.

Figure 4: Comparison between experimental and modeling results in terms of excess adsorption quantities versus adsorption-induced swelling for an activated carbon: (a) CO2 at T = 318.15 K; (b) CH4 at T = 303.15 K.
Due to the increase of the porosities, Fig. 7.b shows that the incompressibility moduli are decreasing under free swelling, K decreasing faster than $K_\mu$. Consequently, $b_M$ is increasing and $b_\mu$ is decreasing upon free swelling, as shown in Fig. 7.c. The evolutions of the coupling poromechanical properties $N_{MM}$, $N_{\mu\mu}$, $N_{\mu M}$ are more difficult to anticipate, but Fig. 7.d shows that they are all decreasing upon swelling.

Figure 7: Evolution of the poromechanical properties in terms of relative variations ($\delta_r X = (X - X_0)/X_0$) under free swelling for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively.

Figure 9: Evolution of the poromechanical properties in terms of relative variations under constrained swelling for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively.

Table 1: Main characteristics of the adsorbent and the adsorbates.

CO2 and CH4 excess adsorption isotherms are built step by step from gas adsorption measurements performed using a custom-built manometric set-up. Simultaneously, adsorption-induced swelling strains are measured based on digital image correlation. Full details are provided in [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF].

Property (Unit): Symbol = Value
Height (cm): h = 1.922 ± 0.004
Diameter (cm): d = 2.087 ± 0.002
Volume (ml): V_ech = 6.57 ± 0.03
Adsorbent sample mass (g): m_s = 4.137 ± 0.001
Solid matrix density (kg/L): ρ_s = 2.4 ± 0.8
Specific pore surface (m².g⁻¹): S_BET = 1090 ± 10
Specific micropore volume (cm³.g⁻¹): v_φµ = 0.51 ± 0.01
Adsorption porous volume (cm³): V_φµ = 2.115 ± 0.001
Adsorption porosity (%): φ⁰_µ = 32 ± 1
Specific macropore volume (cm³.g⁻¹): v_φM = 0.66 ± 0.01
Transport porous volume (cm³): V_φM = 2.712 ± 0.001
Transport porosity (%): φ⁰_M = 41 ± 1
Total porosity (%): φ⁰ = 73 ± 2
CO2 molar mass (g.mol⁻¹): M_CO2 = 44.01
CH4 molar mass (g.mol⁻¹): M_CH4 = 16.04

² This assumption is discussed in [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF] where both spherical and cylindrical porosities are considered.

Acknowledgements. Financial support from the Région Aquitaine through the grant CEPAGE (20121105002), from the Conseil Départemental 64 through the grant CEPAGE2 (2015 0768), from the Institut Carnot ISIFoR and from the Université de Pau et des Pays de l'Adour through the grant Bonus Qualité Recherche is gratefully acknowledged. We also gratefully acknowledge Dr. Frédéric Plantier and Dr. Christelle Miqueu for their advice and our discussions, and Dr. Valier Poydenot for his help concerning the ultra-sonic technique and the identification of the apparent properties. D. Grégoire and G. Pijaudier-Cabot are fellows of the Institut Universitaire de France.
Figure 5: Comparison between the adsorption-induced swelling results provided by the single and the double porosity models for an activated carbon filled with pure CO2 and pure CH4 at T = 318.15 K and T = 303.15 K respectively: (1) Experiment [START_REF] Perrier | A novel experimental set-up for simultaneous adsorption and induced deformation measurements in microporous materials[END_REF], (2) Model (this study), (3) Model [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF].

Case-study of constrained swelling

In this section, one case-study of constrained swelling is considered. The volumetric strain is then assumed to be equal to zero and Eq. 12 may be rewritten accordingly as Eq. 15.

Appendix A. Iterative process of homogenization

Following Smaoui-Barboura [START_REF] Barboura | Modélisation micromécanique du comportement de milieux poreux non linéaires : Applications aux argiles compactées[END_REF], an iterative homogenization process may be applied to the linear homogenization functions used for a cylindrical porosity (Eq. 5)³. Considering a given cylindrical porosity $\phi$, the local skeleton properties $(K_s, G_s)$ and a given number of increments $n$, the global homogenized properties $(K_m, G_m)$ are determined step by step using the following scheme:

$$(K_m, G_m) = F_n(K_s, G_s, \phi), \ \text{ in which the moduli are updated at each increment } i \text{ from } \big(\phi^{(i)}, K^{(i-1)}, G^{(i-1)}\big). \qquad (A.1)$$

Moreover, the latter iterative process may be reversed to determine step by step the local skeleton properties $(K_s, G_s)$ knowing the homogenized properties $(K_m, G_m)$ and a given number of increments $n$:

$$(K_s, G_s) = R_n(K_m, G_m, \phi) \ \text{ with: } \ \begin{cases} K^{(0)} = K_m, \quad G^{(0)} = G_m, \quad \Delta\phi = \dfrac{\phi}{n}, \quad \phi^{(i)} = \dfrac{\Delta\phi}{1-\phi+i\,\Delta\phi} \\ K^{(i)} = H_K\big(\phi^{(i)}, K^{(i-1)}, G^{(i-1)}\big), \quad G^{(i)} = H_G\big(\phi^{(i)}, K^{(i-1)}, G^{(i-1)}\big) \\ K_s = K^{(n)}, \quad G_s = G^{(n)}, \end{cases}$$

where:

$$H_K(\phi, K_s, G_s) = K_s + \frac{\phi K_s}{1-(1-\phi)\left(\dfrac{K_s}{K_s+G_s}\right)}, \qquad H_G(\phi, K_s, G_s) = G_s + \frac{\phi G_s}{1-(1-\phi)\left(\dfrac{K_s+2G_s}{2K_s+2G_s}\right)}. \qquad (A.2)$$

³ For a full description of the iterative processes for both spherical and cylindrical porosities, see [START_REF] Perrier | Poromechanics of adsorption-induced swelling in microporous materials: a new poromechanical model taking into account strain effects on adsorption[END_REF].
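A compact numerical transcription of this n-step structure is sketched below in Python. It is indicative only: the helper name iterate_porosity is ours, the incremental-porosity expression is the one appearing in Eq. (A.2), and the inner closed-form updates (the H_K/H_G of the reverse scheme, or the forward counterparts based on Eq. (5)) must be supplied by the user from the original reference; the shear update shown in the example is a simple placeholder, not the expression used by the authors.

```python
def iterate_porosity(K0, G0, phi, update_K, update_G, n=1000):
    """n-step incremental scheme in the spirit of F_n / R_n (Eqs. A.1-A.2):
    the porosity phi is introduced (or removed) in n increments phi_i, and the
    moduli are updated at each increment by the supplied closed forms."""
    K, G = K0, G0
    dphi = phi / n
    for i in range(1, n + 1):
        phi_i = dphi / (1.0 - phi + i * dphi)   # incremental porosity (Eq. A.2)
        K, G = update_K(phi_i, K, G), update_G(phi_i, K, G)
    return K, G

# Forward example: introduce the adsorption then the transport porosity using
# the single-step bulk-modulus form of Eq. (5); the shear update is a crude
# dilution placeholder (assumption, not taken from the reference).
step_K = lambda p, K, G: K * G * (1.0 - p) / (G + K * p)
step_G = lambda p, K, G: G * (1.0 - p)
K_app, G_app = iterate_porosity(*iterate_porosity(7.0e9, 4.1e9, 0.32, step_K, step_G),
                                0.41, step_K, step_G)
print(f"apparent moduli ~ {K_app/1e6:.0f} MPa, {G_app/1e6:.0f} MPa")
```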
49,317
[ "175652", "169868" ]
[ "83656", "83656" ]
01756975
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01756975/file/asprs_2017_range.pdf
P Biasutti email: pierre.biasutti@math.u-bordeaux.fr J-F Aujol M Brédif A Bugeau RANGE-IMAGE: INCORPORATING SENSOR TOPOLOGY FOR LiDAR POINT CLOUD PROCESSING This paper proposes a novel methodology for LiDAR point cloud processing that takes advantage of the implicit topology of various LiDAR sensors to derive 2D images from the point cloud while bringing spatial structure to each point. The interest of such a methodology is then proved by addressing the problems of segmentation and disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle those problems directly in the 3D space. This work promotes an alternative approach by using this image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. Using the image derived from the sensor data by exploiting the sensor topology, a semi-automatic segmentation procedure based on depth histograms is presented. Then, a variational image inpainting technique is introduced to reconstruct the areas that are occluded by objects. Experiments and validation on real data prove the effectiveness of this methodology both in terms of accuracy and speed. INTRODUCTION Over the past decade, street-based Mobile Mapping Systems (MMS) have encountered a large success as the onboard 3D sensors are able to map full urban environments with a very high accuracy. These systems are now widely used for various applications from urban surveying to city modeling [START_REF] Serna | Urban accessibility diagnosis from mobile laser scanning data[END_REF][START_REF] Hervieu | Road marking extraction using a Model&Data-driven RJ-MCMC[END_REF][START_REF] El-Halawany | Detection of road curb from mobile terrestrial laser scanner point cloud[END_REF][START_REF] Hervieu | Semi-automatic road/pavement modeling using mobile laser scanning[END_REF][START_REF] Goulette | An integrated onboard laser range sensing system for on-the-way city and road modelling[END_REF]. Several systems have been proposed in order to perform these acquisitions. They mostly consist in optical cameras, 3D LiDAR sensor and GPS combined with Inertial Measurement Unit (IMU), built on a vehicle for mobility purposes [START_REF] Paparoditis | Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology[END_REF][START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF]. They provide multi-modal data that can be merged in several ways, such as LiDAR point clouds colored by optical images or LiDAR depth maps aligned with optical images. Although these systems lead to very complete 3D mapping of urban scenes by capturing optical and 3D details (pavements, walls, trees, etc.), providing billions of 3D points and RGB pixels per hour of acquisition, they often require further processing to suit their ultimate usage. For example, MMS tend to acquire mobile objects that are not persistent to the scene. This often happens in urban environments with objects such as cars, pedestrians, traffic cones, etc. As LiDAR sensors cannot penetrate opaque objects, those mobile objects cast shadows behind them where no point has been acquired (Figure 1, left). Therefore, merging optical data with the point cloud can be ambiguous as the point cloud might represent objects that are not present in the optical image. 
Moreover, these shadows are also largely visible when the point cloud is not viewed from the original acquisition point of view. This might end up being distracting and confusing for visualization. Thus, the segmentation of mobile objects and the reconstruction of their background remain strategic issues in order to improve the understanding of urban 3D scans. We argue that working on simplified representations of the point cloud enables specific problems such as disocclusion to be solved not only using traditional 3D techniques but also using techniques brought by other communities (image processing in our case). Exploiting the sensor topology also brings spatial structure into the point cloud that can be used for other applications such as segmentation, remeshing, colorization or registration. The main contribution of this paper is a novel methodology for point cloud processing by exploiting the implicit topology of various LiDAR sensors that can be used to infer a simplified representation of the LiDAR point cloud while bringing spatial structure between every points. The utility of such a methodology is here demonstrated by two applications. First, a fast segmentation technique for dense and sparse point clouds to extract full objects from the scene is presented (Figure 1, center). Then, we introduce a fast and efficient variational method for the disocclusion of a point cloud using the range image representation while taking advantage of a horizontal prior without any knowledge of the color or texture of the represented objects (Figure 1, right). This paper is an extension of [START_REF] Biasutti | Disocclusion of 3D LiDAR point clouds using range images[END_REF] with improved technical details of the methodology as well as a complete validation of the proposed applications and a discussion about its limitations. The paper is organized as follows: after a review on the state-of-the-art of the two application scenarios (Section 2), we detail how the topology of various sensors can be exploited to turn a regular LiDAR point cloud into a range image (Section 3). In Section 4, a point cloud segmentation model using range images is introduced with corresponding results and a validation on several datasets. Then, a disocclusion method for point clouds is presented in Section 5 as well as results and validation on various datasets. Finally conclusions are drawn and potential future work is identified in Section 6. Related Works The growing interest for MMS over the past decade has lead to many works and contributions for solving problems that could be tackled using range images. In this part, we present a state-of-the-art on both segmentation and disocclusion. Point cloud segmentation The problem of point cloud segmentation has been extensively addressed in the past years. Three types of methods have emerged: geometry-based techniques, statistical techniques and techniques based on simplified representations of the point cloud. Geometry-based segmentation. The first well-known method in this category is regiongrowing where the point cloud is segmented into various geometric shapes based on the neighboring area of each point [START_REF] Huang | Automatic data segmentation for geometric feature extraction from unorganized 3D coordinate points[END_REF]. Later, techniques that aim at fitting primitives (cones, spheres, planes, cubes ...) in the point cloud using RANSAC [START_REF] Schnabel | RANSAC based out-of-core point-cloud shape detection for city-modeling[END_REF] have been proposed. 
Others look for smooth surfaces [START_REF] Rabbani | Segmentation of point clouds using smoothness constraint[END_REF]. Although these methods do not need any prior about the number of objects, they often suffer from over-segmenting the scene resulting in objects segmented in several parts. Semantic segmentation. The methods in this category analyze the point cloud characteristics [START_REF] Demantke | Dimensionality based scale selection in 3D LiDAR point clouds[END_REF][START_REF] Weinmann | Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers[END_REF][START_REF] Landrieu | Comparison of belief propagation and graph-cut approaches for contextual classification of 3D LiDAR point cloud data[END_REF]. They analyze the geometric neighborhood of each point in order to perform a point-wise classification, possibly with spatial regularisation, which, in turn, yields a semantic segmentation. It leads to a good separation of points that belong to static and mobile objects, but not to the distinction between different objects of the same class. Simplified model for segmentation. MMS LiDAR point clouds typically represent massive amounts of unorganized data that are difficult to handle. Different segmentation approaches based on a simplified representation of the point cloud have been proposed. [START_REF] Papon | Voxel cloud connectivity segmentationsupervoxels for point clouds[END_REF] propose a method in which the point cloud is first turned into a set of voxels which are then merged using a variant of the SLIC algorithm for super-pixels in 2D images [START_REF] Achanta | SLIC superpixels compared to state-of-the-art superpixel methods[END_REF]. This representation leads to a fast segmentation but it might fail when the scale of the objects in the scene is too different. [START_REF] Gehrung | An approach to extract moving objects from MLS data using a volumetric background representation[END_REF] propose to extract moving objects from MLS data by using a probabilistic volumetric representation of the MLS data in order to cluster points between mobile objects and static objects. However this technique can only be used with 3D sensors. Another simplified model of the point cloud is presented by [START_REF] Zhu | Segmentation and classification of range image from an intelligent vehicle in urban environment[END_REF]. The authors take advantage of the implicit topology of the sensor to simplify the point cloud in order to segment it before performing classification. The segmentation is done through a graph-based method as the notion of neighborhood is easily computable on a 2D image. Although the provided segmentation algorithm is fast, it suffers from the same issues as geometry-based algorithms such as over-segmentation or incoherent segmentation. Finally, an approach for urban objects segmentation using elevation images is proposed in [START_REF] Serna | Detection, segmentation and classification of 3D urban objects using mathematical morphology and supervised learning[END_REF]. There, the point cloud is simplified by projecting its statistics onto a horizontal grid. Advanced morphological operators are then applied on the horizontal grid and objects are segmented using a watershed approach. Although this method provides good results, the overall precision of the segmentation is limited by the resolution of the projection grid and leads to the occurence of artifacts at object borders. 
Moreover, all those categories of segmentation techniques are not able to treat efficiently both dense and sparse LiDAR point clouds i.e. point clouds acquired with high or low sampling rates compared to the real-world feature sizes (e.g. macroscopic objects such as cars, pedestrians, etc.). For example, one sensor turn in the KITTI dataset [START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF] corresponds to 10 5 points (sparse) whereas for a scene of similar size in the Stereopolis-II dataset [START_REF] Paparoditis | Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology[END_REF], the scene contains more than 4 • 10 6 points (dense). In this paper, we present a novel simplified model for segmentation based on histograms of depth in range images by leveraging grid-like topology without suffering from accuracy loss that is often caused by projection/rasterization. Disocclusion Disocclusion of a scene has only been scarcely investigated for 3D point clouds [START_REF] Sharf | Context-based surface completion[END_REF][START_REF] Park | Shape and appearance repair for incomplete point surfaces[END_REF][START_REF] Becker | LiDAR inpainting from a single image[END_REF]. These methods generally work on complete point clouds (with homogeneous sampling) rather than LiDAR point clouds. This task, also referred to as inpainting, has been much more studied in the image processing community. Over the past decades, various approaches have emerged to solve the problem in different manners. Patch-based methods such as the one proposed by Criminisi et al. (2004) (and more recently [START_REF] Lorenzi | Inpainting strategies for reconstruction of missing data in VHR images[END_REF] and Buyssens et al. (2015b)) have proven their strengths. They have been extended for RGB-D images (Buyssens et al., 2015a) and to LiDAR point clouds [START_REF] Doria | Filling large holes in LiDAR data by inpainting depth gradients[END_REF] by considering an implicit topology in the point cloud. Variational approaches represent another type of inpainting algorithms [START_REF] Weickert | Anisotropic diffusion in image processing[END_REF][START_REF] Bertalmio | Image inpainting[END_REF][START_REF] Bredies | Total generalized variation[END_REF][START_REF] Chambolle | A first-order primal-dual algorithm for convex problems with applications to imaging[END_REF]. They have been extended to RGB-D images by taking advantage of the bi-modality of the data [START_REF] Ferstl | Image guided depth upsampling using anisotropic total generalized variation[END_REF][START_REF] Bevilacqua | Joint inpainting of depth and reflectance with visibility estimation[END_REF]. Even if the results of the disocclusion are quite satisfying, these models require the point cloud to have color information as well as the 3D data. In this work, we introduce an improvement to a variational disocclusion technique by taking advantage of a horizontal prior. Range images derived from the sensor topology In this paper, we demonstrate that a simplified model of the point cloud can be directly derived from it using the intrinsic topology of the sensing pattern during acquisition. This section introduces this sensor topology and how it can be exploited on various kinds of sensors. Examples of its usages are presented. Sensor topology Most of modern LiDAR sensors offer an intrinsic 2D topology in raw acquisitions. However, this feature is rarely considered in recent works. 
Namely, LiDAR points may obviously be ordered along scanlines, yielding the first dimension of the sensor topology, linking each LiDAR pulse to the immediately preceding and succeeding pulses within the same scanline. For most LiDAR devices, one can also order the consecutive scanlines. It amounts to considering a second dimension of the sensor topology across the scanlines as it can be seen in Figure 2. From sensor topology to range image The sensor topology often varies with the type of LiDAR sensor that is being used. 2D LiDAR sensors (i.e., featuring a single simultaneous scanline acquisition) such as the one used in [START_REF] Paparoditis | Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology[END_REF] generally send an almost constant number H of pulses per scanline (or per turn for 360 degree 2D LiDARs) where each pulse was emitted at a certain θ angle value. Therefore, any measurement of the sensor might be organized in an image of size W × H, where W is the number of consecutive scanlines and thus a temporal dimension. This is illustrated in Figure 3 in which one can see how the 2D image is spanned by the sensor topology. In this work, such images are only built using the range measurement as pixel intensity, later refered to as range images. Note that these range images differ from typical range images (Kinect, RGB-D) as the origin of acquisition is not the same for each pixel and the 3D directions of pixels are not regularly spaced along the image, but warped by the orientation changes of the sensor trajectory. 3D LiDAR sensors are based on multiple simultaneous scanline acquisitions (e.g. H = 64 fibers) such as in the MMS proposed in [START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF]. Again, each scanline contains the same number of points and each scanline may be stacked horizontally to form the same type of structure, as illustrated in Figure 4. Note that Figures 3 and4 are simplified for better understanding, but that realistic cases can be more chaotic as discussed later in this section. Whereas LiDAR pulses are emitted somewhat regularly, many pulses yield no range measurements due, for instance, to reflective surfaces, absorption or absence of target objects (e.g. in the sky direction) or an ignored measurement whenever the measure is too uncertain. Therefore the sensor topology is only a relevant approximation for emitted pulses but not for echo returns, such that the range image is sparse with undefined values where the sensor measured no echoes (or when further processing was performed on the acquisition, leading to the removal of points having a too incertain measurement). This is illustrated in Figure 5.b in which pulses with no echoes appear in dark. Note that considering multi-echo datasets as a multilayer depth image is beyond the scope of this paper, which only considers first returns. This 2D sensor topology encodes an implicit neighborhood between LiDAR measurement pulses. Whereas the implicit topology of pixels in optical images is supported by a regular geometry of rays (shared origin and regular grid of directions if geometric distortion is neglected), the proposed 2D sensor topology for LiDAR point clouds is supported by the trajectory-warped geometry of 3D rays. However, it readily provides, with minimal effort, an approximation of the immediate 3D point neighborhoods, especially if the sensor moves or turns slowly compared to its sensing rate. 
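As an illustration of how this implicit topology can be materialized, the short Python sketch below assembles a sparse range image from per-pulse records. The array layout, the argument names and the use of NaN for pulses without echo are our own choices, not a prescription of any particular acquisition format.

```python
import numpy as np

def build_range_image(scanline_idx, pulse_idx, ranges, n_scanlines, n_pulses):
    """Stack per-pulse range measurements into an (n_pulses x n_scanlines) image.
    scanline_idx : index of the scanline (turn or fiber) each echo belongs to
    pulse_idx    : position of the pulse within its scanline (e.g. from theta)
    ranges       : measured range of each echo
    Pixels that received no echo keep the value NaN."""
    u_R = np.full((n_pulses, n_scanlines), np.nan)
    u_R[pulse_idx, scanline_idx] = ranges
    return u_R

# toy example: 3 echoes spread over a 64 x 5 grid
scan = np.array([0, 0, 3])
pulse = np.array([10, 11, 40])
rng = np.array([12.3, 12.4, 55.0])
u_R = build_range_image(scan, pulse, rng, n_scanlines=5, n_pulses=64)
print(np.isfinite(u_R).sum())   # 3 defined pixels
```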
We argue however that this approximation is sufficient for most purposes, as it has the added advantage of providing pulse neighborhoods that are reasonably local both in terms of space and time, thus being robust to misregistrations, and being very efficient to handle (constant-time access to neighbors). Moreover, as LiDAR sensor designs evolve to higher sampling rates within and/or across scanlines, the sensor topology will better approximate spatio-temporal neighborhoods, even in the case of mobile acquisitions. We argue that most raw LiDAR datasets contain all the information (scanline ordering, pulses with no echo, number of points per turn...) to enable access to a well-defined implicit sensor topology. However, it sometimes occurs that the dataset received further processing (points were reordered or filtered, or pulses with no return were discarded), or that the sensor does not acquire neighbouring points consecutively. Therefore, the sensor topology may then only be approximated using auxiliary point attributes (time, θ, fiber id...) and guesses about acquisition settings (e.g. guessing approximate ∆time or ∆θ values between successive pulse emissions). Using this information, one can recreate the range map by stacking points even if some points were discarded. Defining a grid-like topology is a good approximation if the number of pulses per scanline/per turn is close to an integer constant with relatively stable rotation offsets between pulses. Interest and applications The use of range images as the simplified representation of a point cloud directly brings spatial structure to the point cloud. Therefore, retrieving the neighbors of a point, which was formerly done using advanced data structures [START_REF] Muja | Scalable nearest neighbor algorithms for high dimensional data[END_REF], is now a trivial operation and is given without any ambiguity. This has proved to be very useful in applications such as remeshing, since faces can be directly associated with the grid structure of the range image. As shown in this paper, considering a point cloud as a range image supported by its implicit sensor topology enables the adaptation of many existing image processing approaches to LiDAR point cloud processing (e.g. segmentation, disocclusion). Moreover, when optical data was acquired along with the LiDAR point clouds, the range image can be used to improve the point cloud colorization and the texture registration on the point cloud, as the silhouettes present in the range image are likely to be aligned with the gradients of the optical images. In the following sections, the LiDAR measurements, equipped with this implicit 2D topology, are denoted as the sparse range image u R . Application to point cloud segmentation In this section, a simple yet efficient segmentation technique that takes advantage of the range image will be introduced. Results will be presented and a quantitative analysis will be performed to validate the model. Range Histogram Segmentation technique We now propose a segmentation technique based on range histograms. For the sake of simplicity, we assume that the ground is relatively flat and we remove ground points, which are identified by plane fitting. Instead of segmenting the whole range image u R directly, we first split this image into S sub-windows u R s , s = 1 . . . S of size W s × H along the horizontal axis, to prevent each sub-window from representing several objects at the same range. For each u R s , a depth histogram h s of B bins is built (a minimal sketch of this windowing step is given below).
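The following Python fragment sketches the sub-window splitting and histogram construction described above. The function name and default choices (bin range, handling of missing echoes) are ours; the a-contrario segmentation of each histogram, described next, is not reimplemented here.

```python
import numpy as np

def window_histograms(u_R, W_s, B, overlap=0.0, d_max=None):
    """Split the sparse range image u_R (H x W, np.nan where no echo) into
    vertical sub-windows of width W_s and build a B-bin depth histogram for
    each one. Ground points are assumed to have been removed beforehand."""
    H, W = u_R.shape
    step = max(1, int(W_s * (1.0 - overlap)))
    d_max = np.nanmax(u_R) if d_max is None else d_max
    edges = np.linspace(0.0, d_max, B + 1)
    windows, hists = [], []
    for x0 in range(0, W - W_s + 1, step):
        depths = u_R[:, x0:x0 + W_s]
        depths = depths[np.isfinite(depths)]     # keep actual echoes only
        hists.append(np.histogram(depths, bins=edges)[0])
        windows.append((x0, x0 + W_s))
    return windows, np.array(hists), edges

# toy example: a 64 x 400 range image with random dropouts
rng = np.random.default_rng(0)
u_R = rng.uniform(2.0, 60.0, size=(64, 400))
u_R[rng.random(u_R.shape) < 0.3] = np.nan
wins, hists, edges = window_histograms(u_R, W_s=50, B=100)
print(len(wins), hists.shape)
```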
This histogram is automatically segmented into C_s classes using the a-contrario technique presented in [START_REF] Delon | A nonparametric approach for histogram segmentation[END_REF]. This technique has the advantage of segmenting a 1D histogram without any prior assumption on, e.g., the underlying density function or the number of objects. Moreover, it aims at segmenting the histogram following an accurate definition of an admissible segmentation, preventing over- and under-segmentation. An example of a segmented histogram is given in Figure 6. Once the histograms of successive sub-images have been segmented, we merge the corresponding classes by checking the distance between their centroids in order to obtain the final segmentation labels. Let us define the centroid c_s^i of the i-th class C_s^i in the histogram h_s of the sub-image u_R^s as follows:

c_s^i = ( Σ_{b ∈ C_s^i} b × h_s(b) ) / ( Σ_{b ∈ C_s^i} h_s(b) )        (1)

where b ranges over all bins belonging to class C_s^i. The distance between two classes C_s^i and C_r^j of two consecutive windows s and r can then be defined as:

d(C_s^i, C_r^j) = |c_s^i − c_r^j|        (2)

Finally, we set a threshold τ such that if d(C_s^i, C_r^j) ≤ τ, classes C_s^i and C_r^j are merged (i.e. they now share the same label). If two classes of the same window are eligible to be merged with a class of another window, only the one with the lower depth is merged. Results of this segmentation procedure can be found in the next subsection. The choice of W_s, B and τ mostly depends on the type of data being treated (sparse or dense). For sparse point clouds (a few thousand points per turn), B has to remain small (e.g. 50), whereas for dense point clouds (> 10^5 points per turn) this value can be increased (e.g. 200). In practice, we found that good segmentations may be obtained on various kinds of data by setting W_s = 0.5 × B and τ = 0.2 × B. Note that the windows are not required to overlap in most cases, but for very sparse point clouds an overlap of 10% is enough to achieve a good segmentation. For example, in our experiments on the KITTI dataset [START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF], for range images of size 2215 × 64 px, we used W_s = 50, B = 100 and τ = 20 with no overlap.

Results & Analysis

Figure 7 shows two examples of segmentations obtained using our method on different point clouds from the KITTI dataset [START_REF] Geiger | Vision meets robotics: The KITTI dataset[END_REF]. Each object, of different scale, is correctly distinguished from all others as an individual entity. Moreover, both results appear to be visually plausible. Apart from the visual inspection, we also performed a quantitative analysis on the IQmulus dataset [START_REF] Vallet | TerraMobilita/IQmulus urban point cloud analysis benchmark[END_REF]. The IQmulus dataset consists of a manually annotated point cloud of 12 million points in which points are clustered into several classes corresponding to typical urban entities (cars, walls, pedestrians, etc.). Our aim is to compare the quality of our segmentation of several objects to the ground truth provided by this dataset. First, the point cloud is segmented using our technique, with 100 px wide windows, a 10 px overlap and a merging threshold set to 50. After that, we manually select the labels that correspond to the wanted objects (hereafter: cars).
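Before turning to this comparison, the sub-window procedure described above (per-window depth histograms, histogram splitting, and centroid-based merging of Eqs. (1) and (2)) can be sketched in code. This is a simplified illustration, not the exact implementation: the a-contrario histogram segmentation of Delon et al. is replaced by a naive split at empty bins, the windows do not overlap, and the "lower depth wins" tie-breaking rule is omitted.

```python
import numpy as np

def segment_range_image(u_R, Ws, B, tau):
    """Sub-window depth-histogram segmentation with centroid-based merging.

    u_R : sparse range image (NaN where no echo), ground points already removed
    Ws  : sub-window width in pixels; B : number of histogram bins
    tau : merging threshold expressed in bins (Eq. 2)
    Returns an integer label image (-1 where no label was assigned).
    """
    H, W = u_R.shape
    labels = np.full((H, W), -1, dtype=int)
    edges = np.linspace(np.nanmin(u_R), np.nanmax(u_R), B + 1)  # global bin edges
    next_label, prev_classes = 0, []   # prev_classes: list of (centroid, label)

    for s in range(0, W, Ws):
        win = u_R[:, s:s + Ws]
        valid = ~np.isnan(win)
        bin_img = np.clip(np.digitize(np.nan_to_num(win), edges) - 1, 0, B - 1)
        hist = np.bincount(bin_img[valid], minlength=B)

        # Stand-in for the a-contrario histogram segmentation:
        # classes are simply maximal runs of non-empty bins.
        classes, run = [], []
        for b in range(B):
            if hist[b] > 0:
                run.append(b)
            elif run:
                classes.append(run)
                run = []
        if run:
            classes.append(run)

        cur_classes = []
        for cls in classes:
            c = sum(b * hist[b] for b in cls) / sum(hist[b] for b in cls)   # Eq. (1)
            close = [(abs(c - pc), lab) for pc, lab in prev_classes if abs(c - pc) <= tau]
            lab = min(close)[1] if close else next_label                    # Eq. (2)
            if not close:
                next_label += 1
            cur_classes.append((c, lab))
            in_cls = valid & np.isin(bin_img, cls)
            labels[:, s:s + Ws][in_cls] = lab
        prev_classes = cur_classes
    return labels
```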
We then compare the result of the segmentation to the ground truth in the same area, and compute the Jaccard index (Intersection over Union) between our result and the ground truth. Figure 8 presents the result of such a comparison. The overall score shows that the segmentation matches 97.09% of the ground truth, for a total of 59021 points. Although the result is very satisfying, it differs in some ways from the ground truth. Indeed, in the first zoom of Figure 8, one can see that our model better succeeds in catching the points of the cars that are close to the ground (we recall here that the ground truth on IQmulus was manually labelled and is thus subject to errors). In the second zoomed-in part, one can see that points belonging to the windows of the car were not correctly retrieved by our model. This is because the measurement is not reliable in areas where the beam was highly deviated (e.g. beams that were not reflected back along their emission direction), so the range estimation is not realistic. Therefore our model fails in areas where the estimated 3D point is not close to the actual 3D surface. Note that a similar case appears for the rear-view mirror (Figure 8, on the left), which is made of a specular material that leads to bad measurements. In some extreme cases, the segmentation is not able to separate objects that are too close from the sensor's point of view. Figure 9.a shows a segmentation result in a scene where two cars are segmented with the same label (symbolized by the same color). In order to better distinguish the different objects, one can simply compute the connected components of the points with respect to their 3D neighborhood (which can be obtained using K-NN, for example). Figure 9.b shows the result of such post-processing on the same two cars. We can notice how both cars are distinguished from one another.

Application to disocclusion

In this section, we show that the problem of disocclusion in a 3D point cloud can be addressed using basic image inpainting techniques.

Range map disocclusion technique

The segmentation technique introduced above provides labels that can be manually selected in order to build masks. As mentioned in the beginning, we propose a variational approach to the disocclusion of the point cloud that leverages its range image representation. By considering the range image representation of the point cloud rather than the point cloud itself, the problem of disocclusion can be reduced to the estimation of a set of 1D ranges instead of a set of 3D points, where each range is associated with the ray direction of the pulse. Gaussian diffusion provides a very simple algorithm for the disocclusion of objects in 2D images by solving a partial differential equation. This technique is defined as follows:

∂u/∂t − Δu = 0        in (0, T) × Ω
u(t = 0, x, y) = u_R(x, y)        in Ω        (3)

where u is an image defined on Ω, t is a time variable and Δ is the Laplacian operator. As the diffusion is performed in every direction, the result of this algorithm is often very smooth. Therefore, the result in 3D lacks coherence, as shown in Figure 10.b. In this work, we rely on the prior that the structures requiring disocclusion are likely to evolve smoothly along the x_W and y_W axes of the real world, as defined in Figure 11.a. Therefore, for each pixel we set η to be a unit vector orthogonal to the projection of z_W in the range image u_R (Figure 11.b).
This vector defines the direction in which the diffusion should be performed to respect this prior. Note that most MLS systems provide georeferenced coordinates of each point that can be used to define η. For example, using a 2D LiDAR sensor that is orthogonal to the path of the vehicle, one can define η from the projection of the pitch angle of the acquisition vehicle. We aim at extending the level lines of u along η. This can be expressed as ⟨∇u, η⟩ = 0. Therefore, we define the energy F(u) = ½ ⟨∇u, η⟩². The disocclusion is then computed as a solution of the minimization problem inf_u F(u). The gradient of this energy is given by ∇F(u) = −⟨(∇²u) η, η⟩ = −u_ηη, where u_ηη stands for the second-order derivative of u with respect to η and ∇²u for the Hessian matrix. The minimization of F can be done by gradient descent. If we cast it into a continuous framework, we end up with the following equation to solve our disocclusion problem:

∂u/∂t − u_ηη = 0        in (0, T) × Ω
u(t = 0, x, y) = u_R(x, y)        in Ω        (4)

using the notations introduced earlier. We recall that Δu = u_ηη + u_{η⊥η⊥}, where η⊥ stands for a unit vector orthogonal to η. Thus, Equation (4) can be seen as an adaptation of the Gaussian diffusion equation (3) with respect to the diffusion prior in the direction η. Figure 10 shows a comparison between the original Gaussian diffusion algorithm and our modification. The Gaussian diffusion leads to an over-smoothing of the scene, creating an aberrant surface, whereas our modification provides a result that is more plausible. The equation proposed in (4) can be solved iteratively. The number of iterations simply depends on the size of the area that needs to be filled in.

Results & Analysis

In this part, the results of the segmentation of various objects and the disocclusion of their background are detailed. Sparse point cloud. A first result is shown in Figure 12. This result is obtained for a sparse point cloud (≈ 10^5 points) of the KITTI database. A pedestrian is segmented out of the scene using our proposed segmentation technique (with the parameters introduced in Section 4.1) and a manual selection of the corresponding label. This is used as a mask for the disocclusion of its background using our modified variational disocclusion technique. Figure 12.a shows the original range image. In Figure 12.b, the dark region corresponds to the result of the segmentation step for the pedestrian. For practical purposes, a very small dilation is applied to the mask (radius of 2 px in the sensor topology) to ensure that no outlier points (near the occluder's silhouette with low accuracy, or on the occluder itself) bias the reconstruction. Finally, Figure 12.c shows the range image after the reconstruction. We can see that the disocclusion performs very well, as the pedestrian has completely disappeared and the result is visually plausible in the range image. Notice how the implicit sensor topology of the range image has allowed us to use here a standard 2D image processing technique from mathematical morphology to filter mislabelled and inaccurate points near silhouettes. In this scene, η has a direction that is very close to the x axis of the range image and the 3D point cloud is acquired using a 3D LiDAR sensor. Therefore, the coherence of the reconstruction can be checked by looking at how the acquisition lines are connected. Figure 13 shows the reconstruction of the same scene in three dimensions.
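For concreteness, the explicit iterative scheme used to solve Eq. (4) can be sketched as follows. The helper name, the finite-difference stencils, the border handling and the initialisation of the hole are illustrative choices for this sketch rather than the authors' exact Matlab implementation.

```python
import numpy as np

def directional_inpaint(u_R, mask, eta, n_iter=1500, dt=0.2):
    """Explicit scheme for Eq. (4): diffuse the range only along eta inside the mask.

    u_R  : range image with the occluder removed; values inside `mask` are overwritten
    mask : boolean image, True where the range has to be reconstructed
    eta  : array of shape (H, W, 2) with the per-pixel unit direction (ex, ey)
    Assumes no-echo (NaN) pixels outside the mask have been filled beforehand.
    """
    u = u_R.copy()
    u[mask] = np.mean(u[~mask])                 # crude initialisation of the hole
    ex, ey = eta[..., 0], eta[..., 1]
    for _ in range(n_iter):
        # centred finite differences (borders handled by wrap-around for brevity)
        u_xx = np.roll(u, -1, 1) - 2.0 * u + np.roll(u, 1, 1)
        u_yy = np.roll(u, -1, 0) - 2.0 * u + np.roll(u, 1, 0)
        u_xy = 0.25 * (np.roll(np.roll(u, -1, 0), -1, 1) - np.roll(np.roll(u, -1, 0), 1, 1)
                       - np.roll(np.roll(u, 1, 0), -1, 1) + np.roll(np.roll(u, 1, 0), 1, 1))
        u_etaeta = ex ** 2 * u_xx + 2.0 * ex * ey * u_xy + ey ** 2 * u_yy
        u[mask] += dt * u_etaeta[mask]          # gradient step only inside the hole
    return u
```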
The 3D reconstruction in Figure 13 simply consists in projecting the depth of each pixel along the axis formed by the corresponding point and the sensor origin. We can see that the acquisition lines are properly retrieved after removing the pedestrian. This result was generated in 4.9 seconds using Matlab on a 2.7 GHz processor. Note that a similar analysis can be done on the results presented in Figure 1. Dense point cloud. In this work, we aim at presenting a model that performs well on both sparse and dense data. Figure 14 shows a result of the disocclusion of a car in a dense point cloud. This point cloud was acquired using the Stereopolis-II system [START_REF] Paparoditis | Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology[END_REF] and contains over 4.9 million points. In Figure 14.a, the original point cloud is displayed with colors based on the reflectance of the points for a better understanding of the scene. Figure 14.b highlights the segmentation of the car using our model (with the same parameters as in Section 4.2), dilated to prevent aberrant points. Finally, Figure 14.c depicts the result of the disocclusion of the car using our method. We can note that the car is perfectly removed from the scene. It is replaced by the ground, which could not have been measured during the acquisition. Although the reconstruction is satisfying, some gaps are left in the point cloud. Indeed, in the data used for this example, pulses returned with large deviation values were discarded. Therefore, the windows and the roof of the car are not present in the point cloud before or after the reconstruction, as no data is available. We could have added these no-return pulses to the inpainting mask to reconstruct these holes as well. Quantitative analysis. To conclude this section, we perform a quantitative analysis of our disocclusion model on the KITTI dataset. The experiment consists in removing areas of various point clouds in order to reconstruct them using our model; the original point clouds can therefore serve as ground truth. Note that areas are removed while taking care that no objects are present in those locations. Indeed, this test aims at showing how the disocclusion step behaves when reconstructing the backgrounds of objects. The size of the removed areas corresponds to an approximation of a pedestrian's size at 8 meters from the sensor in the range image (20 × 20 px). The test was done on 20 point clouds in which an area was manually removed and then reconstructed. After that, we computed the MAE (Mean Absolute Error) between the ground truth and the reconstruction (where the occlusion was simulated) using both the Gaussian disocclusion and our model. We recall that the MAE is expressed as follows:

MAE(u_1, u_2) = (1/N) Σ_{(i,j) ∈ Ω} |u_1(i, j) − u_2(i, j)|        (5)

where u_1, u_2 are images defined on Ω with N pixels, and each pixel intensity represents the depth value. Figure 15 shows an example of disocclusion following this protocol: the result of our proposed model is visually very plausible, whereas the Gaussian diffusion ends up oversmoothing the reconstructed range image, which increases the MAE. Table 1 sums up the result of our experiment. We can note that our method provides a great improvement compared to the Gaussian disocclusion, with an average MAE lower than 3 cm. These results are obtained on scenes where objects are located from 12 to 25 meters away from the sensor. The result obtained with our method is very close to the sensor accuracy stated by the manufacturer (about 2 cm). Overlapping objects. Although the proposed disocclusion method performs well in realistic scenarios, as demonstrated above, in some specific contexts the reconstruction quality can be debatable. Indeed, when two small objects (pedestrians, poles, cars, etc.)
overlap in front of the 3D sensor (i.e. one object is in front of the other), the disocclusion of the closest object may not fully recover the farthest object. Figure 16.a shows an example of such a scenario, where the goal is to remove the cyclist (highlighted in green). In this case, a pole (Figure 16.a, in orange) is situated between the cyclist and the background. Figure 16.b presents the disocclusion of the cyclist: the background is reconstructed in a plausible way; however, details of the occluded part of the pole are not recovered.

Conclusion

In this paper, we have proposed a novel methodology for LiDAR point cloud processing that relies on the implicit topology provided by most recent LiDAR sensors. Considering the range image derived from the sensor topology has enabled a simplified formulation of the problem, from having to determine an unknown number of 3D points to estimating only the 1D range along the ray directions of a fixed set of range image pixels. Beyond drastically simplifying the search space, it directly provides a reasonable sampling pattern for the reconstructed point set. Moreover, it also directly provides a robust estimate of the neighborhood of each point according to the acquisition, while improving computational time and memory usage. To highlight the relevance of this methodology, we have proposed novel approaches for the segmentation and the disocclusion of objects in 3D point clouds acquired using MMS. These models take advantage of range images. We have also proposed an improvement of a classical imaging technique that takes the nature of the point cloud into account (horizontality prior on the 3D embedding), leading to better results. The segmentation step can be done online any time a new window is acquired, leading to a great speed improvement, constant memory requirements and the possibility of online processing during the acquisition. Moreover, our model is designed to work semi-automatically with very few parameters in reasonable computational time. We have validated both the segmentation and the disocclusion methods by visual inspection as well as quantitative analysis against ground truth, and we have demonstrated their effectiveness in terms of accuracy. In the future, we will focus on extending the methodology to other point cloud processing tasks such as LiDAR point cloud colorization / registration using range images and optical images through variational models.

Acknowledgement

J-F. Aujol is a member of Institut Universitaire de France. This work was funded by the ANR GOTMI (ANR-16-CE33-0010-01) grant. We would like to thank the anonymous reviewer for his/her useful comments.

Figure 1: Result of the segmentation and the disocclusion of a pedestrian in a point cloud using range images. (left) original point cloud, (center) segmentation using the range image, (right) disocclusion using the range image. The pedestrian is correctly segmented and its background is then reconstructed in a plausible way.

Figure 2: Example of the intrinsic topology of a 2D LiDAR sensor built on a plane.

Figure 5: Example of a point cloud from the KITTI database (Geiger et al., 2013) (a) turned into a range image (b). Note that the dark area in (b) corresponds to pulses with no returns.

Figure 6: Result of the histogram segmentation using the approach of Delon et al. (2007). (a) segmented histogram (bins of 50 cm), (b) result in the range image using the same colors. We can see how well the segmentation follows the different modes of the histogram.

Figure 7: Example of point cloud segmentation using our model on various scenes.
We can note how each label strictly corresponds to a single object (pedestrians, poles, walls).

Figure 8: Quantitative analysis of the segmentation of cars. Our segmentation result only slightly differs from the ground truth in areas close to the ground or for points that were largely deviated, such as points seen through windows.

Figure 9: Result of the segmentation of a point cloud where two objects end up with the same label (a), and the labeling after considering the connected components (b).

Figure 10: Comparison between disocclusion algorithms. (a) is the original point cloud (white points belong to the object to be disoccluded), (b) the result after Gaussian diffusion and (c) the result with our proposed algorithm (1500 iterations). Note that the Gaussian diffusion oversmoothes the background of the object whereas our proposed model respects the coherence of the scene.

Figure 11: (a) is the definition of the different frames between the LiDAR sensor (x_L, y_L, z_L) and the real world (x_W, y_W, z_W), (b) is the definition and the visualization of η.

Figure 12: Result of disocclusion on a pedestrian on the KITTI database (Geiger et al., 2013). (a) is the original range image, (b) the segmented pedestrian (dark), (c) the final disocclusion. Depth scale is given in meters. After disocclusion, the pedestrian completely disappears from the image, and its background is reconstructed in accordance with the rest of the scene.

Figure 13: 3D representation of the disocclusion of the pedestrian presented in Figure 12. (a) is the original mask highlighted in 3D, (b) is the final reconstruction.

Figure 14: Result of the disocclusion on a car in a dense point cloud. (a) is the original point cloud colorized with the reflectance, (b) is the segmentation of the car highlighted in orange, (c) is the result of the disocclusion. The car is entirely removed and the road is correctly reconstructed.

Figure 15: Example of results obtained for the quantitative experiment. (a) is the original point cloud (ground truth), (b) the artificial occlusion in dark, (c) the disocclusion result with the Gaussian diffusion, (d) the disocclusion using our method, (e) the Absolute Difference of the ground truth against the Gaussian diffusion, (f) the Absolute Difference of the ground truth against our method. Scales are given in meters.

Figure 16: Example of a scene where two objects overlap in the acquisition. (a) is the original point cloud colored with depth towards the sensor, with the missing part of a pole highlighted with a dashed pink contour, (b) shows the two objects that overlap: a pole (highlighted in orange) and a cyclist (highlighted in green), (c) shows the disocclusion of the cyclist. Although the background is reconstructed in a plausible way, details of the occluded part of the pole are missing.
Table 1: Comparison of the average MAE (Mean Absolute Error) on the reconstruction of occluded areas.

                                 Gaussian    Proposed model
Average MAE (meters)             0.591       0.0279
Standard deviation of MAEs       0.143       0.0232
Stem cell technology is undergoing rapid development owing to its high potential in versatile therapeutic applications. Patent protection is a vital factor affecting the development and commercial success of life sciences inventions; yet human stem cell-based inventions have been encountering significant restrictions, particularly from the perspective of patentable subject matter. This article looks into the patentability limits and unique challenges for human stem cell-based patents in four regions: Europe, the United States, China and Japan. We will also provide suggestions for addressing the emerging issues in each region.

Introduction

Stem cell technology is an eye-catching and fast-growing research area with huge therapeutic potential. Human stem cells in particular are the focus of much interest in research, and much hope and hype surrounds their clinical therapy potential. However, ethical considerations and regulatory restrictions over human stem cells have framed the development of regenerative medicine and drug discovery. While the legal protection of life sciences inventions via patents has been recognized for years, it continuously raises challenges worldwide. This is especially true regarding patents based on human stem cells, particularly human Embryonic Stem Cells (hESCs). For instance, recent court decisions regarding the restriction of patents based on hESCs in Europe and the exclusion of natural products in the United States have undoubtedly hampered patent protection for many human stem cell products. On the other hand, methods for the treatment or diagnosis of diseases practiced on humans are generally not patentable in most major jurisdictions. While it is difficult to have a clear idea of how the patentability limits are affecting the human stem cell markets, it is obvious that developers of stem cell-based products or processes, whether they be academics or companies, have to adapt their research and commercial strategies to the scopes of patentability. Against the background of in-force patent laws, rules, regulations and relevant court decisions, this paper looks into the emerging issues in the patentability of inventions based on human stem cells in four regions: Europe, the United States (U.S.), China and Japan, which account for the majority of patent applications (World Intellectual Property Organization, 2015, p. 23). However, this article does not cover the patentability of human stem cells under the Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement (World Trade Organization, 1994), even though the U.S., China, Japan, all Member States of the European Union (EU) as well as the EU itself are contracting parties to the TRIPS agreement. It should be noted that, although each region has its own patent system imposing various patentability requirements, most countries build their patentability framework around five limbs of patentability, namely patentable subject matter, novelty, inventiveness, written description and enablement. This article not only discusses these requirements and their standards in each of the four jurisdictions as far as stem cells are concerned. More importantly, the authors aim to provide an overview of the patent framework and exclusions of patentability in the four jurisdictions, and highlight the unique challenges for human stem cell-based patents in each region. We will also provide suggestions for overcoming these obstacles and adapting to the changing landscape.
Last but not least, specific aspects of the four patent systems are compared to provide a glimpse of the global protection available for stem cell inventions.

I - Europe

A) Background

In Europe, patent law is relatively uniform although, in addition to each State, two different organizations regulate the field: the European Patent Organization (EPO) and the European Union (EU). The European Patent Organization covers 28 Member States of the EU and 10 non-EU countries. 1 It is based on the European Patent Convention (EPC), a multilateral Treaty signed in Munich on October 5, 1973. The EPO is mainly in charge of granting European patents. Its organizational structure notably includes 28 independent Technical Boards of Appeal that can refer to the Enlarged Board of Appeal to ensure a uniform application of the law. A European patent confers protection in all the contracting States designated by the applicant, as long as it has been validated by their national patent offices. The European Union regulates stem cell patents on the basis of the Directive on the legal protection of biotechnological inventions of July 6, 1998 (European Parliament and Council, 1998). The Directive harmonizes national patent laws: it has been transposed in every EU Member State and is applied by their national patent offices. Contrary to the EPO, which was established to regulate patents only, the EU's competency goes beyond patentability. Thus, it has not had a specific court for patent disputes, and these matters have fallen under the remit of the general Court of Justice of the European Union. However, in 2012, the Member States of the EU (except Spain, Poland and Croatia) decided to establish an enhanced cooperation and to adopt the so-called "patent package". It includes a regulation creating a European patent with unitary effect (hereafter the "unitary patent") (European Parliament and Council, 2012), a regulation on the language regime applicable to the unitary patent (European Council, 2012), as well as an agreement between the EU countries to set up a specialized Unified Patent Court (European Council, 2013). This "patent package" will enter into force once ratified by any thirteen Member States including France, Germany and the United Kingdom (UK). 2 However, following the UK's vote to exit the EU, new delays can at the very least be expected for such entry into force (Grubb et al., 2016; Jaeger, 2017). Besides the already existing national patents (regulated by national laws that have been harmonized by the European Directive 98/44/EC) and the classical European patents (regulated by the EPC), the unitary patent will be a third option. Granted by the EPO under the provisions of the EPC, a unitary patent will be a European patent to which a unitary effect for the territories of the participating States will be given, at the patentee's request, after grant. Finally, it should be highlighted that even though the EPO and the EU are two distinct organizations, the contracting States of the former decided to incorporate the Directive 98/44/EC as secondary legislation into the Implementing Regulations to the EPC. This directive has been used as a supplementary means of interpretation of the EPC since 1999. Thus, there is a trend towards a global "uniformization" of European patent laws in the field of biotechnological inventions, although a European patent relies on the EPC framework and a national patent relies on both the Directive 98/44/EC and the national law implementing it.
While the articulation between the different kinds of patents, notably with the future unitary patent, and patent laws in Europe is raising much concern (Kaesling, 2013; Kaisi, 2014; Mahne, 2012; Pila and Wadlow, 2014; Plomer, 2015), issues going beyond the patentability of inventions based on human stem cells are not considered in this article. In Europe, there are four basic criteria for patentability. 3 First, there must be an invention belonging to any field of technology that has both a technical and a concrete character. Second, the invention must be susceptible of industrial application, i.e. it can be made or used in any kind of industry as any physical activity of "technical character". Third, it must be new, not forming part of the state of the art. In the absence of a grace period 4 , the invention must not have been made available to the public before the date of filing of the patent application. Fourth, it must involve an inventive step, such that it is not obvious to a person skilled in the art. Finally, the "sufficient disclosure" requirement implies that the full scope of a claim must be adequately enabled by disclosing methods of practicing the invention in the specification (Marty et al., 2014). It should be highlighted that there is no clear distinction between "enablement" and "clear written description", but both account for patentability in Europe (Schuster, 2007).

B) The exclusions of patentability

In Europe, two types of exclusions can be distinguished: "moral exclusions" and "ineligible subject matters".

Moral exclusions

European patent law provides "moral exceptions" (Min, 2012) from patentability; that is, patents are not granted where the commercial exploitation of the inventions is contrary to ordre public or morality, beyond the straightforward prohibition by law or regulation. 5 Regarding the exclusions specific to the patentability of biotechnological inventions, the wording of the EPC and of the Directive 98/44/EC is the same. On one hand, "the human body, at the various stages of its formation and development, and the simple discovery of one of its elements, including the sequence or partial sequence of a gene, cannot constitute patentable inventions." 6 However, it is specified that "an element isolated from the human body or otherwise produced by means of a technical process, 7 including the sequence or partial sequence of a gene, 8 may constitute a patentable invention, even if the structure of that element is identical to that of a natural element." 9 Similarly, a microbiological or other technical process, or a product obtained by means of such a process, can be patented. 10 On the other hand, European patent law provides a non-exhaustive list of inventions of which the commercial exploitation is contrary to "ordre public" or morality: processes for cloning human beings; processes for modifying the germ line genetic identity of human beings; and uses of human embryos for industrial or commercial purposes. 11

Ineligible subject matters

Article 52 (2) of the EPC provides general exclusions that are not specific to biotechnological inventions: (a) discoveries, scientific theories and mathematical methods; (b) aesthetic creations; (c) schemes, rules and methods for performing mental acts, playing games or doing business, and programs for computers; (d) presentations of information.
Moreover, and apart from plant or animal varieties, which are not covered in this article, European patent law also excludes "methods for treatment of the human or animal body by surgery or therapy and diagnostic methods practiced on the human or animal body"; this provision shall not apply to products, in particular substances or compositions, for use in any of these methods. 12

C) Main challenges for stem cells patents

Exclusion of the uses of human embryos for industrial or commercial purposes

In Europe, the main challenge to stem cell patents is related to embryonic stem cells and to the moral exclusions provided both by the Directive 98/44/EC and the EPC, especially the exclusion of uses of human embryos for industrial or commercial purposes. As interpreted by the EPO and the Court of Justice of the European Union, this provision excludes from patentability inventions using human embryonic stem cells (hESCs) obtained either by de novo destruction of human embryos, or from publicly available hESC lines initially derived by a process destroying the human embryo (Mahalatchimy et al., 2015a; Mahalatchimy et al., 2015b). First, on November 25, 2008, in the Wisconsin Alumni Research Foundation case, the Enlarged Board of the European Patent Office decided that European patents with "claims directed to products which, as described in the application, could be prepared, at the filing date, exclusively by a method which necessarily involved the destruction of human embryos" are prohibited. 13 Second, the Court of Justice of the European Union went a step further in the exclusion with the Brüstle v Greenpeace eV (hereafter the Brüstle) case. 14 On the one hand, it provided a wide definition of the human embryo: any human ovum after fertilization, and any non-fertilized human ovum into which the cell nucleus from a mature human cell has been transplanted or whose division and further development have been stimulated by parthenogenesis. However, the Court of Justice of the European Union recently revisited its definition of the human embryo to allow the patentability of inventions using embryonic stem cells made from parthenotes (Baeyens and Goffin, 2015; Bonadio and Rovati, 2015; Kirwin, 2015; Mansnérus, 2015; Stazi, 2015). Henceforth, "an unfertilized human ovum whose division and further development have been stimulated by parthenogenesis does not constitute a 'human embryo', (…) if, in the light of current scientific knowledge, it does not, in itself, have the inherent capacity of developing into a human being, this being a matter for the national court to determine." 15 (Faeh, 2015; Ribbons and Lynch, 2014) On the other hand, the Brüstle case gave an extensive interpretation of the exclusion of uses of human embryos for commercial or industrial purposes: an invention is excluded from patentability where it involves the prior destruction of human embryos or their use as base material. This applies whatever the stage at which such destruction takes place, and even if the claim does not refer to the use of human embryos. The Brüstle case has been widely commented on, both by scientists (Wilmut, 2011; Koch et al, 2011; Vrtovec and Scott, 2011) and by lawyers (Bonadio, 2012; Davies and Denoon, 2011; Plomer, 2012).
Third, on February 4, 2014, in the Technion Research and Development Foundation case, the European Patent Office followed the Brüstle case, excluding from patentability inventions using hESCs obtained by destruction of human embryos, whenever such destruction takes place.

Exclusion of surgery, therapy and diagnostic methods

Even though methods for the treatment of the human or animal body by surgery or therapy and diagnostic methods practiced on the human or animal body are excluded from patentability in accordance with the EPC, such exclusion has been limited (Ventose, 2010). First, it does not cover products used in such methods (European Patent Office, 2015). Second, clarifications have been provided regarding the interpretations to be given to "treatments by surgery" and to treatment and diagnostic methods. For surgery, the EPO clarified that "treatments by surgery" are not confined to surgical methods pursuing a therapeutic purpose. 16 Consequently, beyond surgical treatment for therapeutic purposes, methods of treatment by surgery for embryo transfer or for cosmetic purposes are also excluded from patentability. As for treatment and diagnostic methods, exclusions are generally limited to those that are carried out on living human (or animal) bodies. Thus, where these treatment or diagnostic methods are carried out on dead bodies, they are not excluded from patentability. Similarly, they are not excluded from patentability if they are carried out in vitro, i.e. on tissues or fluids that have been removed from living bodies, as long as these are not returned to the same body. 17 Further, as in China and Japan, the aim of the methods is determinative of their patent eligibility. Methods of treatment of living human beings or animals, such as purely cosmetic treatment of a human by administration of a chemical product 18 or methods of measuring or recording characteristics of the human or animal body, are patentable where they are of a technical and not essentially biological character. 19
On the basis of such wide interpretation, it may be easier to obtain a patent for an invention based on hESCs than in other countries having a stricter interpretation of the Brüstle case as long as the Court of Justice of the European Union has not clarified such question. Second, the International Stem Cell Corporation case law 22 of the Court of Justice of the European Union which is generally seen as a clarification of European patent law on ESCs (Moore and Wells, 2015) is nevertheless questionable (Norberg and Minseen, 2016): should it be considered as an exception to the Brüstle case or as an opening to a wider reversal of jurisprudence of the Court of Justice of the European Union? Indeed, parthenotes are not considered anymore to be human embryos and consequently not excluded as long as they do not have the inherent capacity of developing into a human being. However, one can expect other techniques (that were previously interpreted as given rise to human embryos in the Brüstle case) could be claimed as not having the inherent capacity of developing into a human being. For instance, it could be considered that hESCs obtained by somatic cell nuclear transfer would need to be implanted in utero (although it is forbidden in accordance with the prohibition of human cloning) to have the capacity to develop into a human being. The same could be claimed for induced pluripotent stem cells (iPSCs) even though they have not been mentioned in these cases of the European Courts and they have not been included in the wide definition of human embryos. It seems the International Stem Cell Corporation case law should be seen as a clarification that narrows the extent of the exclusion from patentability of the uses of human embryos for industrial or commercial purposes. Indeed, the European Commission has been recommended to take no further action for clarification following the recent jurisprudence by most of the members of the Expert Group on the development and implications of patent law in the field of biotechnology and genetic engineering (Expert Group on the development and implications of patent law in the field of biotechnology and genetic engineering of the European Commission, 2016). Beyond hESCs, patents could be obtained on products and manufacturing methods based on allogeneic stem cells. However, for autologous cell-based regenerative therapies that are based on the patient's own cells (the donor of the cells is also the recipient of the final product made from these cells), it is not the product that is manufactured at the industrial scale (the product is patient specific as autologous); it can only be the manufacturing process industrially applicable. Moreover, treatment processes that generally occur by surgery in the field of cell therapy are excluded from patentability under the exclusion of surgery, therapy and diagnostic methods. Consequently, treatment methods based on stem cells would not be patentable. Thus, although European patent law is relatively uniform, national divergences remain in the interpretation of the most recent jurisprudence on stem cells and new issues need to be solved regarding the future unitary patent, especially on the implementation of the moral exclusions by both the Court of Justice of the European Union and Unified Patent Court (Aerts, 2014; McMahon, 2017). Inventiveness The inventive step criterion might be a hurdle in stem cell patenting. Inventions are opposed to mere discoveries that are not patentable. 
Indeed discoveries as such have no technical effect and are therefore not inventions in accordance with European patent laws. 23 Moreover, it should be noted that in Europe, secondary indicators such as an unexpected technical effect or a long-felt need may be regarded as indications of inventiveness. However, commercial success alone is not to be regarded as an indication of inventive step, except when coupled with evidence of a long-felt want provided that "the success derives from the technical features of the invention and not from other influences (e.g. selling techniques or advertising)". 24 The author will discuss this requirement in details in the following U.S. session for its seemingly higher standard than the other three regions. II -United States A) Background The United States Patent and Trademark Office (USPTO) is responsible for examining and granting patents under the U.S. Patent Law (Title 35 of United States Code, 35 U.S.C.) and Rules (Title 37 of Code of Federal Regulations, 37 C.F.R.). A utility patent is granted for any "new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof". 25 An invention must be useful 26 , new 27 and non-obvious 28 to be patentable in the United States. Moreover, the invention must have a specific and substantial utility that one skilled in the art would find credible. 29 Further, the statute mandates that an invention shall be fully described and enabled by the specification; 30 and that the specification must disclose the best mode of making and practicing the invention as the inventors contemplate, although it is not necessary to point out the best mode embodiment. 31,[START_REF]Manual of Patent Examining Procedure (MPEP) §2165.01[END_REF] Notably, the written description requirement is separate and distinct from the enablement requirement. The former requires that an invention shall be adequately described to show possession of the invention whereas the enablement requirement is satisfied where a person skilled in the art can make and use the claimed invention without undue experimentation. More specifically, the specification should disclose at least one method for making and using the claimed invention. [START_REF]Manual of Patent Examining Procedure (MPEP) §2164[END_REF] Unlike the mainstream absolute novelty standard, the U.S. offers a one-year grace period for inventors to file a patent application after disclosure of the invention. 34 B) The exclusions of patentability One notable feature which imparts a significant difference between U.S. and the other three regions is that U.S. patent law does not impose any moral consideration in determining whether a subject matter should be excluded from patent protection. Laws of nature, natural phenomena and abstract ideas have been historically excluded from the U.S. patent protection (Lesser, 2016). 35 Living organisms including animals, plants and microorganisms are all patent-eligible 36 but patents directed to or encompassing a human organism including a human embryo and a fetus are prohibited. [START_REF]Manual of Patent Examining Procedure (MPEP) §2105, Part III[END_REF] The U.S. has issued a wide range of stem cell patents from products (e.g. hESCs, iPSCs and regenerated tissues) to methods (e.g. manufacturing processes and therapeutic applications). 
However, the patent eligibility landscape changed dramatically in the past few years in light of several Supreme Court decisions Mayo in 2012, [START_REF][END_REF] Myriad in 2013 39 and Alice in 2014. 40 Mayo and Myriad respectively touched on claims involving the natural correlation between metabolite levels and drug effectiveness/toxicity and claims for human genes. They have significantly expanded the scope of exclusion to many biotech and pharmaceutical inventions. Very briefly, Mayo invalidated diagnostic claims that determine whether a particular dosage of a drug is ineffective or harmful to a subject based on the level of metabolites in the subject's blood. 41 Two physical steps were recited in the claim, namely a step of administering the drug to a subject and a step of determining the level of a specific metabolite in the subject. However, the Court regarded these steps as "well-understood, routine and conventional" activities which researchers are already engaged in the field to apply the natural correlation and hence the claim as a whole does not amount to "significantly more" than the natural law itself (Dutra, 2012; Chan et al., 2014; Selness, 2017). Myriad held that an isolated nucleic acid having sequence identical to a breast cancer-susceptible gene BRCA is not patent-eligible because it is a product of nature; whereas a complementary DNA (cDNA) having non-coding introns of the gene removed is eligible because the cDNA is not naturally-occurring and distinct from the natural gene. 42 Following the precedent ruling in Charkrabarty that upheld the patentability of a genetically engineered microorganism, 43 the Court looked for "markedly different characteristics from any found in nature" of the isolated gene to determine patent eligibility. The Court noted: "separating [the] gene from its surrounding genetic material is not an act of invention". 44 (Chan et al., 2014) Even if the claimed DNA molecules are somehow different from the genes in the genome in terms of chemical structure, the Court gave no deference to that because the claims were not relying on the chemical aspect of the DNA but the genetic information of the DNA that was neither created nor altered by the patentee. On the face of Myriad, isolated products such as chemicals, genes, proteins and even cells have to be different from the natural substances to a certain extent to be patent-eligible (Wong and Chan, 2014; Chan et al., 2014). Alice concerns abstract ideas and adopted the Mayo framework, ruled that "[t]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into patent-eligible invention", and that claims directed to an abstract idea is ineligible when the computer or software feature adds nothing more than generic or conventional functions to the invention. 45 Alice has a dramatic impact on inventions related to software or business methods (Stern, 2014; Jesse, 2014; Ford, 2016). New criteria for eligibility Evolving with the development of Court cases, the USPTO successively issued four guidelines during 2012-2014 for Examiners on how to apply Mayo, Myriad, Alice and other precedent cases to examine eligibility of claims related to natural phenomenon, laws of nature or abstract ideas. The in-effect guideline was released on December 16, 2014 (United States Patent and Trademark Office, 2014a), and supplemented by two updates (United States Patent and Trademark Office, 2015; United States Patent and Trademark Office, 2016a). 
For natural products, eligibility is determined principally on whether the claimed product possesses any structural, functional and/or other properties that represent "markedly different characteristics" from the natural counterparts. Importantly, neither innate characteristics of the natural product nor characteristics resulted irrespective of inventor's intervention qualify as "markedly different characteristics" (United States Patent and Trademark Office, 2016a). 46,47 A combination of natural products is examined as a whole rather than as individual components (United States Patent and Trademark Office, 2014a). While for claim which sets forth or describes an exception (in contrast to "is based on" or "involves" an exception), the claims must contain additional elements that "add significantly more" to the exception such that it is "more than a drafting effort designed to monopolize the exception" (United States Patent and Trademark Office, 2014a). General applications of natural products or natural laws employing well-understood, routine and conventional activities known in the field are not patenteligible (United States Patent and Trademark Office, 2014a). For example, a process claim is eligible if it is focused on a process of practically applying the product for treating a particular disease that does not seek to tie up the natural product (United States Patent and Trademark Office, 2014b). Most patents invalidated after Mayo, Myriad and Alice have been business method or software-related inventions. Among these patents, Ariosa is a notable case in which the decision has sparked intense debates and worries in the field of biotechnology, and more specifically molecular diagnosis. In Ariosa, the Federal Circuit upheld the invalidation of a patent for a method of detecting cell-free fetal DNA (cffDNA) in maternal serum or plasma under the Mayo framework. 48 Despite the Court acknowledged that the discovery of the presence of cffDNA in maternal plasma or serum was new and useful, it recognized that the steps of amplifying and detecting cffDNA with methods such as Polymerase Chain Reaction (PCR) are wellunderstood, routine and conventional activities in 1997; and the claimed method amounts to a general instruction to doctors to apply routine and conventional techniques to detect cffDNA and hence is not eligible for patent. The Court also noted that "preemption may signal patent ineligible subject matter, the absence of complete preemption does not demonstrate patent eligibility", meaning that a claim is not eligible merely for not blocking all other alternative uses of the natural product or law. In March 2016, Sequenom filed a Petition for Writ of Certiorari in the Supreme Court to challenge the Federal Court's decision in Ariosa; however, the highest Court declined to review and thus the Ariosa decision is finalized (Selness, 2017). 49 The disfavorable disposition was slightly relieved in Rapid Litigation Management, 50 in which the Federal Circuit, for the first time since the decisions in Mayo and Alice, upheld a patent that was drawn to a law of nature. In Rapid Litigation Management, the inventors of the concerned patent developed an improved method of preserving hepatocytes upon repeated steps of freezing and thawing. The claims at issue were drawn to a method of preparing multi-cryopreserved hepatocytes of which the resulting hepatocytes are capable of being frozen and thawed at least two times and exhibit 70% viability after the final thaw. 
The Federal Court found the claims patent-eligible because they were not directed to the ability of hepatocytes to survive multiple freeze-thaw cycles but a new and useful laboratory technique for preserving hepatocytes, noting that the inventors "employed their natural discovery to create a new and improved way of preserving hepatocyte cells for later use". [START_REF] Mcmahon | An institutional examination of the implications of the unitary patent package for the morality provisions: a fragmented future too far?[END_REF] Although individual steps of freezing and thawing were well known, the Court recognized that at the time of the invention, it was believed that a single round of freezing severely damaged hepatocyte cells and resulted in lower cell viability and therefore the prior art actually taught away from multiple freeze-thaw cycles. As such, the Court concluded that the claimed method which "[r]epeating a step that the art taught should be performed only once can hardly be considered routine or conventional". 52 Specifically, the Court looked into whether the end result of the method was directed to a patent-ineligible concept. The Court said no because the end result was "not simply an observation or detection of the ability of hepatocytes to survive multiple freeze-thaw cycles"; rather, the claims recited a number of steps that manipulate the hepatocytes in accordance with their ability to survive multiple freezethaw cycles to achieve the "desired preparation". 53 (Sanzo, 2017; United States Patent and Trademark Office, 2016b) More §101 rejections Impacts of Myriad and Mayo to the biotech and pharmaceutical industries are farreaching and tremendous. At the USPTO in the service responsible for biotechnology and organic chemistry (i.e., Patent Technology Centre 1600), it was estimated that the percentage of Office Actions with a U.S.C. §101 rejection regarding ineligible subject matters in May 2015 increased by nearly two folds than two months before the March 2012 Mayo decision (11.86% vs 6.81%) (Sachs, 2015). Patent Technology Centre 1630 designated for molecular biology and nucleic acids related inventions was profoundly affected, having the §101 rejection rate boosted from 16.8% to 52.3% (Sachs, 2015). The total rejection rate in Technology Centre 1600 was rising from 10.4% before Alice to 13.1% in December 2015, followed by a notable drop to 10.9% in July 2016 and 10.0% in May 2016 (Sachs, 2016). While it is too early to conclude that the situation becomes less adverse to the patentees, it may signal that the stringent situation has started to relax while uncertainties remain (Leung, 2015). C) Main challenges for stem cells patents Major hurdles in stem cell patenting are the expanded scope of ineligible subject matters and obviousness rejections. Patent eligibility Product claims Usefulness of patents is limited if the patented inventions cannot be commercialized. The applicant has to resolve the dilemma: non-native features would weigh toward the eligibility of stem cells already existing in nature, however stem cells for medicinal uses should be native enough to be as safe and effective as the natural stem cells. Myriad held that purification or isolation is not an act of invention. Stem cells, be they ESCs or adult stem cells, or produced by a new, ground-breaking method, are not patentable absent any distinctive structural, functional or other properties from the natural cells in our body. 
54 iPSCs obtained using exogenous genes are more likely to survive for their artificial nature; however, iPSCs that are produced otherwise may be ineligible if the cells are indistinguishable from a naturally-occurring stem cells except the production method. On the opposite, regenerated tissues and organs would typically be patentable because they are usually not exactly the same as the actual tissues and organs (Tran, 2015). Intrinsic properties of stem cells add no weight to the eligibility, hence stem cells identified by natural biomarkers are likely to be construed as a product of nature and deemed not patentable. In contrast, new traits resulted from inventor's efforts such as extended lifespan, higher self-renewal ability and expression of new biomarkers may open up patentability for isolated stem cells. For example, U.S. Patent 9,175,264 claims an isolated population of human postnatal deciduous dental pulp multipotent stem cells expanded ex vivo. This application was initially rejected under U.S.C. §101 because the Examiner opined that the claimed cell is a product of nature; but was later allowed after the applicant limited the cell to express CD146 that is absent in the natural counterpart. 55 Lastly, it is worth noting that a product-by-process claim is examined based on the product itself not the manufacturing process, hence stem cells pursued under a product-by-process claim still need to abide on the "markedly different characteristics" requirement for natural products (United States Patent and Trademark Office, 2014a). Method claims Method claims may be more favorable given the unclear prospect of stem cell patents. The USPTO's records indicate that methods for stem cell production, maintenance or differentiation remain patentable post-Mayo and post-Myriad. Methods of producing iPSCs by reprograming somatic cells are likely patentable (e.g. U.S. Patent 9,234,179), but differentiation methods that make no difference from the natural differentiation processes could be unpatentable. As discussed, under the "significantly more" requirement, applications of a natural phenomenon must possess additional features that transform the claims into some eligible processes that amount to more than the natural phenomenon itself. Thus, for a method of differentiating stem cells using components of a signaling pathway (e.g. basic fibroblast growth factor (bFGF) and epidermal growth factor (EGF) for neural differentiation) reciting no additional feature that amounts to "significantly more", the examiner may regard the method as a general application of the natural phenomenon and hence unpatentable (Morad, 2012). Methods of identifying or selecting stem cells based on the detection of natural biomarkers may be rejected if the claims only recite routine and conventional techniques to detect the biomarkers (Chan et al., 2014). As implied in Ariosa, patentability is not justified albeit the inventor newly discovered the presence of biomarkers in these cells. Diagnostic methods hinging on the detection of natural biomarkers may be likewise rejected if specified at a high level of generality. As exemplified by the USPTO, a diagnostic method relying on the detection of a natural human biomarker in a plasma sample by an antibody is patent eligible if the antibody has not been routinely or conventionally used for detecting human proteins (United States Patent and Trademark Office, 2016c). 
Notably, the Office suggests that it is feasible to limit the claim to the detection of a biomarker without reciting any step of diagnosis of the disease or analysis of the results such that the claim would not be regarded as ineligible for describing a natural correlation between the presence of the biomarker and the presence of the disease (United States Patent and Trademark Office, 2016c); yet, the author takes the position that this is contradictory to Ariosa which ruled that methods of detecting cffDNA in maternal serum are not eligible. On the other hand, as learnt from Rapid Litigation Management, for claims which are based on a natural law or natural phenomenon, it may be useful to focus on the end result of the claims and emphasize that the claims are directed to the manipulation of something (e.g. a pool of mesenchymal stem cells) to achieve a desired end result (e.g. exhibits specific therapeutic functions) to argue that the claims are not directed to a patent-ineligible concept. When it comes to methods for treatment or screening compounds using natural stem cells, they are eligible if the methods per se are specific enough such that they do not preempt the use of the natural cells. The USPTO exemplified that using a natural purified amazonic acid compound for treating breast or colon cancer (United States Patent and Trademark Office, 2014a) and that using an antibody against tumor necrosis factor (TNF) for treating julitis are both patent eligible. [START_REF]Julitis" is a hypothetic autoimmune disease given by the USPTO (United States Patent and Trademark Office[END_REF] As an analogy, a method of treating leukemia by administering an effective amount of natural hematopoietic stem cells to a leukemia patient is likely patentable. Perspectives Many practitioners have urged the Congress to make clear whether the ruling of Myriad applies beyond nucleic acids, and at what extent should Mayo be applied to the field of diagnostics. The USPTO is continuously seeking public comment to its latest update in May 2016 (United States Patent and Trademark Office, 2016d) and may issue new update that may better sort out how Myriad and Mayo are applied to various disciplines. The Office held two roundtables in late 2016 to solicit public views respectively on its subject matter eligibility guidances and larger questions concerning the legal contours of eligible subject matter in the U.S. patent system (United States Patent and Trademark Office, 2016e, 2017a, 2017b). The life science industry and also supporters from computer-related industry are calling for new legislation to replace the Mayo/Alice test with a technological or useful art test, and to clearly define exceptions to eligibility or clearly separating eligibility from other patentability requirements (United States Patent and Trademark Office, 2017c). While law and policy could change depending on the subsequent measures of the USPTO and Congress, it is beneficial to pursue both product claims (stem cells) and method claims (producing processes and applications), although the former are likely rejected for their native nature. Inventors may concentrate on non- §101 rejections such as obviousness and enablement while postponing the subject matter arguments to buy time to get a clearer picture from additional guidance or court decisions (Gaudry et al., 2015). 
Inventors may also look into the prosecution history of patented applications to learn what is eligible and vice versa, and the rationale behind so as to enhance their chance to survive under §101. Notably, patent eligibility could highly depend on how the claim is structured (Smith, 2014). Inventors should examine and describe in the application any distinctive features between the isolated stem cells and their natural forms, and include these characteristics in the claims when necessary. It is also useful to emphasize the association of human effort with these characteristics. Inventors may also concurrently pursue cultural media or system, compositions or treatment kits comprising stem cells and non-natural components and so on for multiple levels of protection. While for applications of stem cells or natural principles, the fact that the inventor discovers a new and useful natural product or law does not weigh toward patent eligibility of their uses. Although the framework of Mayo and Alice appears not to have been changed or reshaped by Sequenom and Rapid Litigation Management, the two cases did provide additional guidance on eligible subject matters concerning natural matters in the field of life sciences. The general advice is, method claims should not merely read on the natural products or laws and should be scrutinized on any preemptive effect for the natural matters; and specific and inventive steps should be added to the claims to reduce the level of generality of the methods. It is undisputed that simply using the word "apply" does not avail, the standard of "significantly more" is general yet unclear. It is not easy to interpret whether the additional step would be treated as a "well-understood, routine and conventional activity practiced in the field", or an element capable of transforming the natural matter into something eligible. Importantly, the USPTO noted that a technique that is known (or even has been used by a few scientists) "does not necessarily show that an element is well-understood, routine and conventional activity" practiced in the field; rather, the evaluation turns on whether the use of that particular known technique was a well-understood, routine and conventional activity previously engaged in by scientists in the relevant field (United States Patent and Trademark Office, 2016a). Hence, applicants may argue that the recited step was not a technique prevalently used in the field at the time the application was filed to overcome the rejection. Obviousness Obviousness is a big challenge to stem cell patenting common in the four regions. The four regions share a similar framework in determining obviousness but the U.S. appears to adopt a higher standard than the other three regions. The authors chose to use the U.S. system to illustrate this topic for two reasons: 1) the readers may be more benefited if we discuss the topic at a higher standard; 2) the U.S. has a large volume of case law and administrative decisions touching upon this topic including some useful examples in the areas of stem cells. The U.S. in general adopts a similar approach for determining obviousness as in other jurisdictions, i.e., determining the scope and content of the prior art, ascertaining the difference between the invention and the prior art, and resolving the level of ordinary skill in the art (the "Graham" factors). 
[START_REF]Manual of Patent Examining Procedure (MPEP) §2141[END_REF] "Teaching, suggestion, or motivation" test (the "TSM test") has long been the standard for obviousness determination, under which a claim would only be proved obvious if some motivation or suggestion to combine the prior art teachings could be found in the prior art, the nature of the problem, or the knowledge of a person having ordinary skill in the art (Davidson and Myles, 2008). [START_REF][END_REF] However in 2007, KSR held that precise teachings in the prior art are not required to prove obviousness; rather, an invention is obvious if one skilled in the art has good reasons to combine the prior art elements to arrive at the claimed invention with an anticipated success. 59 Since then, more rationales can be used to prove obviousness thus significantly lowering the obviousness threshold. The USPTO has provided a non-exhaustive list of rationales for supporting an obviousness conclusion (Dorsey, 2008), 60 the examples include "combining prior art elements according to known methods to yield predictable results", "obvious to try -choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success" and the TSM test (United States Patent and Trademark Office, 2010). As compared to the TSM test, the "obvious to try" rationale is more subjective, thus leaving uncertainty in winning the obviousness battle. Seemingly, an invention that is "obvious to try" with a "reasonable expectation of success" would be deemed obvious and hence not patentable. Nonetheless, the examiner must provide articulated reasoning to support the obviousness conclusion; mere conclusion statements are not acceptable. [START_REF]Manual of Patent Examining Procedure (MPEP) §2142[END_REF] One of the feasible approaches to overcome an obviousness rejection is to carefully study the entirety of the prior art and the examiner's rationale, and rebut by pointing out the insufficiency of the examiner's reasoning. For example, one may refute that teaching away exists in the prior art, the examiner failed to address every claimed element with sound reasons, or the conclusions are hindsight bias. Another approach is to identify all possible differences between the claim and the prior art to locate any feature unique to the invention but absent in the references, or any unexpected effect achieved by the claimed invention to show that the combination of the prior art elements would not be able to arrive at the claimed invention (Davidson and Myles, 2008). Declaration under 37 C.F.R. §1.132 could be an effective tool to traverse the obviousness rejection by virtue of objective expert's opinion and/or evidence (Messinger and Horn, 2010). The declaration can be, for example, an expert's opinion on the inoperability of references or their combination, or an expert's justification of the level of ordinary skill in the art at the time of invention. Secondary considerations such as lacking a solution for a long-felt need and failure of others, unpredictable results and commercial success can also be testified using a declaration. [START_REF]Manual of Patent Examining Procedure (MPEP) §716[END_REF] For example, direct comparative results of the claimed invention and the closest prior art may be presented to show unpredictable results of the invention, while series of patents and publications attempted to solve the same problem but unsuccessful may be used to show a long-felt but unsolved need. 
The power of substantiated expert opinions and objective evidence in addressing obviousness may be illustrated by the reexamination proceeding of U.S. Patent No. 7,029,913 (hereafter the '913 patent'). Claiming an in vitro culture of hESCs and granted to Dr. James Thomson, the '913 patent' was challenged in 2006. One of the debates was whether the claimed hESCs would have been obvious in view of several references which teach, among other things, the isolation or derivation of ESCs from mouse and the culture of these murine ESCs using feeder cell layers. The proceeding took more than 6.5 years before the appeal Board affirmed the patentability after a minor claim amendment. [START_REF]The Foundation For Taxpayer & Consumer Rights v[END_REF] During the reopened prosecution, the patentee submitted a declaration from an expert in the field of mouse embryogenesis and stem cells, along with scientific publications, to testify that the prior art method of isolating stem cells without feeder cells was not enabling for producing hESCs, and that no one could derive stem cells from rats until 27 years after the first isolation of murine ESCs, even though mice and rats are closely related. One strong piece of evidence of non-obviousness was a research paper reporting the failure to isolate a replicating in vitro cell culture of pluripotent hESCs following the same prior art method. The declaration also pointed out that the invention was widely recognized as a breakthrough and highly praised by scientists in the field. The Board finally concluded that the above presented strong evidence of non-obviousness and thereby affirmed the patentability of the claims. Lastly, although the '913 patent' concerns a post-grant proceeding, it is very important that a §1.132 declaration be timely submitted before the final Office Action for consideration in the prosecution and appeal procedures, and that it be supported by more than an attorney's assertion. In sum, the laws governing subject matter eligibility are evolving and clearly unsettled; the actual impacts on the stem cell landscape remain to be seen, but applications directed to natural stem cells and their applications have been rejected. Case law indicates that the claim language can be determinative of patent eligibility. Stakeholders are advised to learn from court cases and seek advice from practitioners with biotech expertise to overcome the high patentability hurdles. III -China A) Background The State Intellectual Property Office of China (SIPO) is the administrative agency overseeing patents under the Patent Law and its implementing regulations. In addition to design patents, which are not used in the case of stem cells, two types of patents, namely an invention patent and a utility model, are available (Zhang and Yu, 2008; Chen, 2010). An invention patent protects new technical solutions for a product, a process or their improvement, while a utility model is exclusive to products, protecting the shape or structure of a product or their combination. [START_REF]Patent Law of the People's Republic of China[END_REF] Hence, both types of patents may protect a device. A utility model is significantly different from an invention patent in that the former is generally not examined for inventiveness [START_REF]Section I, Chapter II. 66 Patent Law of the People's Republic of China[END_REF] and only grants a 10-year patent term as compared to 20 years for an invention patent. 
66 Invention patent is more preferable for protecting stem cell inventions than utility model, hence the following sections refer to invention patents unless specified otherwise. China adopts the common patentability requirements of novelty, inventiveness and industrial applicability, 67 and likewise requires an enabling description that clearly and fully describes the invention. 68 Further, direct and original sources of genetic resources on which the invention relies upon must be identified. 69 Embracing the absolute novelty standard, a six-month grace period for public disclosure is possible only for three very limited circumstances: (1) The invention is exhibited for the first time at an international exhibition sponsored or recognized by the Chinese Government; (2) The invention is published for the first time at a specified academic or technological conference; and (3) The contents are divulged by others without the consent of the applicant. 70 B) The exclusions of patentability Similar to Europe and Japan, China denies patents on moral grounds and precludes patents on therapeutic and diagnostic methods. Moral exclusion The moral exception under Article 5 of the Patent Law forbids patents on inventions that violate the laws or social ethics, or harm the public interest. Also, inventions that are accomplished relying on genetic resources obtained or used in violation of laws and administrative regulations are also prohibited. 71 The authority made clear that any industrial or commercial use of human embryos is contrary to social ethics and should not be patented, thus hESCs and their production methods are not patentable. [START_REF]Guidelines for Patent Examination[END_REF][START_REF]Guidelines for Patent Examination[END_REF] Furthermore, human body at various forms and developmental stages including germ cells, fertilized eggs, embryos and individuals are also prohibited from patenting for moral reasons. 74 Ineligible subject matters Article 25 of the Patent Law explicitly excludes patents for six subject matters: 1) scientific discoveries; 2) rules and methods for mental activities; 3) methods for the diagnosis or treatment of diseases; 4) animal or plant varieties; 5) substances obtained by means of nuclear transformation; and 6) designs that are mainly used for marking the pattern, color or the combination of the two of prints. 75 (Chen, 2002) Animal varieties are interpreted to exclude human but include whole animals, animal ESCs, germ cells, fertilized eggs and embryos, thus all of the above cannot be patented. [START_REF]Guidelines for Patent Examination[END_REF][START_REF]9.1.2. However, animal somatic cells, animal organs and tissues are still[END_REF] Notwithstanding, methods for producing an animal or plant variety are allowed, 78 and natural genes and microorganisms in their isolated forms are patent eligible. 79 Methods for the treatment or diagnosis of diseases practiced on living human or animals are strictly prohibited, nonetheless devices for practicing the treatment or diagnosis, or materials used in these methods are patentable. [START_REF]Guidelines for Patent Examination[END_REF] Similar to the European framework, treatment and diagnostic methods that are practiced on dead bodies are patentable. 81 (Chen, 2002) As for diagnostic methods, even if the tested item is a sample isolated from a living subject, the method is not patentable if it has the immediate intention of obtaining diagnostic results for a disease or a health condition of the same subject. 
82 For instance, tests based on genetic screenings or prognosis of disease susceptibility are interpreted to be diagnostic methods and hence patent ineligible. 83 Treatment methods encompass methods for the prevention of disease and for immunization. Notably, although surgical methods practiced on living human or animals that are not for therapeutic purposes are not forbidden under Article 25, they could not be used (or used for production) industrially and hence are not patentable for lacking industrial applicability. 84 C) Main challenges for stem cells patents The SIPO has granted patents on stem cells and methods for producing stem cells that do not involve human embryos or hESCs. The major barriers in stem cell patenting are the moral exclusion and the treatment exclusion. Moral exclusion The scope of moral exclusion is largely unclear because terminologies including "social ethics", "human embryo" and "industrial or commercial use" are not explicitly defined. Claims can be rejected for moral reasons even if they are not directed to human embryos or hESCs. We may infer from Chinese application no. 03816184.2 how SIPO exercises the moral provision to exclude patents that may involve an industrial or commercial use of human embryos. Directed to a production of glial cells using undifferentiated primate pluripotent stem (pPS) cells, CN application no. 03816184.2 was rejected for violating social ethics and lacking industrial applicability in 2011. Upon reexamination, the Patent Reexamination Board reversed all the rejections. [START_REF][END_REF] (Tao and Duan, 2013) This application claimed an in vitro system for producing glial cells that comprises an established cell lineage of undifferentiated pPS cells and a population of cells differentiated from the pPS cell lineage. The claims also covered, among other related inventions, the population of cells differentiated from the pPS cell lineage, and producing methods and uses of the glial cells. Despite all the claims were limited to established cell linages of either primate pluripotent stem cells or hESCs, the Examiner opined that these established cell lineages have to be obtained from human embryos (given that pPS cells could include hESCs), therefore the claims are directed to the industrial or commercial use of human embryos which is prohibited by the Patent Law. Further, the Examiner rejected all the claims for lacking industrial applicability because, in the case where the claimed pluripotent stem cells are derived from non-embryonic tissues, human or animal bone marrow or other tissues must first be obtained using non-therapeutic surgical methods. Thus, the invention pertains to non-therapeutic surgical methods applied on living human and cannot be used industrially, and hence should be rejected. During prosecution, the applicant set forth the followings to address the objections: the invention relies upon established cell lineages of hESCs that are readily available before the filing date of the application; the acquisition of established hESCs lineages does not necessarily violate social ethics; and the use of established hESCs lineages is not an industrial or commercial use of human embryos. The applicant further amended the description to delete descriptions involving acquisition of hESCs; however the case was finally rejected. Upon appeal, the claims were amended to explicitly exclude pPS cells or hESCs that are directly disaggregated from human embryos or blastocysts. 
The applicant further provided evidence showing that the initial cell lineages derived from human embryos have been widely employed and patented, and cell lineages H1 and H7 used in the examples were commercially available before the priority date, therefore the invention does not require the use of human embryos. 86 In May 2012, the Board ruled in favor of the applicant, concluding that the invention does not violate the moral provision. The Board reasoned that the description and claims have already precluded the direct use or disaggregation of human embryos and blastocysts, and that hESCs lineages are available in public depositories, thus apparently the invention uses established and commercialized cell lineages and does not pertain to an industrial or commercial use of human embryos. In response to the Examiner's proposition that the established cell lineages H1 and H7 must be obtained through the destruction of human embryos, the Board enjoined that it is inappropriate to incessantly trace the acquisition of the established cell lineages to their initial origin (i.e., human embryos), given that cell lineages H1 and H7 have been publicly available and can be indefinitely proliferated in vitro and obtained with known techniques. 87 While for industrial applicability, the Board noted that none of the claims are directed to the isolation of pluripotent stem cells from non-embryonic tissues, and that the claims are limited to established lineages of pPS cells or hESCs, hence nontherapeutic surgical methods for the isolation of the pluripotent stem cells are not compulsory for practicing the invention, hence reversed the rejection for lacking industrial applicability. 88 Perspectives In practice, the examiners tended to strictly adopt the moral exclusion to exclude inventions which read on hESCs (e.g. hESCs per se and preparing methods thereof), and also inventions which are not direct to but related to hESCs. However, it appears that China has loosened its restrictions as indicated by a plurality of the Board's decisions in upholding patents over hESCs downstream technology (Peng, 2016). Although these Board's decisions are not binding, they illustrate that an invention which does not indispensably use human embryos could traverse the moral exclusion. Inventions in which the devastation or use of human embryos or fertilized eggs is not requisite, such as those on somatic cell nuclear transfer (SCNT) (e.g. CN1280412C and CN1209457C) and iPSC (e.g. CN103429732B) would have a higher chance of success. It is thus advised to explicitly exclude the use of human embryos in the claims and the description, and to make necessary clarification to avoid rejections on moral grounds. All class of inventions including stem cells and differentiated cells, and manufacturing methods and uses of these cells should be put on guard. Exclusion of therapeutic methods Patents on pharmaceutical compositions made of stem cells and their manufacturing methods are patentable but therapeutic methods using stem cells are prohibited. As in Japan, applicant may pursue Swiss-type claims in the format of "use of a composition in the preparation of a medicament (or kit) for treating a disease" to protect medicinal uses of stem cell products (Chen and Feng, 2002). In short, morality is the biggest consideration in granting a patent to stem cell inventions and claims that read on hESCs are likely rejected. 
While SIPO has started to relax its rules and allowed patenting inventions which involve yet not directed to hESCs, it is essential to show with objective evidence why the claimed invention is free of ethical issues to have a higher chance of success. Inventiveness China also takes into account secondary considerations for justifying the inventiveness of an invention. Provided by the patent examination guideline, China considers long-felt but unsolved needs, unexpected results and commercial success. [START_REF]Guidelines for Patent Examination[END_REF] IV -Japan A) Background The Japanese Patent office (JPO) examines patents under the Patent Act and Utility Model Act. Inventions that can be protected by a patent, as defined, are "highly advanced creation[s] of technical ideas utilizing the laws of nature". 90 (Borowski, 1999; Kariyawasam et al., 2015) Generally, subject matters eligible for a patent can be a product, a device or a process. In contrast, a utility model is designed to protect a device that is related to the shape or structure of an article or the combination of articles which is industrially applicable; 91 it does not go through the substantive examination as a patent does and confers a 10-year patent right. [START_REF] Patent | [END_REF] Sharing similar rules as other jurisdictions, novelty, inventiveness, industrial applicability, [START_REF][END_REF] and a sufficient and enabling description 94 are the basic patentability requirements in Japan. Japan provides a six-month grace period which is stricter than the U.S. but more lenient than China. Inventions that were tested, 95 were disclosed through presentations in printed publications or electric telecommunication line, 96 or disclosed against the will of the person having the right to obtain a patent 97 are eligible for the grace period if the application is a direct national application, or is an international patent application under the Patent Corporation Treaty (PCT) designating Japan. Notably, the international filing date would be interpreted as the filing date for cutting off the six-month grace period for PCT application. [START_REF]Examination Guidelines for Patent and Utility Model in Japan. Part III[END_REF] Except for third-party disclosures, a proof document to identify the relevant disclosure has to be submitted within 30 days from the filing date of the application. 99,100 B) The exclusions of patentability Moral exclusion Codified at Article 32 of the Patent Act, Japan has an explicit moral provision that excludes inventions liable to injure public order, morality or public health from patent protection. 101 (Borowski, 1999) One example of such inventions is human produced through genetic manipulation. 102 However, whether a human body of various stages of its development is interpreted to be a human is not specified in the Patent Act and Examination Guidelines. Although the Patent Act and Examination Guideline do not specify that inventions involving human embryos are not patentable, applications involving a step of destroying human embryos have been rejected under Article 32 (Sugimura and Chen, 2013). Ineligible subject matters A statutory invention must be a creation of a technical idea utilizing a law of nature. Thus, laws of nature, discoveries per se (e.g. natural products and phenomenon), inventions contrary to the laws of nature, and inventions that are not using the laws of nature (e.g. economic laws, mathematical methods and mental activities) are not regarded as an invention. 
103 However, natural products and microorganisms that are artificially isolated from their surroundings are patentable. 104 Similar to Europe and China, Japan interprets methods for surgery, treatment, or diagnosis practiced on humans are incapable of industrial application and therefore could not be patented (Sato, 2011). 105 However, patents are possible if these methods are applied on animals and explicitly exclude human. Materials that are used in these methods, and products of these methods are patentable. 106 Notably, any method that processes or analyzes a sample taken from a human body is not patentable unless the sample is not supposed to be returned to the same body. 107 While for diagnostic methods, Japan adopts a similar approach as China, defining that any method for the judgment of physical or mental conditions of a human body is a diagnostic method and hence not patentable. 108 Further, methods designed for the purpose of prescription, treatment or surgery plans are regarded as diagnosis of human and hence disallowed. 109 Thus, in a general sense, methods for extracting or analyzing a sample, or gathering data from a human body which are not for judging physical or mental conditions, or for the planning of drug prescription, treatment or surgery are patent eligible. The Examination Guideline sets forth some examples of medical activity that are patent eligible. 110 For example, methods of determining susceptibility to a disease by determining and comparing the gene sequence with a standard can be patented. 111 There are certain exceptions to methods involving a "sample extracted from a human body and presumed to be returned to the same body". Specifically to the stem cell area, methods for manufacturing a medicinal product or material using raw materials from a human body is patent eligible. 112 Thus, method for preparing a cell or artificial skin sheet is patentable even if these articles are intended to be returned to the same person. Methods for differentiating or purifying a cell using raw materials from a human body, or analyzing medicinal products or materials using raw materials from a human body are also eligible. 113 C) Main challenges for stem cells patents Japan appears to be less restrictive than Europe and China in granting stem cell patents despite Japan has comparable stances on moral violation and industrial applicability of medicinal activity in its patent framework. Japan has issued patents on stem cell lines, manufacturing methods, uses of stem cells for drug production and so on, and is the pioneer granting patents on iPSC (Simon et al., 2010). Moral exclusion Although the JPO sets no explicit rule on human embryos and hESCs, it is likely that inventions relied on human embryos will be rejected. Therefore, it is advised to take an approach similar to that proposed for the Chinese landscape; that is to preclude the possibility of destruction of human embryos for practicing the invention (supra). Following the JPO guidance which exemplifies that methods of differentiating stem cells are patentable, methods for producing stem cells based on established embryonic stem cells lines are likely allowed (Sugimura and Chen, 2013). For example, JP 5,862,061 claims a method of culturing hESCs, and JP 5,841,926 was granted on a method of producing ESCs using blastomere and compositions comprising the ESCs, of which the description specifies that the cells can be derived without embryo destruction. 
Exclusion of therapeutic and diagnostic methods Treatment methods are generally not patentable. While methods for manufacturing medicinal products (e.g. vaccines or cells) or artificial substitutes using raw materials collected from a human body are patentable, uses of these medicinal products on humans are likely regarded as treatment methods and hence not patentable. Testing or assaying methods should be devoid of any step pertaining to an evaluation or determination of a physical or mental condition of a human, such that the method would not be interpreted as a diagnosis practiced on humans. Methods for the collection of data and/or comparison of data with a control do not correspond to a diagnostic method and hence are likely patentable. Devices for practicing a treatment or diagnosis, as well as methods for controlling the operation of these devices, are largely patentable as long as the function of the medical device itself is represented as a method. 114 The JPO specifies that a method for controlling the operation of a medical device is not a method of surgery, treatment or diagnosis of a human, 115 given that the method does not involve a step with an action of a physician on the human body or a step with an influence on the human body by the device (e.g. incision and excision of the patient's body by an irradiation device). 116 Thus, it may be feasible to redraft forbidden therapeutic or diagnostic claims into methods for controlling the operation of the therapeutic or diagnostic devices or systems, without any steps involving a physician's action on the human body or steps affecting the human body by the device. For instance, "a method for irradiating X-rays onto the human body by changing the tube voltage and the tube current of the X-ray generator each time the generator rotates one lap inside the gantry" is considered to be a method of surgery, therapy or diagnosis of a human, while "a method for controlling the X-ray generator by control means of the X-ray device; wherein the control means change the tube voltage and the tube current of the said X-ray generator each time the generator rotates one lap inside the gantry" would be patent eligible. 117 In short, Japan is more liberal than Europe and China in granting stem cell patents provided that the claimed invention does not involve the destruction of human embryos. The Japanese Examination Guidelines helpfully provide many examples of eligible and ineligible claims; stakeholders are advised to read through these examples and craft their claims accordingly. Inventiveness As in the other three regions, secondary considerations can be used in Japan for justifying the inventiveness of an invention. For example, commercial success and long-felt need may be considered provided that they are attributable to the technical features of the claimed inventions and supported by the applicant's arguments and evidence. 118 V -Discussion Patent systems in each jurisdiction are standalone yet have similarities. Consistent with the above discussion, the most notable difference in the landscape of stem cell patents is that the U.S. neither establishes a moral exclusion nor excludes inventions that involve the destruction of human embryos. Furthermore, although patentability requirements in the four regions are similar in the broadest sense, they are subject to disparate interpretations and standards. That is why it is not uncommon that an invention is awarded a patent in one region but not in another. 
Seeking patent protection for the same invention in multiple jurisdictions is commonplace. Very often, patent applications filed in different jurisdictions share the same or substantially the same disclosure while claims could be tailor-made in order to comply with the local rules and meet the interests of the stakeholders. Hence, to set up a favourable global and regional strategy for patent protection, it would be advantageous to look into the aspect of patent procurement in each of the jurisdictions of interest, and to deal with issues that may disfavour patent protection at the very beginning. The aspect of patent enforcement should not be overlooked but goes beyond the scope of this chapter. Focusing on patent procurement, the following section will highlight the main similarities and differences in patentability issues between the four regions that may be worthy of attention. While a side-by-side comparison and analysis are not feasible due to limited space here, we summarize a few general and specific aspects of the four systems for a quick and easier comparison (Table 1-Comparison of stem cell patent systems). A) The unique territorial patent system in Europe Europe is a region that includes both national patent laws and European patent laws, whereas the U.S., China and Japan are sovereign countries that have a single national patent law. The co-existence of national and European patent law systems brings inherent complexity which should not be overlooked. As discussed, there are several possibilities to obtain patents in Europe: a European patent at the EPO, national patents at each national patent offices, and in the future a European patent with unitary effect in all the Member States of the European Union at the EPO. It is especially complex in the evolving scientific and technical field of stem cells as inferred from the definition of human embryos by the Court of Justice of the European Union. Although the EPO and the EU have been generally successful in providing a quite uniform patent law that overpasses national heterogeneities, small national divergences with potentially high consequences can always appear as it has been shown by the different national implementations of the Brüstle case, especially on whether or not it should be proved that hESCs have been obtained without previous destruction of human embryos (Mahalatchimy, 2014). B) Utility vs industrially applicability Among the five general criteria for patentability, the utility or industrially applicability requirement appears to be most distinctive. Different from the industrially applicability requirement of Europe, China and Japan, the U.S. does not specify that an invention must be susceptible of industrial application. Rather, it requires the invention must have a specific and substantial utility. While the utility requirement is not usually an issue for stem cells patents in the U.S., it is totally different for the industrial application criterion which precludes certain types of methods that are not industrially applicable from patentability. As discussed, therapeutic, diagnostic and surgery methods practiced on human are mostly not patentable in Europe, China and Japan. C) Moral exclusion Absent a moral exclusion, the U.S. appears to be the most liberal among the four regions in granting human stem cell patents while Europe is the strictest. 
Europe appears as the region that places the strongest emphasis on moral exclusion as evidenced by the extensive coverage and specific examples of the moral exclusions in the patent law and rules. Firstly, Europe is the region where a definition of the human embryo has been provided in the field of patents. Secondly, Europe has explicitly defined that uses of human embryos for industrial or commercial purposes fall within the moral exclusion and provided the most extensive interpretation of the exclusion: to cover the destruction of human embryos whenever it takes place, not only de novo destruction. China is the closest to Europe (Farrand, 2016) but more lenient regarding the interpretation of the uses of human embryos for commercial or industrial purposes. Although the decisions of the Chinese Patent Reexamination Board are not necessarily binding, the Board considers the exclusion is limited to the de novo destruction of human embryo. Thus, the use of hESC from publicly available cells lines deposited in biobanks does not prevent the grant of patent. It also provides a clearer answer than the European Courts as it specified that it is inappropriate to incessantly trace the acquisition of the established cell lines to their initial origin as long as they are publicly available. While the European courts adopted the opposite view, they did not clarify whether the non-destruction of human embryos should be proved in the claims and by whom. Japan has placed a moral exclusion in the patent law but does not specify whether the involvement of human embryos would render an invention injuring the public order, morality or public health. It has consequently been considered as an attractive country with more liberal policy to stem cell patents (Kariyawasam et al., 2015). On the face, stem cell products are largely patent eligible in China and Japan provided that the de novo destruction of human embryos is excluded from the claimed invention in view of the description. D) Limited eligibility for natural products, laws and phenomenon in the U.S. The recent changes in the interpretation of patent eligibility in the U.S. has imposed a unique and huge challenge to the stem cell arena. As discussed, a natural product be it synthetic or isolated from natural sources, is not patentable absent any "markedly different characteristics" from the naturally-occurring product. Hence, a purified population of stem cells may be patent eligible in the other three regions but not in the U.S. if it is essentially the same as the cells in the human body. Diagnostic methods using stem cells albeit are not excluded for lacking industrial applicability, the prospect of getting a patent is unclear unless more solid criteria of the "significantly more" standard are provided. Finally, as the U.S. has significantly narrowed the scope of eligible subject matter, one can foresee the convergence of consequences of different US and EU laws regarding stem cell patent eligibility. (Davey et al, 2015). E) Patent -A double-edged sword? Undoubtedly patents are awards granted by the government to innovators for their intelligent efforts by conferring them an exclusive right in their innovation; however, in essence, the primary objective of the patent system is to promote innovation and economic development through encouragement of information exchange among the community. On the one hand, patent protects the interests of the innovators, allowing them to generate revenue and gain capital to foster their research and business. 
Market exclusivity including patent right and data exclusivity are particularly important to the pharmaceutical and medical devices industries to offset the huge yet disproportionate risks and investment in the development of drugs, diagnostic kits or medical devices. Such risks and difficulty for finding investment are particularly true in the field of stem cells patent. Monopoly status, even if it only lasts for a limited period of time, is crucial for the industries and venture capitalists to invest into the development and commercialization of innovations. First and foremost, patents help to minimise the risk of infringement. Secondly, patents can effectively suppress competition and permit firms to earn revenue as a return on their investment. This second point as well as the public impact as a result of the suppression of competition by patents can be illustrated by the well-known story of Myriad. Before the Supreme Court decision which invalidated Myriad's claims over the natural BRCA genes, genetic tests for breast cancer which based on the evaluation of BRCA1 and BRCA2 genes costed about $3,000-4,000 in the U.S. (Cartwright-Smith, 2014). Holding the patents which claimed the natural sequence of BRCA genes, Myriad was the sole company that could administer the BRCA1/2 tests. Laboratories which provided BRCA tests were forced to terminate their services after Myriad alleged them for patent infringement, thereby barring patients from obtaining a second diagnostic opinion from an independent laboratory. The cost of the BRCA tests soon fell to around $1,000-2,300 after the Supreme Court decision (Cartwright-Smith, 2014). As for academia, patent protection also plays a significant role insofar as commercialization is concerned. No matter the technology is to be licensed to a third party or commercialized by the researchers (e.g. in the form of spin-off), patents could provide some level of comfort to the potential licensees and investors in favour of the deal. Capital investment and revenue generated from royalties and licensing fees received by the universities or companies can be used in subsequent research and thereby promote innovation. It also gives renown to academia and may facilitate the publications of papers; the latter being the main way of assessment for research and consequently proof of its good quality. On the other hand, although patent information is in open access, the public is prohibited by law from engaging the patented products or methods in research without the consent of the right holders. Nonetheless, in addition to anti-competition or similar laws, patent system itself does provide certain filtering mechanisms to prevent overly broad patents. The first filtering mechanism is the exclusion of patentability of common goods. Scientific discoveries, natural phenomenon and natural products per se are generally barred from patent protection in most if not all jurisdictions, but applications of these natural matters are still patentable provided that they meet other patentability requirements. While preclusion of patentability is an effective means to prevent the preemption of uses of natural goods, it is noteworthy that decisions of Mayo and Myriad have been heavily criticized by the industries and the patent practitioners for hindering the development of biotechnology especially the diagnostic field since these decisions have effectively excluded many biotech and diagnostic inventions from patent protection. 
The second filtering mechanism is imposing restrictions on patent rights. Patent law precludes certain activities from patent infringement and thereby waives one's liability of patent infringement as a result of these exempted activities. Examples include prior use defense, research exemption and regulatory review exemption (so-called the "Bolar exemption" which is an exemption of patent infringement for use of patented products in experiments for the purpose of obtaining regulatory approval for drugs, this is established to enhance public access to generic drugs) (Misati and Adachi, 2010; Kappos and Rea, 2012). Therefore, patents can be seen as a double-edge sword which simultaneously promotes innovations and suppresses competitions. Availability of new information and the incentives given by patent may promote research, yet existing patent rights may discourage people from developing basic research into commercial embodiments that are practical and beneficial to the community. Doubts as to whether a patent system promotes or suppresses innovation and of what magnitude always exist. Particularly to the arena of stem cells, any monopoly over the use of natural human stem cells likely inhibits the research and development of stem cells-based technologies. Yet, the first filtering mechanism may operate to prevent a party from tying up the natural stem cells at some level, for example stem cells obtained directly through the destruction of the human embryo (e.g. in China, Europe and Japan) and stem cells isolated from natural sources (in the U.S.). While for patented technologies, the research exemption does leave room for research using stem cells technologies covered by patents, although the permissible scope may be narrowly limited to noncommercial research. While the authors do not have an affirmative answer as to whether patents promote research of stem cells or not, we believe that patent itself is a very useful tool to promote the exchange of information. Free flow of information is essential to enrich our knowledge, invoke our creativity and prompt the emergence of ground-breaking or disruptive technologies. Indeed, issues of patent infringement may arise when a research involving the use of patented technology matures to some sort of commercial activity; however, negotiations or collaborations between the owners/ exclusive licensees of the patented technology and innovators of the follow-on inventions are often possible to allow commercialization of the follow-on inventions without patent infringement. It should be highlighted that patent policy is more than the issue of free market and academic freedom. As with other areas of law, public policy reasons always play a role in formulating patent laws and rules. It is for the executive branch and the legislature to strike a balance between the general public and individual rights and liberties and adjust the laws and policies to achieve the best overall interest. VI -Conclusion Definitely, human stem cells have a great potential in the fields of regenerative medicine and personal medicine. Stem cell technologies must be timely and comprehensively protected by all means of intellectual property particularly patents. Even though the field of human stem cells have been the object of clarifications by Courts or guidelines in each region, the complexity and uncertainties remain (Schwartz and Minssen, 2015). How the ISSC case on the destruction of human embryos is applied in Europe? 
How will Myriad and Mayo on natural genes and diagnostic methods be applied in the U.S.? Will the non-binding decisions of the Chinese Patent Reexamination Board upholding the patentability of inventions that use hESCs from publicly available cell lines deposited in biobanks be followed in the process of patent examination and various patent proceedings? Will Japan provide more explanation and examples on its moral exclusion? All these uncertainties, among other challenges such as the regulatory requirements, could be of utmost importance to the commercialization and success of stem cell inventions. Practitioners should closely follow developments in the patent landscape, from patent law to court decisions, with a comparative view, and researchers and the industry should adjust their strategies to strive for success through these challenges. 
90,961
[ "18844" ]
[ "239063" ]
01757046
en
[ "shs" ]
2024/03/05 22:32:10
2018
https://shs.hal.science/halshs-01757046/file/WP%202018%20-%20Nr%2009.pdf
Fadia Al Hajj email: fadiaelhajj@hotmail.com Gilles Dufrénot email: gilles.dufrenot@univ-amu.fr Benjamin Keddad email: b.keddad@gmail.com Exchange Rate Policy and External Vulnerabilities in Sub-Saharan Africa: Nominal, Real or Mixed Targeting? $ Keywords: African Countries, Exchange Rate Policy, External Vulnerabilities, Regime-Switching Model JEL classification: C32, F31, O24 This paper discusses the theoretical choice of exchange rate anchors in Sub-Saharan African countries that are facing external vulnerabilities. To reduce instability, policymakers choose among promoting external competitiveness using a real anchor, lowering the burden of external debt using a nominal anchor or using a policy mix of both anchors. We observe that these countries tend to adopt mixed anchor policies. We solve a state space model to explain the determinants of and the strategy behind this policy. We find that the choice of policy mix is a two-step strategy: First, authorities choose the degree of nominal exchange rate flexibility according to the velocity of money, trade openness, foreign debt, degree of exchange rate pass-through and exchange rate target zone. Second, authorities seek to stabilize the real exchange rate depending on the degree of trade integration with the rest of world and the degree of foreign exchange interventions. We conclude with regime-switching estimations to provide empirical evidence of how these economic fundamentals influence exchange rate policy in Sub-Saharan Africa. Introduction This paper discusses a new exchange rate policy issue in Sub-Saharan African (SSA) countries. A large number of governments have, in the past years, adopted an exchange rate anchor based on a mix between a real and a nominal target. This means that they have sought to simultaneously achieve stable real and nominal exchange rates. Few papers have discussed such a strategy, as discussions of exchange rate regimes have focused on the choice between so-called corner solutions (pure floating or fixed exchange rates) and intermediate regimes (see, for instance, [START_REF] Qureshi | Hard or soft pegs? Choice of exchange rate regime and trade in Africa[END_REF][START_REF] Harrigan | Time to Exchange the Exchange-Rate Regime: Are Hard Pegs the Best Option for Low-Income Countries?[END_REF][START_REF] Husain | Exchange rate regime durability and performance in developing versus advanced economies[END_REF][START_REF] Calvo | The mirage of exchange rate regimes for emerging market countries[END_REF]. However, SSA countries face two challenges that can explain the new stylized facts that we highlight. First, they seek external competitiveness to achieve trade surpluses or limit their trade deficits (see UNCTAD , 2016a;[START_REF] Allen | The Effects of the financial crisis on Sub-Saharan Africa[END_REF]. Second, they seek to alleviate the burden of their external debt, a significant part of which is denominated in foreign currencies (see UNCTAD , 2016b). We propose a theoretical model that brings to light the factors that influence a policymaker's anchor strategy because many SSA countries face balance of payment crises due to high trade dependence (measured as the ratio of imports to GDP) and a lack of export diversification (see Nicita abd Rollo , 2015;[START_REF] Iwanow | Trade Facilitation and Manufactured Exports: Is Africa Different[END_REF]. This imbalance has fueled the growth of foreign indebtedness. 
To reduce the pre-eminence of imported goods in the household's consumption basket, policymakers can adopt trade controls by either raising customs taxes or restricting imports. Alternatively, they can target the internal real exchange rate to influence the consumer's trade-off between locally produced and imported goods. Targeting the real exchange rate by controlling fluctuations in the nominal exchange rate can be achieved through various types of intermediate exchange rate regimes that lie between free floating and strict fixed exchange rate regimes: hard and soft pegs, basket pegs, target zones, crawling bands, etc. Second, we hold that policymakers are also concerned about stabilizing international reserves. Foreign reserve are needed for three purposes. One is to respond to changes in the current account balance and to stabilize the nominal exchange rate. The second is to meet the country's foreign liabilities by servicing external debt. The third motivation is to maintain access to foreign borrowing, which might be difficult in cases when reserves are depleted (credibility and reputation). Regardless of the motivation, the accumulation of foreign reserves in the African countries has served as insurance against recurrent balance of payment crises [START_REF] Pina | The recent growth of international reserves in developing economies: A monetary perspectives[END_REF]. The determination of both the real exchange rate and foreign reserves results from consumers' decisions in the good markets (i.e., traded versus non-traded goods, demand for money and foreign borrowing) and from macroeconomic constraints such as interest rates on debt service or changes in foreign exchange rates. In this paper, we consider a deterministic version of the choice of an exchange rate regime, in the sense that we do not consider the exchange rate regime as an optimal response to a shock. Thus, the determination of the exchange rate regime is presented as the solution in a "state-space" model in which the policymaker's decision is conditioned by the state of the economy captured by the agent's decisions and the macroeconomic environment. With regards to the existing literature on the choice of exchange rate regimes, our proposed model is based upon three stands of the literature on exchange rates to which we add several new assumptions. First, the exchange rate policy is described by a target zone model, but unlike the usual literature, the exchange rate is derived neither from the assumption of monetary determination of the exchange rate nor from the assumption of purchasing power parity (PPP). We instead consider a general equilibrium-based model with tradable and non-tradable goods. This allows us to examine how monetary authority interventions in foreign exchange markets can influence the real and nominal exchange rates. Second, unlike many papers in the literature on exchange rate regimes, we do not consider inflation or output stabilization as the final goals of monetary policy. This is not to say that their roles are negligible. We develop our argument for given levels of inflation and economic growth. We focus instead on external imbalances considering that heavy indebtedness has become crucial in many African countries. We introduce debt into our model through the consumer's budget constraint by assuming that they hold two assets: money and an IOU asset indicating how much money is owed to foreign lenders. 
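For illustration, a stylized one-period budget constraint reflecting this two-asset structure could take the following form; the notation here is purely illustrative and simplified relative to the full model developed later in the paper, with P_t the domestic price level, c_t consumption, y_t income, M_t money holdings, B_t the stock of the foreign-currency IOU and S_t the nominal exchange rate in domestic currency per unit of foreign currency:
% Illustrative sketch only (notation introduced here, not the authors' exact model):
% uses of funds = sources of funds; (1 + i*_{t-1}) S_t B_{t-1} is the domestic-currency
% cost of servicing last period's foreign debt, and S_t B_t is new foreign borrowing.
\begin{equation*}
  P_t c_t + M_t + (1 + i^{*}_{t-1})\, S_t B_{t-1} \;=\; P_t y_t + M_{t-1} + S_t B_t .
\end{equation*}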
Third, many models of currency basket pegs assume that the assumption of uncovered interest rate parity (UIP) holds, which seems inconsistent for SSA countries (see [START_REF] Thomas | Exchange rate and foreign interest rate linkages for Sub-Saharan Africa floaters[END_REF]. Contrary to the usual UIP assumption, we add a premium to the interest rate incurred on foreign debt. This premium, by making debt service more costly, may limit indebtedness, which could in turn reduce the pressure on and the depletion of foreign reserves that are used to maintain the peg. The above assumptions are considered in a microeconomic-based model in which the real and nominal exchange rates are determined endogenously, taking into account some of the specific characteristic of the African countries: high levels of external debt, shallow domestic financial markets that do not allow capital mobility with developed countries, sharp price competition between locally produced goods and imported goods, and the existence of enclaves in some rent sectors (e.g., oil, minerals). We propose an empirical extrapolation of our findings to assess which domestic factors influence the decision to increase nominal exchange rate stability. Our econometric framework is based on a Markov switching model to allow regime-dependent dynamics of the exchange rate (flexibility versus stability). The remainder of the paper proceeds as follows. Section 2 presents general anchoring behavior in Sub-Saharan Africa, Section 3 presents the determinants of anchoring to both nominal and real exchange rates. Section 4 proceeds with the empirical application. Section 5 concludes. Preliminary assessment Policy mix: some evidence A country's exchange rate policy can be defined according to the nature of its peg. Two extreme cases are those in which a country fully fixes either the nominal or the real exchange rate. The in-between cases are defined as a policy mix in which a country combines both objectives by weighting the real and nominal pegs, reflecting a trade-off between decreasing the cost of external debt and enhancing external trade competition (see, for instance, [START_REF] Gervais | Current account dynamics, real exchange rate adjustment, and the exchange rate regime in emerging-market economies[END_REF][START_REF] Staveley-O'carroll | Exchange rate targeting in presence of foreign debt obligations[END_REF]. The fully nominal peg applies when authorities want to maintain nominal stability of the currency. This objective can be reached by fixing the nominal exchange rate against a single currency or a currency basket. The third option is to control exchange rate variation explicitly or implicitly within a specific margin. The monetary authority chooses to stabilize the variation of reserves in order to stabilize or decrease the cost of external debt. The fully real peg applies when authorities target variations in the real effective exchange rate (REER). This can be done through a free floating nominal exchange rate or through a managed float. By choosing a real peg, policymakers seek to increase external competition or fight inflation. Figure 1 describes these two choices by plotting couples of changes in the REER and foreign reserves for SSA countries over the period 2000-2015 (on a monthly average basis). 3The x axis displays changes in foreign reserves, while the y axis describes changes in the real exchange rate. 
Any point located on the latter means that the policymaker has adopted a pure real peg (since there are no foreign exchange interventions). The exchange rate is defined in such a way that a positive change in the real exchange rate reflects real depreciation, and vice-versa for real appreciation. Any point on the x axis illustrates a pure nominal peg policy, since the central bank uses foreign reserves to stabilize the nominal exchange rate. A policy mix of real and nominal pegs is illustrated by any point located in one of the four areas labeled I, II, III and IV (including the area delineated by the dotted lines). The results are displayed in Table 1. In area I, the policymaker favors an objective of external competitiveness over foreign debt cost reduction. Indeed, it seeks to depreciate the real exchange rate through nominal depreciation (by buying foreign currency). In area II, the policymaker faces a trade-off between external competitiveness and the cost of external debt. By allowing nominal exchange rate appreciation, it allows for the reduction of debt service costs. Appreciation of the nominal exchange rate leads to depreciation of the real exchange rate if the income effect of the changes in relative prices outweighs the substitution effect. In area III, the reduction of the cost of foreign debt is preferred and achieved through nominal appreciation of the currency, although this reduces competitiveness. Area IV illustrates both an increase in the cost of debt (due to nominal depreciation of the domestic currency, as reflected by an increase in foreign exchange reserves) and a loss of competitiveness as a consequence of real appreciation of the domestic currency. Finally, the dotted lines delineate small variations in the real exchange rate (within a margin of ±1 percent) and illustrate situations where a nominal peg was preferred by policymakers (through foreign exchange interventions). This strategy seems to characterize more than half of the country sample, including the WAEMU and CEMAC countries, as well as Botswana, Gambia, Malawi, Mauritius, Mozambique, Namibia, Swaziland, Tanzania and Uganda. Moreover, many of these countries are in area I and area IV, suggesting that their exchange rate regimes are very costly in terms of foreign debt cost and represent significant burdens in terms of their external stability. As a whole, Figure 1 shows that nominal pegs have detrimental effects on the external competitiveness and debt of many SSA countries (area IV), whereas others have adopted policy mixes with higher weights assigned to real exchange rate targeting and, thus, trade competitiveness (area I). When countries favor real depreciation through nominal depreciation of the domestic currency, they must then bear the costs of this strategy in terms of higher foreign debt servicing due to higher inflation and higher disbursements in the domestic currency.
Figure 1: Variation of the real effective exchange rate (real depreciation (+), real appreciation (-)) against variation of international reserves (nominal depreciation (+), nominal appreciation (-)), quadrants I-IV.
In this sub-section, we propose a simple approach to assessing African exchange rate policy over the period 2000-2017. Our aim is to illustrate the trade-offs that many African countries are facing.
That is, they must tighten their nominal pegs given transaction costs (especially foreign debt transactions) associated with greater exchange rate volatility or loosen their peg in the face of balance of payments and trade competitiveness issues. Here, we are interested in estimating the way African countries peg their currencies, which can provide important insight into the mixed peg strategy. As described in the previous sections, for a typical small open economy, a managed float (or any other managed arrangement) might be interpreted as the desire of policymakers to reduce downward pressure on their currency. In this view, the nominal exchange rate is considered a tool with which to control the real exchange rate and, therefore, trade competitiveness. As explained above, this strategy should imply an increase in foreign debt cost. The evidence regarding a de facto mixed peg policy (real and nominal) can be illustrated using an econometric approach allowing diversity in exchange rate regimes: pure floating, pegged to a basket of currencies and managed float. Our empirical investigation is based on an extended version of the following Frankel-Wei model: ∆e t = α + β 1 ∆e US D t + β 2 ∆e EUR t + β 3 ∆e GBP t + β 4 ∆e Y EN t + ε t (1) where ∆e is the first difference of the natural logarithm of the respective exchange rates (US dollar, euro, British pound, yen and the dependent currency) against an independent numeraire (i.e., the Swiss franc). Estimates of β reflect the respective weight of the right-hand side currencies in the implicit basket peg of the left-hand side currency. For instance, a β 1 coefficient close to one implies that fluctuations in the exchange rate are mainly explained by movements in the USD. In this case, the USD can be qualified as the main anchor currency. Obviously, many peg configurations are possible, depending on the exchange rate policy of each country. We propose a structural version of the [START_REF] Frankel | Yen bloc or dollar bloc? Exchange rate policies of the East Asian economies[END_REF] model to estimate the degree of nominal exchange rate flexibility as well as the weights on the USD, EUR GBP and CNY in the implicit basket peg of each African country. 4 To assess the relative importance of shocks from the USD, EUR, GBP and CNY, we estimate the following VAR model for each African currency: Y t = µ 0 + P k=1 Φ k Y t-k + ε t (2) where Y t represents the vector of variables (∆e US D ,∆e EUR ,∆e GBP ?∆e CNY ,∆e AFC ), Φ k is a 5 × 5 matrix, and µ 0 is a vector of constants.5 Specifically, we simulate shocks to the USD, EUR, GBP, CNY and the domestic currency to determine the respective weights of their innovations in the fluctuation of each African currency. 6 The share of home currency shocks is then interpreted as the degree of flexibility because it represents demand shocks to the currency that the authorities allow to be partially reflected in the exchange rate [START_REF] Frankel | Assessing China's exchange rate regime[END_REF]. Consequently, the higher this share, the lower the share of major currencies, suggesting that the home currency fluctuates more freely. The dataset obtained from the IMF IFS database covers monthly nominal exchange rates over the period from January 2000 to October 2017 for a sample of 23 African countries: Angola, Botswana, Burundi, Eritrea, Ethiopia, The Gambia, Ghana, Guinea, Kenya, Liberia, Madagascar, Malawi, Mauritius, Mozambique, Nigeria, Rwanda, São Tomé, Seychelles, Sierra Leone, South Africa, Uganda, and Zambia. 
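The variance-decomposition step can be sketched with standard time-series tooling. The fragment below is only an illustration, not the code used in the paper: the data are synthetic placeholders, the column names (dUSD, dEUR, dGBP, dCNY, dAFC) are ours, and statsmodels' VAR and FEVD routines stand in for whatever software produced Table 2.

```python
# Illustrative only: synthetic data and hypothetical column names stand in for the
# monthly log-differences (measured against the Swiss franc) used in the paper.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(scale=0.01, size=(200, 5)),
                    columns=["dUSD", "dEUR", "dGBP", "dCNY", "dAFC"])

# Cholesky ordering follows the column order: the USD is treated as the most
# exogenous currency, the African currency (AFC) as the most endogenous.
res = VAR(data).fit(maxlags=3, ic="aic")

# Forecast-error variance decomposition of the AFC equation at a 12-month horizon.
fevd = res.fevd(12)
afc = list(data.columns).index("dAFC")
shares = fevd.decomp[afc, -1, :]          # contributions of each innovation, summing to 1

weights = dict(zip(data.columns, np.round(shares, 3)))
print("implicit basket weights and own share:", weights)
print("degree of flexibility (own-shock share):", weights["dAFC"])
```

The Cholesky ordering simply follows the column order, reproducing the exogeneity assumption described above, and the own-shock share of the African currency is read as its degree of flexibility.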
7 Our sample was constrained by data availability, but we took care to reflect the various characteristics of exchange rate policies within the region (see Table 1). 8 The estimations are carried out for the following sub-samples: 2000:01-2008:09 (sub-sample 1) and 2010:01-2017:10 (sub-sample 2) to reduce the effects of the 2008-2009 financial crisis. Table 2 displays the variance decomposition for each African currency over the two sub-periods. For each sub-sample, each column gives the percentage of forecast error variance due to currency innovations. Accordingly, each row adds up to 100. The first four columns show the de facto implicit basket weights and the degree of exchange rate flexibility of each African currency before the 2008-2009 financial crisis. Angola, for instance, is mainly driven by country-specific shocks, which account for more than 57% of Angola's exchange rate variations. USD shocks explain more than 26% of Angola's exchange rate variations, while the euro, pound sterling and yuan have a limited impact (less than 10%). We can conclude that Angola's monetary authority manages, to some extent, the nominal exchange rate against the USD, although it allows a high degree of flexibility. As a whole, the GBP and CNY shares are both negligible, amounting to less than 9% in all African countries. The euro share is also quite low, except in Cabo Verde (43%), which has adopted a soft peg to a currency basket composed mainly of the euro and the USD. Furthermore, we find that the USD accounts for an important percentage of African currency variations. Indeed, the role of the USD was particularly high (more than 40%) during the 2000s in Eritrea, Ethiopia, Ghana, Kenya, Mauritius, Nigeria, Rwanda, São Tomé, Sierra Leone and Uganda. In the second sub-sample, we note that the USD plays an increasing role in many countries. For instance, the USD weight has more than doubled in seven countries (Angola, Botswana, Burundi, Guinea, Liberia, Madagascar and Sierra Leone), while for 10 countries, the weight has increased by at least 37%. The USD weight is now higher than 50% for 12 countries compared to 8 countries in the first sub-sample. Conversely, the USD weight has decreased significantly for only four countries (Ethiopia, Malawi, Nigeria, São Tomé). These higher USD shares have resulted in smaller shares of country-specific shocks, although the degree of exchange rate flexibility remains significant (higher than 40%) for 10 countries and higher than 70% for 3 countries (Gambia, Zambia and Malawi). The euro displays a share higher than 50% for Cabo Verde and Mauritius. As a whole, the CNY, GBP and EUR shares remain weak. This evidence is in line with the fear of floating phenomenon. As African countries move towards floating exchange rate regimes, they are strongly affected by commodity shocks. Accordingly, they tend to avoid currency appreciation by pegging their currencies to the commodity currency, i.e., the USD [START_REF] Chan | Commodity currencies[END_REF]. Furthermore, African countries tend to favor managed arrangements to minimize transaction costs associated with greater exchange rate flexibility, such as debt securities transactions. As transaction costs increase with exchange rate volatility, it is optimal for countries to tighten their pegs on the USD. However, they tend to preserve a degree of exchange rate stability in order to address balance of payments issues and depreciation pressure and, thus, avoid real exchange rate misalignment (see [START_REF] Gnimassoun | The importance of the exchange rate regime in limiting current account imbalances in Sub-Saharan African countries[END_REF]).
This duality in exchange rate behavior clearly shows that African countries face a trade-off between foreign debt costs and trade competitiveness. In the next section, we propose a theoretical framework that explains how African countries can cope with these conflicting objectives. Notes: The optimal lag lengths were selected according to the Akaike criterion. The lag length ranges are 1 and 3, depending the country and the sample period. Innovations ε US D ε EUR ε GBP ε CNY ε AFC ε US D ε EUR ε GBP ε CNY ε AFC Angola 26, Theoretical model General features To represent these African economies, we consider a small open economy (the domestic country) vis-à-vis the rest of the world. The latter is divided into two areas: a euro zone and a dollar zone. The domestic country may choose to peg its currency to the euro and the US dollar. Although we limit the basket peg to two currencies for simplicity, our arguments are valid for a higher number of currencies in the basket.?? The central bank chooses from among a variety of exchange rate regimes. In the case of an intermediate regime, we consider a soft peg in the sense that the policymaker allows the exchange rate to fluctuate around a central parity within a given band. Our goal is to examine the main motivations that drive an African country to adopt such an intermediate regime rather than a corner solution (pure float or hard peg). An essential feature of the model is that the domestic country issues debt denominated in foreign currencies and therefore accumulates foreign liabilities. The central bank decides the optimal weights on the euro and the US dollar in the basket and chooses the degree of flexibility of the exchange rate. It does so while seeking to minimize the variance of the internal real exchange rate and international reserves. The model is built upon several assumptions that characterize the situations of developing or emerging African countries: i) Their foreign debt is high and a pure float regime may overweight the debt burden if the domestic currency depreciates. For this reason, limiting exchange rate fluctuations may be advantageous. ii) In many articles, the optimal design of basket-peg arrangements relies on the UIP assumption to relate the domestic interest rate to foreign interest rates. This assumption, however, is at odds with empirically low African capital market integration with international financial markets. One reason for deviation from UIP is the existence of additional lending costs due to the scarcity of funds in these domestic financial markets. We use this assumption in our model. iii) In addition to debt problems, we examine the role of the real exchange rate as a factor in external imbalances when people consume both locally produced (non-tradable) goods and imported goods (tradable) that are imperfect substitutes. We consider an economy populated with households that are identical and own firms. They consume locally produced goods (non-tradable goods and services) and tradable goods imported from abroad. In the domestic country, there are three categories of firms: i) firms in the export sector sell commodity goods abroad whose prices and quantities sold are exogenous and fixed by the rest of the world; ii) firms in the tradable imported good sector import commodity goods and sell them in the domestic market; and iii) finally, in the non-tradable goods sector, firms hire workers to produce domestic goods and supply domestic services. 
In addition to the real sector, the model includes financial markets and a monetary sector. Demand for money is introduced through a cash-in-advance constraint. Moreover, we assume that the exchange rate between the domestic currency and the foreign countries is not given by the UIP condition (which amounts to assuming that there is not perfect capital mobility between the domestic country and its foreign partners). The central parity of the domestic currency is described by a peg to a two-currency basket and since it is a small country, foreign interest rates are exogenous. Net borrowing is assumed to be positive, so that consumers hold neither foreign assets nor domestic assets. They issue domestic assets that are held by foreign countries. The model The households We consider a cash and credit economy. The representative household gains utility from the consumption of a nontradable good C Nt and a tradable (imported) good C T t .9 The consumer-worker obtains disutility from working (l t is the labor supply) and maximizes the present discounted value of utility: 10 max E 0 [Σ ∞ τ=0 β t+τ U(C Nt+τ , C T t+τ , l t+τ )], (3a) U(C Nt , C T t , l t ) = (1 -α)lnC Nt + αlnC T t -γl t , (3b) where β = (1 + ρ) -1 is the discount factor, ρ is the time preference rate, and we assume that 0 < α < 1 and γ > 0. The non-tradable good is purchased using cash m t . The tradable good is purchased on credit. The households hold two assets: money and IOU bonds. They borrow money from abroad for a one-period maturity and in turn hold a bond indicating the amount of debt owed. At time t, the amount owed in foreign currency is -B t+1 (with B t+1 > 0). We use the index t + 1 to indicate that this is the amount borrowed at time t to be reimbursed at time t + 1. The equivalent amount in terms of the domestic currency is calculated by taking into account the "composite" nominal exchange rate prevailing at the time of reimbursement -e a t+1 B t+1 . e t is the price of the domestic composite exchange in terms of the foreign currencies to be defined below (an increase in e t means depreciation of the domestic currency). e a t+1 is the expectation at time t of the nominal exchange rate that will prevail at time t + 1. The term "composite" is used because there are multiple foreign lenders (here, the US and European countries) and the domestic country may adopt a basket peg regime. Finally, we assume that the IOU asset costs r t , the interest rate paid on foreign debt between time t -1 and time t. The household faces cash-in-advance and budget constraints for t = 1, . . . , ∞: P Nt C Nt ≤ m t -(1 + r t )e t B t + e a t+1 B t+1 , (4) P Nt C Nt + P T t C T t -e a t+1 B t+1 + m t+1 ≤ w t l t + [m t -e t B t (1 + r t )] + Π t + Ω t , (5) where Ω t = P xt X t , Π t = (pro f Nt + pro f T t ). P Nt is the unit price of the non-tradable good at time t. P T t is the unit price of the tradable good. We assume that the households own the firms. w t is the wage rate. The household's income has several components: labor income w t l t , domestic profits Π t , and export revenues Ω t . pro f Nt and pro f T t (defined below) are the profits from activities in the non-tradable goods sector and the tradable goods sector, respectively. P Xt X t is the value of commodity exports invoiced in the domestic currency. P Xt is the unit export price, and X t is the volume of exports. Exports refers here to a commodity good extracted from the soil and directly sold abroad (e.g., primary goods, oil, minerals). 
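As a quick check on the accounting in constraints (4) and (5), the sketch below evaluates both inequalities for arbitrary numbers; the values and function names are illustrative only and are not part of the model.

```python
# Hedged illustration of the cash-in-advance constraint (4) and the budget constraint (5).
# All numerical values are arbitrary placeholders.
def cash_in_advance_ok(P_N, C_N, m, r, e, B, e_next, B_next):
    # Eq. (4): P_N*C_N <= m - (1+r)*e*B + e_next*B_next
    return P_N * C_N <= m - (1 + r) * e * B + e_next * B_next

def budget_ok(P_N, C_N, P_T, C_T, e_next, B_next, m_next,
              w, l, m, e, B, r, profits, exports):
    # Eq. (5): consumption spending plus money carried over, net of new borrowing,
    # cannot exceed labor income, remaining cash after debt repayment, profits and export revenues.
    lhs = P_N * C_N + P_T * C_T - e_next * B_next + m_next
    rhs = w * l + (m - e * B * (1 + r)) + profits + exports
    return lhs <= rhs

print(cash_in_advance_ok(P_N=1.0, C_N=50, m=80, r=0.05, e=1.2, B=20, e_next=1.25, B_next=25))
print(budget_ok(P_N=1.0, C_N=50, P_T=1.3, C_T=30, e_next=1.25, B_next=25, m_next=85,
                w=1.0, l=60, m=80, e=1.2, B=20, r=0.05, profits=10, exports=40))
```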
The household optimization problem is to choose a sequence C T t , C Nt , m t , l t and B t to maximize Eq. (3a) subject to constraints (4) and ( 5), taking the composite exchange rate e t , prices P Xt , P T t , P Nt , w t , profits pro f Nt and pro f T t , volume of exports X t , initial stock of debt B 0 and cash m 0 as given. We define ϕ t = ((P Nt )/(P T t )) as the real exchange rate at time t. From standard first-order conditions, we obtain the following relationships. The shares of non-tradable and tradable consumptions of total consumption are given by: (C Nt /(C Nt + C T t )) = (1 -α)/((1 -α) + αϕ t (1 + r t )), (C T t /(C Nt + C T t )) = (α(1 + r t )ϕ t )/((1 -α) + αϕ t (1 + r t )). (6) A higher interest rate makes the current tradable good less expensive relative to the non-traded good (because debt becomes more costly) and, thus, increases the current consumption of this good (a substitution effect). At the same time, the increase in the interest rate reduces the consumer's total income, which leads to less consumption of both goods (an income effect). Because there is a distinction in the model between cash and credit goods, any change in a given price has an impact on the credit good through both substitution and income effects, while the impact on the cash good comes only from income effects. As a consequence, the non-tradable good share of total consumption diminishes. Similarly, an increase in the real exchange rate reduces the non-tradable good share of total consumption. Here, P t Yt = P T t C T t +P Nt C Nt denotes total consumption spending on cash and credit goods, and noting that the only money held will finance cash goods (m t = P Nt C Nt ), the velocity of money is v t = P t Y t /m t = 1 + (1/ρ t )(C T t /C Nt ) = 1 + (α/(1 -α))(1 + r t ). (7) Eq. ( 7) implies that real money demand varies negatively with the interest rate (for instance, a decrease makes holding the IOU asset or, equivalently, selling domestic assets abroad more attractive than holding money). Firms in the non-tradable goods sector We assume that the production function in the competitive non-tradable goods sector is linear in labor demand: Y Nt = L t . (8) Capital is fixed (and normalized to 1), and a similar assumption applies to productivity. The representative firm maximizes the following profit with respect to Y Nt (L t is labor demand): pro f Nt = P Nt Y Nt -w t L t . (9) Profit maximization implies P Nt = w t . (10) At the competitive equilibrium, profits equal zero. Firms in the tradable goods sector The tradable goods sector consists of two representative firms. Firm 1 produces and exports commodity goods at a price that is set by international markets P Xt . The volume of exports X t is also determined by foreign demand and is exogenous. We assume that export activities are performed by self-employed people who are paid the income from their exports P Xt X t . Firm 2 imports goods and sells them in the domestic markets. People in these firms are also self-employed. Self-employment activities are common in the tradable goods sector in Africa, since small firms in this sector belong to diaspora communities (e.g., Lebanese, Indian or, more recently, Chinese merchants). The import sector is described by a representative monopolistic firm. The latter sells a good imported from outside in the domestic market. Profits are: pro f T t = (P T t -e t P Mt )Y T t , Y T t = C T t . (11) The firm in the import sector faces a demand for traded goods and sets its price P T t . 
P Mt is the price of imports denominated in foreign currencies. Exchange rate pass-through, that is, the way in which the nominal exchange rate affects import prices and the prices of tradable goods, depends upon the exchange rate regime. In a hard peg regime (e t is fixed), there is full pass-through. In a floating exchange rate (partial or free float) regime, nominal exchange rate fluctuations can dampen the effects of changes in import prices. The monopoly chooses to maximize Eq. (11) subject to the demand function for traded goods, resulting in an optimal price: P T t = (1 + µ t )e t P Mt , µ t = 1/(|ε t | -1), ε t = (P T t (∂C T t )/(∂P T t ))/C T t = -(α/γ)ϕ t Y Nt , ε t < -1. (12) We assume that ε t < -1, since traded and non-traded goods are imperfect substitutes.
Exchange rate regimes
SSA countries may use some forms of intermediate exchange rate regimes (soft pegs or managed floats), because their capital accounts are still not completely open to international capital flows. They borrow money in international capital markets, but their own domestic financial markets are still shallow and illiquid. An intermediate regime (between the corner solutions of a hard peg or a free float) can be a first step towards deeper financial integration with the rest of the world through a mechanism similar to the former European Exchange Rate Mechanism (ERM) (fixed exchange rate currency margins or a semi-pegged regime). Exchange rate movements depend upon the current account. The balance of payments and foreign reserves are the sum of the trade balance, debt service and foreign liabilities (positive official transfers): P Xt X t -e t P Mt Y T t [1 -(α/γ)ϕ t Y Nt ] -r t e t B t + e a t+1 B t+1 = -∆RES t , (13) where ∆RES t denotes changes in net international reserves at time t. Moreover, the rate of nominal exchange rate adjustment is limited by the central bank's interventions in the exchange rate market. The resulting changes in foreign reserves are as follows: ∆RES t = -θ 1 λ t + ∆L t + ∆U t , λ t = e t -e c t . (14) Eq. (14) is useful for studying policymakers' choices in several situations.
• θ 1 = 0 and ∆L t = ∆U t = 0. This case corresponds to a free floating regime. The central bank does not intervene in the exchange rate market to stabilize the nominal exchange rate. In this case, λ t → ±∞ (the exchange rate is allowed to deviate from the central parity with no limit).
• θ 1 = 0 and ∆L t ≠ 0, ∆U t ≠ 0. This case corresponds to a managed float regime. Foreign reserves are used to monitor exchange rate movements within a band [e min , e max ]. The central bank sets upper and lower limits to the exchange rate changes by defining a target band. This target remains unchanged (θ 1 = 0 means that there are no intra-band interventions; therefore, ∆L t = ∆L < ∞, ∆U t = ∆U < ∞).
• θ 1 → ∞ and ∆L t → ∞, ∆U t → ∞. This case describes a hard peg, a regime characterized by unlimited central bank interventions to maintain the fixed exchange rate (in this case, λ t = 0).
• 0 < θ 1 < ∞ (crawling band). This case is similar to the managed float regime, but this time, ∆L t and ∆U t are time varying and remain bounded.
Interest rates
We now present our assumptions about the determination of the domestic interest rate r t . Since households are constrained in their domestic markets and must borrow money from abroad, they have to incur an additional cost to service their debt. This additional cost represents an amount they have to pay to foreign lenders for making foreign funds available.
This additional cost represents an interest rate margin for the lenders over the interest rate that would correspond to UIP. Using ξ as the interest rate differential, we write ξ us t = (r t -r us t ) -(E t e us t+1 -e us t ), ξ us t > 0, (15a) ξ euro t = (r t -r euro t ) -(E t e euro t+1 -e euro t ), ξ euro t > 0, (15b) where ξ us t and ξ euro t would equal zero under the UIP assumption, which is not our assumption. One could also imagine that funds borrowed from abroad are not given directly to the households but to a bank (a subsidiary of an international financial company could borrow the funds at an interest rate corresponding to that of the UIP and lend them at a higher interest rate). In that case, UIP would not be satisfied because domestic financial markets are thin, illiquid, and shallow, so the domestic interest rate is higher than foreign interest rates in international capital markets. The rate is so high that even with appreciation of the foreign currency, the expected rate of appreciation would not be enough to offset the savings accrued from the positive interest rate differential. E t e us t+1 and E t e euro t+1 are the expectations at time t of the domestic exchange rate relative to the US dollar and the euro, respectively, at time t + 1. Assuming perfect capital mobility between the US and the euro area implies that ξ us t = ξ euro t = ξ t . (16) This means that there is no borrowing-cost arbitrage by borrowers between the US and European financial markets. Otherwise, at a given period t, the households would choose to issue foreign debt in the country with the lowest ξ t . We thus write r t = r us t + (E t e us t+1 -e us t ) + ξ t = r euro t + (E t e euro t+1 -e euro t ) + ξ t , (17) where ξ t can be thought of as either a finance premium that compensates the foreign lender for bearing verification costs due to informational asymmetries or as a premium due to the possibility of default on the debt.
Ranking the exchange rate regimes
The choice of the exchange rate regime
Policymakers are concerned about stabilizing fluctuations in the real exchange rate and in foreign reserves. We solve the model and obtain the following expressions for the real exchange rate and foreign reserves (see the details in Appendix (A)): ϕ t = D t (1/e t ) = D t (θ 1 -C t )/(A t + θ 1 [λ 1 e us t + (1 -λ 1 )e euro t ]), (18) and ∆RES t = {θ 1 [A t + C t (λ 1 e us t + (1 -λ 1 )e euro t )] + X t [θ 1 -C t ]}/[θ 1 -C t ], (19) where A t , D t , C t and X t are defined by: A t = [αw t Y Nt Y T t ]/[δ(1 -α)(Y t -Y Nt )] + (1 + δ)P Xt X t + (1 -δω)P Nt Y Nt -m t+1 + X t , (20a) D t = w t P Mt (1-α) γ δ (Y t -Y Nt ) , (20b) C t = B t -P Mt Y T t (1 -Y Nt )(1 + δω), (20c) X t = ∆L t + ∆U t , (20d) δ = 1/(1 + r t ), ω = 1 + (1 -α)/(1 + r t ). (20e) We now consider the main determinants of A t , D t , C t and X t . They describe the macroeconomic environment that influences the policymaker's optimal choice to be studied in the next sections. The first component of A t can be rewritten as: [αw t Y Nt Y T t ]/[((1-α)/(1+r t ))(Y t -Y Nt )] = (γ/2)[µ t /(1 + µ t )]e t P Mt Y T t . (21) The key variable is µ t /(1+µ t ), a proxy for the degree of pass-through from the import price to the price of tradable goods. If µ t = 0, the demand for traded goods is price inelastic, and firms in the traded goods sector can fully pass through changes in import prices to the prices of the tradable goods. If |µ t | ≠ 0, pass-through is incomplete. The ratio µ t /(1+µ t ) can be thought of as a measure of the degree of trade integration with the rest of the world (an indicator of competition in the domestic goods market).
Equation (18) shows that a higher A t stemming from this first component is consistent with a lower real exchange rate ϕ t . The remainder in Eq. ( 20a) can be written as follows: P Xt X t + P t Y t 1 + r t open t - α 1 -α P Nt Y Nt ν t ν t -1 + 1 + 1 -α (1 + r t ) 2 P Nt Y Nt -m t+1 + X t , ( 22 ) where open t is the degree of trade openness defined as the sum of exports P Xt X t and imports P T t Y T t as a share of GDP P t Y t . One expects higher openness to be associated with lower prices of non-tradable goods and therefore with a lower real exchange rate. The ratio ν t /(ν t -1) is an index of the velocity of money. Insofar as non-traded goods are cash goods, high money demand (high ν t ) indicates a preference for these goods relative to traded goods. Conversely, when ν t → 0, low money demand reflects a preference for tradable goods. An increase in ν t is associated with a higher real exchange rate ϕ t , since higher money demand increase the relative price of non-traded goods. A higher ν t means a lower A t and, by Eq. ( 18), this implies a higher ϕ t . The nominal interest rate represents the cost of debt servicing. An increase implies a decrease in the demand for the credit (tradable) good and therefore a higher real exchange rate. From Eq. ( 17), we deduce that r t depends upon the nominal foreign interest rates, the financial risk premium and the expected changes in the nominal exchange rate of the domestic currency against the US dollar (or the euro). As for the other variables, C t has a one-to-one relationship with the value of debt denominated in a foreign currency. Higher debt resulting from greater borrowing allows the greater consumption of traded goods (credit goods), thereby implying an increase in the relative price of this good and therefore a decrease in the real exchange rate. X t is the band width of the "composite" nominal exchange rate e t . Narrowing the band for permissible exchange rate fluctuations can prevent excessive nominal exchange rate appreciation and depreciation. We therefore expect any change in X t to be negatively correlated with changes in e t . From Eq. ( 18), we expect a positive correlation with any change in the real exchange rate. Using Eq. ( 20b), it can be shown that D t = 2(1 + µ t ) α γ C Nt . ( 23 ) D t is therefore a function of the price elasticity of the demand for the traded good. Its impact on the real exchange rate is similar to that of the ratio µ t /(1 + µ t ) examined above. Solving the model for the exchange rate and foreign reserves From Eqs. ( 18) and ( 19), we obtain ϕ t = D t X t -∆RES t + θ 1 λ 1 e us t + (1 -λ 1 )e euro t . (24) This expression of the real effective exchange rate (ϕ t in level) explains the choice of the country's anchor regime according to two important variables: a = D t and b = X t + θ 1 λ 1 e us t + (1 -λ 1 )e euro t . D t is a function of the monopolistic power in the traded goods market and can be considered a proxy for the degree of trade integration with the rest of the world (see Eq. ( 23)). A positive value of D t indicates a relatively high degree of integration reflected by lower monopolistic power in the traded goods market. Conversely, a low degree of trade integration is captured by a negative value of D t . Regarding b, in the denominator, the expression captures the degree of exchange market intervention. Frequent interventions, suggesting a harder peg, are associated with positive values of b, while infrequent interventions are associated with negative values. 
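The role of the signs of a and b in Eq. (24) can be illustrated numerically; the sketch below is a pure illustration and the parameter values carry no calibration content.

```python
# Evaluate Eq. (24): phi_t = a / (b - dRES), with a = D_t and
# b = X_t + theta_1*(lambda_1*e_us + (1 - lambda_1)*e_euro).
# Illustrative values only; they are not calibrated to any country.
import numpy as np

def real_rate(a, b, d_res):
    return a / (b - d_res)

d_res = np.linspace(-0.5, 0.5, 5)   # changes in reserves (nominal depreciation when positive)
cases = {"high integration, frequent interventions":   (+1.0, +1.5),
         "high integration, infrequent interventions":  (+1.0, -1.5),
         "low integration, frequent interventions":     (-1.0, +1.5),
         "low integration, infrequent interventions":   (-1.0, -1.5)}

for label, (a, b) in cases.items():
    print(label, np.round(real_rate(a, b, d_res), 2))
```

Plotting ϕ t against ∆RES t for these four sign combinations traces the four branches discussed around Figure 2 below.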
Figure 2 shows appreciation or depreciation of the real effective exchange rate with regards to changes in reserves and, thus, the nominal exchange rate. The increase in changes in reserves reflects nominal exchange rate depreciation. The dotted lines show how the curve moves after an increase in b. The upper two cases show countries with high integration (a > 0). These types of countries and the two lower cases show countries with low integration (a < 0). When countries have low integration with the rest of the world (a < 0), the curve shows that an increase in changes in reserves (or slowing of the decrease in changes in reserves) would cause appreciation of the real effective exchange rate. As changes in reserves increase, the curve decreases, reflecting the policy mix anchor. Indeed, in a real anchor policy, the monetary authority would depreciate the rate sufficiently to cause real depreciation. This is reflected in the case where countries have high integration with the rest of the world (a > 0). Therefore, the mixed policy case is constrained by the integration degree and the amount of local monopolistic power. The lower world trade integration is and the higher monopolistic power, the higher the probability that a country chooses a policy mix anchor. Finally, the sign of b does not impact the choice of anchor, but it does impact the degree of the anchor. For a given sign of a, the different signs and values of b are reflected by the same curves. Since b reflects the degree of floating or hardness of a peg in a given country, this explains that the choice of exchange rate regime does not affect the choice of the policy mix anchor. This confirms what we have found in Figure 1, where multiple countries undertake a policy mix disregarding their exchange rate regime. In sum, our model explains that the choice of policy anchor is made separately and not as suggested in the previous literature (see, for instance, [START_REF] Savvides | Real exchange rate variability and the choice of exchange rate regime by developing countries[END_REF]. Authorities start by choosing the nominal anchor policy according to the country's specificities: the degree of pass-through, money velocity, openness, debt denominated in foreign currency, interest rate and the targeted exchange rate band. The stronger the impacts of openness, velocity, exchange rate pass-through, debt and interest rate, the more likely authorities are to choose a harder peg. Once the nominal anchor is set, authorities choose the real anchor with regards to the degree of trade integration and the degree of intervention on the exchange market. In the next section, we explore whether these domestic fundamentals impact exchange rate stability. at which factors are statistically significant in the first regime. For instance, assuming that the coefficients associated with external debt are positive and significant in the first regime implies that an increase in external debt leads to greater exchange rate stability against the anchor currency. Negative coefficients would suggest that a decrease in external debt is associated with a tighter peg. Finally, a non-significant coefficient implies that the variable does not affect exchange rate stability. The estimations are reported in Tables 3 and4, and the plots of regime probabilities are shown in Figures 3 and4. 
When the smoothed probability of state 1 is greater than 0.5, the exchange rate is considered to be in the low flexibility regime (i.e., episodes during which the monetary authority increases exchange rate stability against the USD). First, we compute a likelihood ratio to check whether exchange rate volatility evolves according to a non-linear process. The constrained model is an OLS version of Eq.( 25), where all coefficients are linear. The results clearly indicate that exchange rates evolve through two distinct regimes in which the degree of floating differs significantly. This confirms our previous empirical findings of the trade-off between tightening and loosening the nominal currency peg. Moreover, the estimates provide evidence that the influence of the explanatory variables is regime dependent. We find that the constant terms are lower in the first regime (i.e., α 1 < α 2 ), implying that regime 1 corresponds to episodes where exchange rate fluctuations against the USD are weaker. Concerning global factors, we find that non-energy price volatility is significant in regime 1 and positively impacts exchange rate volatility for only four counties (Botswana, Cabo Verde, Ghana and South Africa). This means that higher non-energy price volatility strengthens the incentives of these countries to stabilize their exchange rates against the USD. One explanation is that lower exchange flexibility allows these countries to reduce transaction costs linked to exchange rate fluctuations. Accordingly, increasing exchange rate stability against the USD allows them to preserve revenues for export firms. For other countries, a non-significant effect could be explained by the fact that the commodities they export are under-weighted in the index. 15 However, we find that energy price index volatility is significant for most of these countries. The negative sign suggests that lower energy price volatility leads to a decrease in exchange rate volatility against the USD. This finding is in line with [START_REF] Cashin | Commodity currencies and the real exchange rate[END_REF], who identify co-movement between world commodity prices and exchange rates among commodity-exporting countries. In the same way, when the VIX volatility index decreases, fluctuations of African currencies against the USD are weaker. In regime 2, we find that the index is positive, implying that during episodes of financial stress, exchange rates become more volatile. For the interest rate differential in regime 1, we find a positive and significant coefficient for all countries except Cabo Verde. A possible explanation for this is that when domestic rates increase faster than US rates, domestic borrowing becomes more costly. This induces a rise in external debt and provides a stronger incentive to stabilize the exchange rate. INSERT TABLE 3 Regarding domestic factors, we find that many coefficients are significant in regime 1 but non-significant in regime 2. This is strong evidence that domestic factors are important drivers of exchange rate stability, as suggested by our theoretical results. Starting with the external debt-to-GDP ratio, previous studies demonstrate that debt levels would have different effects on countries (see, for instance, [START_REF] Meissner | Why do countries peg the way they peg? The determinants of anchor currency choice[END_REF]. 
On the one hand, a large body of literature explains that since these countries are severely indebted, an increase in the debt level would increase debt service payments, which exert pressure on budgetary resources and crowd out domestic investment. This causes a decline in economic growth and puts downward pressure on the currency, which eventually increases the external debt burden. Intuitively, the more liabilities are denominated in a foreign currency, the greater is propensity to peg to that currency. On the other hand, an increase in debt can induce investment productivity, leading to an increase in GDP and currency appreciation. Our results suggest that the first view seems more intuitive in the African context. Indeed, we find that external debt is significant in regime 1 for almost all countries (except Cabo Verde and Sierra Leone). This suggests that any increase in external debt implies that African countries are more likely to peg their currency to the USD. For Burundi, the negative sign can be explained by the fact that external debt has sharply decreased since 2004. This has allowed Burundi to reduce downward pressure on its currency and, thus, reduce exchange rate volatility against the USD. For the velocity of money, quantitative money theory suggests that any increase in velocity would put some upward pressure on the value of the currency (Agbeyege et al. , 2004;[START_REF] Bleaney | The impact of terms of trade and real exchange rate volatility on investment and growth in sub-Saharan Africa[END_REF]. With an increase of the velocity of money, agents will choose to increase their investments and to spend more on consumption, which will lead to higher economic growth and inflationary pressure. For African countries, the results are mixed, as the velocity of money is significant in about half of the cases. One potential explanation is the greater financial development of some African countries, such as Botswana, South Africa and Cabo Verde (IMF , 2016b). Any increase in the velocity of money would positively impact investments, growth and inflation, which would encourage authorities to contain upward pressure on the exchange rate. The more financially developed the country, the greater the impact. INSERT TABLE 4 For the degree of openness indicator, we expect that the more integrated these countries are with the rest of the world, the more they accumulate foreign reserves (for net exporters), thus putting upward pressure on their currencies. In such a case, authorities face a trade-off between letting the currency appreciate to alleviate the external debt burden (at the expense of the real exchange rate and trade competitiveness) and intervening on the foreign exchange market to stabilize the exchange rate. For net importers, the dilemma is reversed. We observe that the degree of openness significantly explains exchange rate stability for all countries except Liberia and South Africa. In the latter cases, this is explained by the high South African exchange rate and its limited foreign exchange intervention. However, a positive and significant coefficient implies that when the degree of openness increases, exchange rate stability increases as well because authorities are more willing to intervene in the foreign exchange market to offset pressure on the domestic currency (Burundi, Cabo Verde, Ethiopia, Ghana, Guinea Kenya and Madagascar). 
When the sign is negative, any decrease in the degree of openness contributes to increased exchange rate stability without requiring authorities to intervene in the foreign exchange market. This is particularly true for Guinea and Nigeria for which the degree of exchange rate flexibility was found to be large. Finally, a higher degree of ERPT would induce higher imported inflation, thus putting downward pressure on the domestic currency. Accordingly, monetary authorities are more likely to intervene in the foreign exchange market to avoid currency depreciation and an increase in external debt. If trade competitiveness prevails, authorities would let the currency depreciate. The latter argument seems consistent for Burundi Ethiopia, Madagascar and Mauritius, since the degree of ERPT does not impact exchange rate stability. A positive sign implies that monetary authorities will tighten their nominal peg when the degree of ERPT increases to contain inflationary pressure (Botswana, Liberia and Nigeria). For the other countries, the degree of ERPT is negatively linked to exchange stability. Conclusion In conclusion, when policymakers choose to anchor the exchange rate it is essential to consider whether they are anchoring real, nominal or both rates. Anchoring the nominal exchange rate occurs by stabilizing variation in reserves in order to 1) respond to current account imbalances, 2) meet foreign liabilities by servicing external debt 3) and maintain access to foreign borrowing. Anchoring the real exchange rate through controls of the nominal exchange rate has the objectives of influencing the consumer trade-off between locally produced and imported goods and promoting external competitiveness. Having a policy mix anchor would satisfy simultaneously these objectives while promoting both internal and external stability. Our results suggest that states follow a two-step strategy. Policymakers choose their exchange rate regime (nominal target) with regards to the degree of openness, the degree of debt to GDP denominated in foreign currencies and the fluctuation margin band of the exchange rate. Then, authorities choose the real anchor according to the extent of trade integration with the rest of the world and the degree of intervention in the exchange market. The impact of the strategy therefore depends on the behavior of the real anchor with regards to the nominal anchor constrained by the amount of monopolistic power. We find that the strategic behavior of Sub-Saharan African countries is not efficient in terms of reducing external imbalances and that they will always face a trade-off between objectives. The main cause of such behavior is a high degree of monopolistic power that is explained either by diaspora communities controlling some services sectors or institutions controlling their natural resources. To pursue a more efficient strategy in terms of reducing external imbalances, Sub-Saharan African countries need to create more competitive markets. where (using the consumer's budget constraint and Eqs. ( 6) and ( 7) to compute the demand for money): e a t+1 B t+1 = 0. (32) Setting: 33) and assuming that the discriminant is null, we have . a = 1 1 + r t , b = 1 + 1 -α 1 + r t ( (34) Defining: A t = αw t Y Nt Y T t a(1 -α)(Y t -Y Nt ) + (1 + a)P Xt X t + (1 -ab)P Nt Y Nt -m t+1 + X t , (35) C t = B t + P Mt Y T t (1 -Y Nt )(1 + ab), (36) We rewrite e t = A t + θ 1 [λ 1 e us t + (1 -λ 1 )e euro t ] (θ 1 -C t ) . Setting: D t = w t P Mt (1-α) γ a (Y t -Y Nt ) . 
( 38 ) We obtain ϕ t = D t /e t , as in Eq. ( 18).
Figure 2: Policy behavior choices
Table 1: Policy mix and de facto exchange rate arrangements in SSA (IMF, 2016a)
Case I: Nominal depreciation is accompanied by real depreciation. Country promotes external competitiveness over lower cost of foreign debt. Floating: Madagascar, Mauritius, Kenya, Uganda, Zambia, Tanzania and Sierra Leone. Conventional peg: Guinea-Bissau (WAEMU), Central African Rep., Chad, Rep. of Congo, Gabon (CEMAC), Cabo Verde, São Tomé and Eritrea. Other managed arrangement: Angola, Liberia, Rwanda. Crawl-like arrangement: Ethiopia. Stabilized arrangement: Burundi, Nigeria.
Case II: Nominal appreciation is accompanied by real depreciation. Country achieves both objectives, simultaneously lowering the cost of debt and promoting external competitiveness. Other managed arrangement: Guinea.
Case III: Nominal appreciation is accompanied by real appreciation. Country promotes lowering the cost of foreign debt over external competitiveness. Conventional peg: Senegal (WAEMU).
Case IV: Nominal depreciation is accompanied by real appreciation. Country disregards both the cost of foreign debt and external competitiveness. Floating: Ghana, Malawi, Mozambique, South Africa, Seychelles. Conventional peg: Benin, Burkina Faso, Ivory Coast, Niger, Togo (WAEMU), Cameroon (CEMAC), Comoros, Lesotho, Namibia and Swaziland. Crawling peg: Botswana, the Gambia.
2.2. Exchange rate policy in SSA: a simple empirical model
Table 2: Variance decomposition of forecast errors as a % of the total variance of African exchange rates (sub-samples 2000:01-2008:09 and 2010:01-2017:10).
Table 3: Estimates of regime-dependent correlations among exchange rate, domestic and global factors (sample 1). Columns: Botswana, Burundi, Cabo Verde, Ethiopia, Ghana, Guinea, Kenya. Notes: *,**,*** denote significance at the 10%, 5% and 1% levels, respectively. The standard errors of parameters are reported in parentheses (.), while p-values are displayed in brackets [.]. The LR aims to test whether the Markov switching model outperforms the simple linear regression model. The LR test statistic is computed as follows: LR = 2 × [LL MS (Θ) -LL OLS (Θ)], where Θ indicates the parameters of the model. The null hypothesis is that the MS model does not fit significantly better than the OLS model. The estimates DEBT, OD, PT, V, VIX, NE, EN and I correspond to the correlations associated with external debt, trade openness, ERPT, velocity of money, VIX, non-energy price index, energy price index, and interest rate differential, respectively. Blank cells indicate that data are not available.
Table 4: Estimates of regime-dependent correlations among exchange rate, domestic and global factors (sample 2). Notes: *,**,*** denote significance at the 10%, 5% and 1% levels, respectively. The standard errors of parameters are reported in parentheses (.), while the p-values are displayed in brackets [.]. The LR aims to test whether the Markov switching model outperforms the simple linear regression model. The LR test statistic is computed as follows: LR = 2 × [LL MS (Θ) -LL OLS (Θ)], where Θ indicates the parameters of the model.
The null hypothesis is that the MS model does not fit significantly better than the OLS model. The estimates DEBT , OD, PT , V, V IX, NE, EN and I correspond to correlations associated with external debt, trade openness, ERPT, velocity of money, VIX, non-energy price index, energy price index, and interest rate differential, respectively. Blank cells indicate that data are not available. Figure 3: Smooth probabilities of being in the low-volatility regime (sample 1) = λ 1 e us t + λ 2 e euro t + λ t and r t = r us t + (E t e us t+1e us t ) + ξ t . Moreover, from Eq. 13, we obtain e t = ∆RES t + P Xt X t + e a t+1 B t+1 r t B t + [1 -( α γ )ϕ t Y Nt ]P Mt Y T t Liberia Madagascar Mauritius Nigeria Sierra Leone South Africa = (1 + r t )e t B t -) , P t Y t = P T t Y T t + P Nt Y Nt .Changes in foreign reserves are given by the money supply equation:∆RES t = -θ 1 (e t -λ 1 e us t -(1 -λ 1 )e euroSolving Eqs. (26b), (27) and (29) through Eq. (31), we obtain:e 2 t [(θ 1 -B t ) + P Mt Y T t (1 -Y Nt )(1 + 1 1 + r t (1 + (1 -α) (1 + r t ) )] + e t [ -αw t Y Nt Y T t ( (1-α) (1+r t ) )(Y t -Y Nt ) -(1 + 1 1 + r t )P Xt X t + 1 1 + r t (1 + (1 -α) (1 + r t ) )P Nt Y Nt + m t+1 -w t Y Nt -θ 1 [λ 1 e us t + (1 -λ 1 )e euro t ] -∆L t -∆U t ] + (P Mt w t Y Nt Y T t (1-α) (1+r t ) (Y t -Y Nt ) 1 1 + r t {(P t Y t -(m t+1 ) -m t )} -m t = 1 (1 + r t ) (1 -α)P t Y t w t Y Nt -P Xt X t + [( (1 + r t (28) α γ )ϕ t Y Nt Y T t ] , (29) Solving the system of equations, our steady state is: e t = -[ -αwt Y Nt Y T t (1-α) (1+rt ) (Yt -Y Nt ) -(1+ 1 (1+rt ) )PXt Xt+ 1 (1+rt ) (1+ 1-α (1+rt ) ))PNtYNt+mt+1-wtYNt-θ1[λ1e us t +(1-λ1)e euro t 2[(θ1-Bt)+PMtYTt(1-YNt)(1+ab)] ]-∆Lt-∆Ut] , (30) e a t+1 B t+1 = (1 + r t )e t B t --P Xt X t + [( α γ )ϕ t Y Nt Y T t ] + (1 -( 1 1 + r t  P Nt Y Nt (1 + α γ )ϕ t Y Nt )e t P Mt (1 + (1 -α) -m t+1 + w t Y Nt (1 + r t )) (1 + r t ) ). (1 -α) (31) t ) + ∆L t + ∆U t . The country sample includes the following countries: Madagascar, Mauritius, Kenya, Uganda, Zambia, Tanzania, Sierra Leone, Guinea-Bissau, Central African Rep., Chad, Rep. of Congo, Gabon, Cabo Verde, São Tomé and Eritrea, Angola, Liberia, Rwanda, Ethiopia, Burundi, Nigeria, Guinea, Senegal, Ghana, Malawi, Mozambique, South Africa, Seychelles, Benin, Burkina Faso, Ivory Cost, Niger, Togo, Cameroon, Comoros, Lesotho, Namibia and Swaziland, Botswana and Gambia. Mali, Sudan, Zimbabwe and Equatorial Guinea are excluded due to missing data. We include the CNY given China's increasing share of total African trade and financial transactions. This causal ordering reflects their level of exogeneity, with the assumption that the USD is exogenous to contemporaneous shocks on the CNY. This allows us to avoid simultaneity bias since the CNY and African currencies can be simultaneously affected by the USD. Note that alternative causal orderings have been used, with ∆e EUR exogenous to ∆e US D . These estimation results, which are not presented here but are available upon request, do not lead to different conclusions. These innovations are orthogonalized using the Cholesky decomposition, while the weight computations are done through VAR-based variance decomposition of forecast errors. We use monthly data because higher frequency data are not available for all countries over the study period. Moreover, we use the Swiss franc (CHF) as an independent numeraire to measure exchange rate movements. 
Special Drawing Rights (SDR) and the Australian dollar have been also considered but no significant changes in the results have been observed. To avoid collinearity issues with the euro, we exclude from the sample the African Countries belonging to the following African monetary unions: the West African Economic and Monetary Union (WAEMU), the Economic and Monetary Community of Central Africa (EMCCA) and the Common Monetary Area (CMA). For, Namibia, Lesotho and Swaziland (CMA), which are pegged to the South African rand, the results are logically identical to those of South Africa. In what follows, we shall use the terms "non-tradable" and "non-traded" for locally produced goods and "tradable" or "non-traded" for imported goods. In the model, consumption includes both private and public consumption, which means that we do not distinguish the government from the households. An agent's debt is therefore equivalent to the domestic country's sovereign debt. We choose a logarithmic function for consumption for convenience. For instance, more than 36% of Liberia's exports are composed of iron ore and rubber, but these two commodities represent less than 10% of the index. Regime 1 (s t = 1): Constant 1 -0,117*** -0,115*** -0,146*** -0,174*** -0,116*** -0,201*** -0,063*** (0,040) (0,018) (0,023) (0,016) (0,019) (0,022) (0,016) DEBT 1 0,296** -0,131*** -0,075 -0,405*** 0,069** 0,071** 0,144*** (0,128) (0,044) (0,055) (0,062) (0,034) (0,029) (0,036) OD 1 -0,041** 0,1*** 0,05*** 0,096*** 0,06** -0,053** 0,052** (0,017) (0,038) (0,012) (0,026) (0,028) (0,021) (0,023) PT 1 2,25** 0,048 -0,385*** -1,355*** -0,547 0,599* (1,141) (0,336) (0,107) (0,259) (0,447) (0,389) V 1 0,188** 0,223* 0,396*** 0,049 -0,078 (0,093) (0,115) (0,084) (0,090) (0,049) V IX 1 -0,295* 0,036 -0,075 -0,14 -0,485** -0,521*** -0,087 (0,226) (0,058) (0,063) (0,096) (0,201) (0,142) (0,087) NE 1 0,424*** 0,066 0,924*** 0,099 0,304** 0,11 -0,001 (0,117) (0,263) (0,175) (0,192) (0,148) (0,127) (0,090) EN 1 -0,068* 0,072 -0,097 -0,487*** 0,121 -0,22 0,121 (0,173) (0,151) (0,115) (0,095) (0,175) (0,141) (0,093) I 1 0,513* 3,577*** -0,137 (0,759) (1,107) (0,610) Regime 2 (s t = 2): Constant 2 -0,079*** -0,084*** -0,075*** -0,107*** -0,07*** -0,051*** -0,038** (0,012) (0,007) (0,011) (0,007) 51* 0,797*** 0,053 0,002 -0,11 -0,119 (0,600) (0,227) (0,107) (0,117) (0,298) (0,333) V 2 0,004* -0,023 -0,034 0,017 0,13** (0,057) (0,020) (0,028) (0,036) (0,057) V IX 2 0,031*** 0,87*** 0,62** 0,412** 0,01 -0,012 0,256** (0,087) (0,294) (0,297) (0,163) (0,084) (0,065) (0,111) NE 2 -0,057 -0,004 0,291*** -0,045 0,011 -0,094 -0,084 (0,110) (0,059) (0,072) (0,058) (0,069) (0,073) (0,123) EN 2 0,046 0,076 0,008 0,056 0,169** 0,004 0,243* (0,099) (0,063) (0,069) (0,087) (0,082) (0,063) (0,134) I 2 -2,716*** 0,015 1,061* (1,045) (0,119) (0,599) Common: σ -4,155*** -4,279*** -4,138*** -4,084*** -4,152*** -4,232*** -4,238*** (0,094) (0,072) (0,069) (0,080) (0,063) (0,078) (0,087) Transition matrix θ 11 -13,712*** -19,484 0,3 -14,333** -1,54* -0,453 -0,284 (4,134) (603,393) (0,655) (6,111) (0,835) (0,707) (0,482) θ 21 -0,979* -1,76*** -2,738*** -1,907*** -1,656*** -1,641*** 0,792 (0,695) (0,329) (0,458) (0,343) (0,451) (0,390) (0,754) Regime 1 (s t = 1): Constant 1 -0,139*** -0,126*** -0,133*** -0,143*** -0,207*** -0,129*** (0,019) (0,016) (0,014) (0,037) (0,019) (0,017) DEBT 1 0,026*** 0,221*** -0,027 0,568*** 0,008 0,109* (0,007) (0,055) (0,021) (0,095) (0,020) (0,060) OD 1 -0,002 0,055*** -0,033 -0,408*** 0,02* 0,131 (0,001) (0,013) (0,027) (0,128) (0,011) 
(0,096) PT 1 2,78*** 1,505 -0,68 3,069*** -0,168*** -3,253*** (0,992) (1,276) (0,577) (1,160) (0,062) (1,172) V 1 -0,099 0,077 0,164 0,405*** 0,26*** (0,091) (0,049) (0,122) (0,124) (0,068) V IX 1 -0,302 -0,312*** -0,021 -0,153 -0,448* -0,788*** (0,202) (0,116) (0,071) (0,175) (0,232) (0,166) NE 1 0,237 0,157 -0,144 0,271 -0,337 0,478*** (0,191) (0,176) (0,138) (0,460) (0,240) (0,133) EN 1 -0,542** -0,212** -0,192** -0,527*** -0,471** -0,383*** (0,232) (0,101) (0,093) (0,183) (0,197) (0,124) I 1 1,134** 0,938 1,82*** 3,621*** (0,518) (0,759) (0,399) (1,931) Regime 2 (s t = 2): Constant 2 -0,09*** -0,082*** -0,095*** -0,1*** -0,081*** -0,071*** (0,009) (0,013) (0,009) (0,010) (0,008) (0,010) DEBT 2 -0,003 0,032*** 0,009 0,019 -0,02* 0,04 (0,002) (0,012) (0,016) (0,034) (0,011) (0,039) OD 2 0,001 0,006 0,015** -0,001 0,013 0,013 (0,001) (0,013) (0,006) (0,042) (0,009) (0,022) PT 2 -0,017 -1,246** -0,362 -0,498 -0,074** 0,465 (0,190) (0,610) (0,301) (0,381) (0,034) (0,667) V 2 0,066 0,039 0,007 0,004 -0,054 (0,058) (0,035) (0,022) (0,027) (0,075) V IX 2 0,009 -0,003 0,362** 0,043 0,053 0,07 (0,074) (0,086) (0,167) (0,101) (0,079) (0,052) NE 2 -0,018 0,048 -0,023 -0,013 0,048 -0,045 (0,067) (0,083) (0,075) (0,094) (0,068) (0,055) EN 2 -0,014 -0,034 0,01 -0,055 0,006 -0,028 (0,069) (0,095) (0,086) (0,095) (0,068) (0,055) I 2 0,601* -0,081 0,317 -0,151 (0,364) (0,217) (0,374) (0,614) σ -4,107*** -4,3*** -4,172*** -3,912*** -4,234*** -4,276*** (0,062) (0,125) (0,073) (0,103) (0,068) (0,081) Transition matrix θ 11 -0,288 -0,455 1,487*** -1,932 -0,905 -14,83*** (0,919) (0,703) (0,487) (2,459) (0,777) (1,241) θ 21 -2,275*** -1,543*** -2,266*** -1,969*** -1,661*** -2,026*** (0,555) (0,518) (0,463) (0,629) (0,424) (0,628) Determinants of SSA currency pegs In the last stage, we empirically explore whether the domestic factors that we highlighted in the theoretical model provide incentives for African countries to tighten their nominal pegs. Our aim is to explain the way the African countries peg their currencies. We focus on a sample of 13 African countries: Botswana, Burundi, Cabo Verde, Ethiopia, Ghana, Guinea, Kenya, Liberia, Madagascar, Mauritius, Nigeria, Sierra Leone and South Africa. Our choice is based on data availability and quality. 11 We consider the following Markov switching (MS) model augmented with a set of economic variables: The estimation procedure is detailed in Appendix (B). The estimates are carried out for the period January 2004-November 2016. The dependent variable is defined as the exchange rate volatility between the African currency and the main anchor currency. 12 Note that the latter corresponds to the currency with the higher estimated weight according to the estimation procedure described in Section 2.2 (in most cases, the US dollar). The factors that are likely to affect the degree of the nominal anchor are divided into domestic and global factors. The variables vel, deb, pt, do are the domestic factors and correspond to the velocity of money (computed as the ratio between base money and GDP), the total external debt-to-GDP ratio, the evolving degree of exchange rate pass-through (ERPT), and the evolving degree of trade openness (computed as the ratio between GDP and total trade), respectively. 13 We take the first difference of these five variables to ensure stationarity. Moreover, all exogenous variables are lagged by one period to avoid endogeneity issues. All data are extracted from the World Bank, IMF DOTS and IMF IFS databases. 
Moreover, we control for the effects of global factors: energy and non-energy price volatility (World Bank Commodity Price Data), the CBOE Volatility Index (VIX) and the interest rate differential (computed as the difference between the policy rates of the domestic country and the United States), which are denoted by en, nen, vix and i, respectively. We use the log of squared returns of energy and non-energy prices as a proxy for volatility. We take advantage of the flexible parametric structure of the MS model to allow exchange rate volatility to evolve across two distinct regimes (peggingversusfloating). 14 Under the condition that α 1 < α 2 , the first regime is defined by low flexibility against the the anchor currency; the second, by high flexibility. Moreover, the correlation coefficients are regime dependent, implying that the influence of economic factors differ across exchange rate regimes. The main advantage is that this allows us to identify the determinants of the decision to tighten a currency peg. In this view, we need to look 11 We exclude countries belonging to monetary unions, since the degree of exchange rate flexibility is zero. 12 As a proxy for volatility, we take the log volatility computed as the log of squared returns of the exchange rates. 13 We estimate a dynamic regression model for the ERPT to African consumer prices. We follow the literature by augmenting the bivariate relationship between the NEER and domestic inflation with trade-weighted foreign prices and domestic real GDP. We take the year-to-year differences of the variables expressed in logarithmic terms. Rolling estimation of the ERPT is performed for each African country over the period from March 1998 to January 2016 with a window size of 72 observations and a step size equal to one. Accordingly, for each African country, we obtain 155 coefficients that represent the dynamic ERPT. The NEER and trade-weighted foreign prices are computed with trade data from the IMF DOTS database. The results, which are not presented here, are available upon request. 14 We favor time series over panel data for two main reasons. First, we may think that the role of economic determinants can vary from one country to another. Detecting this heterogeneity is important given the prospects for deeper monetary and financial integration in Africa. Second, the MS model allows us to discriminate between two regimes according to different time series properties (here, the mean of the volatility) and to then observe specific schemes or patterns in a given regime. Appendix A: Solving for nominal and real exchange rates The export sector is an enclave in the sense that the domestic country is assumed to export minerals, natural resources, oil, cocoa, coffee, etc., directly to foreign countries. Equilibrium in the non-tradable and tradable markets implies that Using one of these equalities and the expressions in (6), one finds the following expression of the real exchange rate: We have: Appendix B This appendix briefly details the estimation procedure for the general Markov switching model. The maximum likelihood method is employed to provide estimates of the parameters, and the BFGS algorithm is used to perform non-linear optimization. The parameters of Eq. ( 25) depend upon a hidden variable s t ∈ {1, 2} representing a particular state of exchange rate volatility. Since the states are unobservable, the inference of s t takes the form of a probability given observations on σe AFC t . 
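Eq. (25) itself is not reproduced in the extracted text. As a reading aid, one plausible form, consistent with the regressors described in the main text and with the regime-dependent coefficients reported in the tables, is sketched below; the notation and error structure are assumptions rather than the authors' exact specification.

```latex
% Hedged sketch of the regime-switching volatility regression (cf. Eq. (25))
y_t = \alpha_{s_t} + X_{t-1}'\beta_{s_t} + \sigma\,\varepsilon_t ,
\qquad \varepsilon_t \sim \mathcal{N}(0,1), \qquad s_t \in \{1,2\},
```

where $y_t$ is the log exchange-rate volatility against the anchor currency, $X_{t-1}$ stacks the lagged domestic factors ($\Delta$debt, $\Delta$do, $\Delta$pt, $\Delta$vel) and global factors (vix, en, nen, i), the intercept $\alpha_{s_t}$ and the slopes $\beta_{s_t}$ are regime dependent, and $\sigma$ is common to both regimes, as in the reported estimates.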
The state-generating process is an ergodic two-regime Markov chain of order 1 with the following transition probabilities: with, The conditional likelihood function for the observed data is defined as: where $\xi_t = (y_t, y_{t-1}, \dots, y_1)$ and $\Omega_t = (X_t, X_{t-1}, \dots, X_1)$ denote the vectors containing observations through date $t$, and $\Theta$ is the vector of model parameters. Considering the normality assumption, the regime-dependent densities are defined as: where $\Phi$ is the standard logistic cumulative distribution function and $\phi$ is the standard normal probability density function. The model is estimated using a maximum likelihood estimator for mixtures of Gaussian distributions, which provides efficient and consistent estimates under the normality assumption (see, e.g., [START_REF] Kim | Estimation of Markov regime-switching regression models with endogenous switching[END_REF]). Applying Bayes' rule, the weighting probabilities are computed recursively:

$$P(s_t = i, s_{t-1} = j \mid \Omega_t, \xi_{t-1}; \Theta) = P(s_t = i, s_{t-1} = j \mid z_t; \Theta)\, P(s_{t-1} = j \mid \Omega_t, \xi_{t-1}; \Theta) = P_{ij}(z_t)\, P(s_{t-1} = j \mid \Omega_t, \xi_{t-1}; \Theta),$$

$$P(s_t = i \mid \Omega_{t+1}, \xi_t; \Theta) = \frac{\sum_j f(y_t \mid s_t = i, s_{t-1} = j, \Omega_t, \xi_{t-1}; \Theta)\, P(s_t = i, s_{t-1} = j \mid \Omega_t, \xi_{t-1}; \Theta)}{f(y_t \mid \Omega_t, \xi_{t-1}; \Theta)}. \qquad (46)$$
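The recursion in (46) translates directly into code. The sketch below implements one generic two-regime filtering step under the stated normality assumption; the constant transition matrix, the regime means and all numerical values are illustrative assumptions, not the authors' data or implementation.

```python
import numpy as np
from scipy.stats import norm

def filter_step(prev_prob, y, mu, sigma, P):
    """One step of the regime-probability recursion (cf. Eq. 46).

    prev_prob : P(s_{t-1}=j | info up to t-1), shape (2,)
    mu, sigma : regime-dependent means and common std of y_t
    P         : transition matrix, P[i, j] = P(s_t=i | s_{t-1}=j)
    """
    joint = P * prev_prob[np.newaxis, :]        # P(s_t=i, s_{t-1}=j | past)
    dens = norm.pdf(y, loc=mu, scale=sigma)     # f(y_t | s_t=i), shape (2,)
    weighted = dens[:, np.newaxis] * joint      # numerator terms of (46)
    lik = weighted.sum()                        # f(y_t | past)
    post = weighted.sum(axis=1) / lik           # P(s_t=i | info up to t)
    return post, np.log(lik)

# Toy usage: accumulate the log-likelihood over a short series
y_series = np.array([-4.2, -3.9, -4.5, -2.8])
mu, sigma = np.array([-4.4, -3.0]), 0.5         # illustrative regime means
P = np.array([[0.95, 0.10],
              [0.05, 0.90]])                    # columns sum to one
prob, loglik = np.array([0.5, 0.5]), 0.0
for y in y_series:
    prob, ll = filter_step(prob, y, mu, sigma, P)
    loglik += ll
print(prob, loglik)
```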
74,519
[ "945438", "945439" ]
[ "526949", "526949", "531465" ]
01757060
en
[ "shs" ]
2024/03/05 22:32:10
2000
https://insep.hal.science//hal-01757060/file/127-%20Local%20muscular%20fatigue%20and.pdf
Local muscular fatigue and attentional processes in a fencing task Marie-Françoise Devienne 1 , Michel Audiffren 2 , Hubert Ripoll 3 , Jean-François Stein 1 1 Mission Recherche, INSEP Paris, France 2 Faculté des Sciences du Sport, Université de Poitiers, France 3 Faculté des Sciences du Sport, Université de la Méditerranée, France Summary A review of the effects of brief exercise on mental processes by [START_REF] Tomporowski | Effects of exercise on cognitive processes: a review[END_REF] has shown that moderate muscular tension improves cognitive performance while low or high tension does not. Improvements in performance induced by exercise are commonly associated with an increase in arousal, while impairments are generally attributed to the effects of muscular or central fatigue. To test two hypotheses, that (1) submaximal muscular exercise would decrease premotor time and increase motor time in a subsequent choice-RT task and (2) that submaximal muscular exercise would increase the attentional and preparatory effects observed in premotor time, 9 men, aged 20 to 30 years, performed an isometric test at 50% of their maximum voluntary contraction between blocks of a 3-choice reaction-time fencing task. Analysis showed that (1) physical exercise did not improve postexercise premotor time, (2) muscular fatigue induced by isometric contractions did not increase motor time, and (3) there was no effect of exercise on attentional and preparatory processes involved in the postexercise choice-RT task. The invalidation of the hypotheses is mainly explained by the disparity in directional effects across subjects and by the use of an exercise that was not really fatiguing. Studies of the effects of brief exercise on mental processes have shown that moderate muscular exercise improves cognitive performance while low or high muscular exercise neither improves nor impairs it (for a review, see [START_REF] Tomporowski | Effects of exercise on cognitive processes: a review[END_REF]; Brisswalter & Legros, 1996). Improvement is commonly associated with increased arousal and activation, while impairment is generally attributed to the effects of muscular or central fatigue. The present aim was to study the influence of repeated isometric contractions on cognitive performance. To study the differentiated effects of local muscular fatigue and of increasing arousal and activation, we used the fractionated reaction-time technique [START_REF] Botwinick | Premotor and motor components of reaction time[END_REF]. According to the discrete stage model of [START_REF] Sternberg | The discover of processing stages: extensions of Donders' method[END_REF], we hypothesized that a factor which selectively influences the central stages of information processing would affect premotor time but not motor time. Thus, increased arousal induced by physical exercise would decrease premotor time without affecting motor time. In addition, [START_REF] Stull | Effects of variable fatigue levels on reaction-time components[END_REF] showed that motor time increased with fatigue. Thus, muscular fatigue should negatively influence motor time without affecting premotor time.
In addition, we hypothesize that an increase in arousal and activation induced by exercise enhances the orienting of visual attention and of motor preparation involved in a priming procedure [START_REF] Posner | Orienting of attention[END_REF][START_REF] Rosenbaum | A priming method for investigating the selection of motor responses[END_REF]. In this procedure, information (prime), related to the forthcoming location of a response signal and to the forthcoming movement, is presented to the subject before the response signal. This information can allow the subject to set, during the preparatory period, visual attention and motor preparation processes with the aim of reducing the reaction time. The preliminary information can be valid or invalid, that is to say, it can bring exact or erroneous information on location of the response signal and parameter values of the forthcoming movement. The difference between invalid and valid primes has been described in terms of the mental operations involving engagement, disengagement, switching, and reengagement of attention [START_REF] Posner | Orienting of attention[END_REF] or the programming, deprogramming, and reprogramming of movement [START_REF] Rosenbaum | A priming method for investigating the selection of motor responses[END_REF]. We expected that an increase in arousal and activation would increase availability of attentional and preparatory resources, and of differences between performances recorded for valid and invalid priming. Method Nine male volunteers (M age = 25 yr., SD= 3.8) participated. Subjects confronted a 3-choice reaction time task which consisted of reaching a target with a sword when a light-emitting diode was illuminated. The task was executed using ARVIMEX [START_REF] Nougier | Covert orienting of attention and motor preparation processes as a factor of success in fencing[END_REF]) which analyzes visuomotor reactions in fencing. The fatigue of the triceps brachii was measured by the median frequency obtained from electromyography [START_REF] De Luca | Myoelectrical manifestations of localized muscular fatigue in humans[END_REF]. The experiment was composed of a baseline and an exercise session. The baseline session was composed of five blocks of 90 trials with a rest period between blocks. The exercise session was composed of five blocks of 90 trials with an isometric contraction between blocks performed at 50% of the maximal voluntary contraction. The order of the two sessions was randomized. To obtain attentional effects, each block of 90 trials included 24 trials with a valid prime and six trials with an invalid prime for each of the three targets. Results and discussion We predicted that physical exercise would induce two different effects on the choice-reaction time task. On the one hand, the muscular fatigue should impair motor time without affecting premotor time. On the other hand, an increase in arousal and activation induced by physical exercise should decrease premotor time without affecting motor time. These two effects might decrease premotor time and increase motor time. The shift of the median frequency to a low frequency was not significant, although the exhaustion times in executing a 50% isometric contraction with the triceps brachii decreased significantly between the pre-(M=86 sec., SD =26) and posttest conditions (M = 61 sec., SD = 14). 
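The median frequency used here as a fatigue index is the frequency that splits the EMG power spectrum into two halves of equal power. A minimal computation is sketched below; the sampling rate, window length and surrogate signal are illustrative assumptions and this is not the authors' processing chain.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs):
    """Frequency below which half of the total EMG spectral power lies."""
    freqs, psd = welch(emg, fs=fs, nperseg=1024)
    cumulative = np.cumsum(psd)
    idx = np.searchsorted(cumulative, cumulative[-1] / 2.0)
    return freqs[idx]

# Illustrative surrogate signal: broadband noise sampled at 1 kHz for 10 s
fs = 1000
emg = np.random.default_rng(1).normal(size=10 * fs)
print(round(median_frequency(emg, fs), 1), "Hz")
```

A shift of this median frequency towards lower values over the course of the isometric contraction is what is conventionally read as local muscular fatigue.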
Contrary to our expectations and to the results of a previous study [START_REF] Stull | Effects of variable fatigue levels on reaction-time components[END_REF], no deterioration in motor time was observed in the last block of trials after the isometric contraction. As the sample of subjects was composed of well conditioned athletes, the 50% isometric fatigue task was probably not severe enough relative to fatigue. Studies of the effect of brief exercise on mental processes have shown that moderate muscular tension improves cognitive performance whereas high or low tension has no effect [START_REF] Tomporowski | Effects of exercise on cognitive processes: a review[END_REF]. It is important to emphasize that in all these studies mental tasks were carried out simultaneously with muscular contractions. The only study of postexercise effects showed an increase in visual threshold after an isometric contraction (Krus, Wapner, & Werner, 1958). Despite the choice of moderate physical exercise, we have observed no variation in performance on the reaction-time task. This lack of positive effect on premotor time could reflect greater variability in the directional effect across subjects or lack of effect for all subjects. An analysis of individual records showed that during the first trial block, premotor time increased for six subjects whereas the premotor time decreased for three subjects. Such an analysis clearly shows that the lack of a significant effect of exercise reflected between-subjects variability. Perhaps initially, arousal varied with their different personality traits, motivation, or time of day in which they participated. In accordance with [START_REF] Posner | Orienting of attention[END_REF] and [START_REF] Rosenbaum | A priming method for investigating the selection of motor responses[END_REF], voluntary orienting of attention and preparation induced by a valid prime before the signal response facilitated subjects' reaction time, premotor time, and movement time (F 1,8 = 83.2, 12.5, and 55.6, respectively). This result could be interpreted in two ways. From Posner's perspective (1980), the engagement of attention during stimulus onset asynchrony, given a valid prime, involves decreased reaction time. Given an invalid prime, disengagement, switching, and reengagement of attention increases reaction time. In contrast, [START_REF] Rosenbaum | A priming method for investigating the selection of motor responses[END_REF] would argue that the programming of movement during the foreperiod, given a valid prime, facilitates reaction time. Deprogramming and reprogramming of movement that takes place, given an invalid prime, impairs reaction time. The priming effect did not interact with block or session. Consequently, our second hypothesis that a submaximal muscular exercise would increase attentional and preparatory effects observed in premotor time was not validated. As we expected, motor time which reflects the muscular electromechanical transduction time was not affected by the nature of the prime.
9,767
[ "175856" ]
[ "441096", "460297", "5033", "441096" ]
01757186
en
[ "chim", "spi", "phys" ]
2024/03/05 22:32:10
2018
https://univ-lyon1.hal.science/hal-01757186/file/TiTiC_COMETTi_insituXRD_v20.pdf
Jérôme Andrieux email: jerome.andrieux@univ-lyon1.fr Bruno Gardiola Olivier Dezellus Synthesis of Ti matrix composites reinforced with TiC particles: in-situ synchrotron X-ray diffraction and modeling Keywords: Metal matrix composites (MMCs), Solid state reaction, Interface Interdiffusion, Synchrotron diffraction, Theory and modeling (kinetics, transport, diffusion) published or not. The documents may come Synthesis of Ti matrix composites reinforced with TiC particles: in situ synchrotron X-ray diffraction and modeling Introduction The high specific mechanical properties of Metal Matrix Composite (MMC) lead to much considerations over the past decades for structural lightening in aerospace applications or wear resistance in car breaks. Among the different materials used as reinforcement for Titanium based MMC, titanium carbide (TiC) was selected for its excellent chemical compatibility with the matrix [START_REF] Clyne | An introduction to Metal Matrix Composites[END_REF][START_REF] Vk | Recent advances in metal matrix composites[END_REF][START_REF] Miracle | Metal matrix composites -From science to technological significance[END_REF][START_REF] Huang | In situ TiC particles reinforced Ti6Al4V matrix composite with a network reinforcement architecture[END_REF]. The powder metallurgy route, that is widely used to produced Ti-based MMC [START_REF] Liu | Design of powder metallurgy titanium alloys and composites[END_REF], leads to the preparation of green compacts where Ti and stoichiometric TiC are not in thermodynamic equilibrium before the consolidation of the MMC. During the heating step, an interfacial reaction between Ti and TiC will occur as already shown and studied in the literature [START_REF] Quinn | Solid-State Reaction Between Titanium Carbide and Titanium Metal[END_REF][START_REF] Wanjara | Evidence for stable stoichiometric Ti2C at the interface in TiC particulate reinforced Ti alloy composites[END_REF]. More precisely, a preceding paper gave details of the general scenario of chemical interaction between the Ti matrix and the TiC particles during the consolidation step at high temperature [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF]. Four key steps were proposed starting with dissolution of the smallest particles to reach C saturation in the Ti matrix, followed by a change in the TiC stoichiometry in order to reach thermodynamic equilibrium between matrix and reinforcement following reaction (1): y TiC + (1-y) Ti -> TiC y [START_REF] Clyne | An introduction to Metal Matrix Composites[END_REF] According to the assessment of the C-Ti binary system by Dumitrescu et al., the phase equilibrium is characterized by a y value ranging from 0.55 to 0.595 at 1000°C and 600°C respectively [START_REF] Dumitrescu | A reassessment of Ti-C-N based on a critical review of available assessments of Ti-N and Ti-C[END_REF]. Reaction 1 corresponds to incorporation of Ti from the metallic matrix to the To cite this paper : J. Andrieux, B. Gardiola, O. Dezellus, Synthesis of Ti matrix composites reinforced with TiC particles: in-situ synchrotron X ray diffraction and modeling, Journal of Materials Science 2018, Accepted for publication, doi 10.1007/s10853-018-2258-8. 3 carbide phase. 
The reaction is therefore associated with two major phenomena: an increase in the total amount of reinforcement in the composite material and an increase in the mean particle radius by a factor of 1.19 [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF]. As a consequence, the average distance between the particles decreases during the course of the reaction and contact between individual particles occurs in the most reinforced domain of MMC. This initiates an aggregation phenomenon that is responsible for the high growth rate of particles observed for heat treatment times lower than 1h. Finally, Ostwald ripening is responsible for the change in particles for longer heat treatment times [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF]. The present work is focused on the two first steps, i.e. C saturation of the Ti matrix and the change in TiC stoichiometry resulting in the Ti-TiC composite material tending towards its thermodynamic equilibrium. The main objectives are an experimental determination of the kinetics of these two first steps, supported by a modeling of the diffusion phenomena occurring at the interface between a particle and the matrix. For this purpose, the reaction was investigated by in-situ synchrotron X-ray Diffraction (XRD) with a high-energy monochromatic Xray beam at the European Synchrotron Radiation Facility (ESRF -ID15B). Nowadays, it is quite common in metallurgy to perform simulations of phase formation and transformations by using software packages such as DICTRA [START_REF] Andersson | Thermo-Calc & DICTRA, computational tools for materials science[END_REF]. Such calculations require the use of two databases. The first is a thermodynamic database that can provide a good description of the system; from the point of view of both phase equilibria, and energetic description of the different phases of the system. The second is a mobility database that is needed to calculate the diffusion coefficient inside the main phases of the system. To the authors' knowledge, despite the great success of such calculations in describing metallurgical processes, the same approach has not been used for the synthesis of Metal Matrix Composites, despite the fact that the two types of materials can be considered as very similar. In the present study, the change in the Ti matrix and its TiC reinforcement over several heat treatment periods is simulated with the DICTRA package in order to compare the results with the experimental study by in-situ X-ray diffraction. As usual, if the simulations are able to reproduce experimental results obtained in wellcontrolled conditions, then they can also be used to extrapolate and predict microstructure changes in much more severe conditions. To cite this paper : J. Experimental procedures Sample preparation The starting materials used to prepare powder compacts were commercial TiC 0.96 and Ti powders (see Tables 1 to 3 for their compositions The small cylinders of powder compacts were inserted under argon in a graphite crucible previously filled with a layer of Ti powder that was used as a getter for gaseous impurities such as oxygen or nitrogen. The graphite crucible was heated by an induction coil under vacuum (~10 -4 mbar) using an induction furnace available at ESRF through the Sample Environment Support Service (SESS). 
The temperature was measured and controlled using a K-type thermocouple placed in the crucible and connected to a 2408i Eurotherm controller. In step mode, the heating rate was measured at 60K.s -1 with temperature stabilization of ±1K at 1273K (1000°C). The starting time of the experiment, t=0, corresponds to the beginning of the heating step. Synchrotron XRD Transmission X-Ray Diffraction (XRD) experiments were performed on beamline ID15B at ESRF (Grenoble) using a high-energy X-ray monochromatic beam (300x300 µm 2 ) with a wavelength of 0.14370 Å (~86.3 keV) and a ∆E/E~10 -4 . To follow small variations in the diffraction peak position and improve the angular resolution (∆(2θ)=0.003 °(2θ)), the sampledetector distance was increased to 4.358 meters, leading to a cell parameter resolution of ∆a=0.002 Å. In addition, the detector was off-centered by 200 mm compared to the direct beam to select a 2θ range from 2.8 to 5°(2θ) and to focus the acquisition on the main TiC diffraction peaks, i.e. the (111)-3.295°2θ and the (200)-3.806°2θ peaks. Continuous acquisitions were performed using a MAR345 detector with an exposure time of 90s. Thus, diffraction rings were radially integrated using Fit2D software [START_REF] Hammersley | Two-dimensional detector software: From real detector to idealised image or two-theta scan[END_REF]. An example of a diffraction pattern collected on the sample at room temperature with peak identification is given in Figure 1. To cite this paper : J. The structural study was performed by sequential Rietveld refinement using the FullProf Suite Software with the WinPlotr graphical interface [START_REF] Roisnel | WinPLOTR: A windows tool for powder diffraction pattern analysis[END_REF][START_REF] Rodríguez-Carvajal | Recent advances in magnetic structure determination by neutron powder diffraction[END_REF]. The profile function 7 in Full Prof, a Pseudo-Voigt function with axial divergence correction, was used. The instrument resolution file was defined using LaB 6 as standard. Following the reduced 2θ range of acquisition due to the experimental configuration and the aim of the present paper, the Rietveld refinement was only performed on TiC diffraction peaks. For each refinement process, the refinement weighting factors were found at maximum R p <15%, R wp <20%, R exp <10%, χ 2 <5. More details in the sequential Rietveld refinement procedure are given in Section 3.1. Experimental Results Rietveld refinement Titanium carbide has a halite structure (two face-centered cubic (fcc) sublattices with Ti and C, respectively). It displays deviations in stoichiometry over a wide range of homogeneity, from TiC 0.47 to TiC 0.99 , because of the presence of vacancies in the carbon sublattice [START_REF] Seifert | Thermodynamic optimization of the Ti-C system[END_REF]. Consequently, this phase is usually modeled in Calphad assessments by using the compound energy formalism with two sublattices having the same number of sites, one totally filled by Ti atoms, the other filled by a random mixture of C atoms and vacancies [START_REF] Dumitrescu | A reassessment of Ti-C-N based on a critical review of available assessments of Ti-N and Ti-C[END_REF][START_REF] Jonsson | Assessment of the Ti-C system[END_REF][START_REF] Frisk | A revised thermodynamic description of the Ti-C system[END_REF] leading to a chemical formula that can be expressed as TiC y in order to illustrate both the stoichiometry range and the location of vacancies on the C sublattice. 
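As a quick plausibility check (not taken from the paper), Bragg's law applied to the cubic TiC lattice reproduces the quoted (111) and (200) positions for the stated wavelength; the lattice parameter used below is an assumed value close to the room-temperature parameter reported later in the text.

```python
import numpy as np

wavelength = 0.14370   # Angstrom (~86.3 keV), from the text
a = 4.328              # Angstrom, assumed cubic TiC lattice parameter

def two_theta_deg(h, k, l, a=a, lam=wavelength):
    """Bragg angle 2-theta (degrees) for reflection (hkl) of a cubic lattice."""
    d = a / np.sqrt(h**2 + k**2 + l**2)          # interplanar spacing
    return 2.0 * np.degrees(np.arcsin(lam / (2.0 * d)))

for hkl in [(1, 1, 1), (2, 0, 0)]:
    print(hkl, round(two_theta_deg(*hkl), 3))
```

With these assumptions the computed angles come out at roughly 3.30° and 3.81° 2θ, matching the reported (111) and (200) positions and hence lying inside the selected 2.8–5° window.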
Experimentally, the number of vacancies has a conspicuous effect on the lattice parameter. In a review of the Group 4a carbides, Storms [START_REF] Storms | A critical review of refractories. Part I. Selected properties of Group 4a, 5a and 6a carbides[END_REF] reported that the lattice parameter expands for 0.99 > y > 0.85 and shrinks for 0.85 > y > 0.5, the total change in lattice parameter being less than 1%, from 4.330nm to 4.298nm respectively [START_REF] Bittner | Magnetische untersuchungen der carbide TiC, ZrC, HfC, VC, NbC und TaC[END_REF][START_REF] Norton | Properties of non-stoichiometric metallic carbides[END_REF][START_REF] Rudy | Ternary phase equilibria in transition metal-boroncarbon-silicon systems[END_REF][START_REF] Storms | The refractory carbides[END_REF][START_REF] Ramqvist | Variation of Lattice Parameter and Hardness with Carbon Content of Group 4 B Metal Carbides[END_REF][START_REF] Vicens | Contribution to study of system Titanium-Carbon-Oxygen[END_REF][START_REF] Kiparisov | Effect of titanium carbide composition on the properties of titanium carbide-steel materials[END_REF][START_REF] Frage | Iron-titanium-carbon system. II. Microstructure of titanium carbide (TiCx) of various stoichiometries infiltrated with iron-carbon alloy[END_REF][START_REF] Fernandes | Characterisation of solar-synthesised TiCx (x=0.50, 0.625, 0.75, 0.85, 0.90 and 1.0by X-ray diffraction, density and Vickers microhardness[END_REF]. The unusual presence of a maximum in the lattice parameter vs. composition curve, corresponding to anomalous volume behavior of TiC at small vacancy concentration, was recently explained by Hugosson as resulting from an effect of local relaxation of the atoms surrounding the vacancy sites [START_REF] Hugosson | Phase stabilities and structural relaxations in substoichiometric TiC1-x[END_REF]. Consequently, changes in y of the TiC y phase are associated with a slight modification in both the lattice parameter (about 1%) and the peak intensities because of variations in the number of vacancies in the C sublattice. Thanks to the improved angular resolution of the present in-situ XRD study (∆a=0.002 Å), these variations were captured and analyzed by Rietveld refinement to follow in situ the pathway to equilibrium during heat treatment of a Ti-TiC composite material. The TiC 0.96 particles used as starting material present a continuous particle size distribution ranging over three decades from 15-20 nm for the smallest to 7-10 µm for the biggest [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF], leading to different crystallite sizes, with constant starting stoichiometry (Table 2) and lattice parameters. Diffraction peaks of such a population of particles are characterized by a convolution of different contributions due to the different crystallite sizes at the same 2θ position: a sharp and intense peak due to the biggest particles (i.e. a mean diameter value of about 1 µm) and a broader peak due to the smallest particles (a few tenths of nm), leading to a broadening at the bottom of the TiC diffraction peak, as evidenced on Figure 2.a. This specific peak shape was fitted during Rietveld refinement by using two populations of TiC particles: the first, labeled "TiC 0.96 _SC", presents a small crystallite size and is related to the smallest particles whereas the second, labeled "TiC 0.96 _BC", is associated with the biggest particles. 
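The two-population description of a single TiC reflection can be illustrated by a simple least-squares decomposition into a sharp and a broad component. The sketch below uses plain Gaussians and synthetic data purely for brevity; the actual refinement relies on pseudo-Voigt profiles in FullProf, and all numerical values here are illustrative assumptions. The width contrast between the two fitted components is what mirrors the SC/BC distinction.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, cen, wid):
    return amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)

def two_populations(x, a1, c1, w1, a2, c2, w2):
    # sharp peak (big crystallites, BC) + broad peak (small crystallites, SC)
    return gauss(x, a1, c1, w1) + gauss(x, a2, c2, w2)

# Illustrative synthetic profile around the TiC (200) position (~3.81 deg 2-theta)
x = np.linspace(3.70, 3.92, 400)
y = two_populations(x, 1.0, 3.806, 0.004, 0.15, 3.806, 0.020)
y += np.random.default_rng(0).normal(0, 0.005, x.size)

p0 = [1.0, 3.806, 0.005, 0.1, 3.806, 0.02]       # initial guesses
popt, _ = curve_fit(two_populations, x, y, p0=p0)
print("sharp (BC) width:", popt[2], " broad (SC) width:", popt[5])
```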
An example of a Rietveld refinement result of (200) TiC peak is given in Rietveld refinement, the chemical composition of these two populations corresponding to the starting material (i.e. C and Ti site occupancies) was fixed and kept constant whereas the intensity scale factor, the peak shape profile parameters and the lattice parameters were refined independently. In terms of temperature, the diffraction peaks associated with TiC 0.96 _BC and TiC 0.96 _SC shifted to lower 2θ values due to thermal expansion. In addition, new diffraction peaks were observed for a higher value of 2θ and associated with the formation of the substoichiometric TiC y phase, labeled "TiC y ". During the sequential Rietveld refinement, the composition, the intensity scale factor, the peak shape profile parameters and the lattice parameters of the TiC y phase were refined. An example of Rietveld refinement of (200) TiC diffraction peaks acquired at 900°C is given in Figure 2.b. Fig. To cite this paper : J. Andrieux The time-dependent change in cell parameter is given on Figure 4. Note that t=0 on Figure 4 corresponds to the beginning of the heating step. First of all, this figure confirms the presence during the whole process of only two populations of TiC, characterized by two distinct values of lattice parameters. From a lattice parameter a RT =4.330Å at room temperature, the initial population corresponding to TiC 0.96 has a parameter of a 800°C =4.353 Å at 800°C and this cell parameter remains stable during the course of reaction 1. It may be deduced that the cell expansion for the TiC 0.96 due to the temperature increase is ∆a=0.023 Å. Concerning the TiC y population that forms at high temperature, Figure 4 shows an almost constant lattice parameter of a TiCy@800°C =4.336Å, the variation being less than 0.02% during the experiment. Assuming that the thermal expansion is the same for TiC 0.96 and TiC y , the lattice parameter of the TiC y population at RT can be estimated to be equal to a TiCy@RT =4.314Å. Following the correlation between the cell parameter and the stoichiometry of TiC y reported by Storms [START_REF] Storms | A critical review of refractories. Part I. Selected properties of Group 4a, 5a and 6a carbides[END_REF], the composition of the TiC y population is found to be TiC 0.57 . This experimental determination of the TiC y composition that forms at high temperature is in good agreement with the value To cite this paper : J. Andrieux expected when thermodynamic equilibrium is reached between the carbide and Ti phase at 1073K (800°C) [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF][START_REF] Dumitrescu | A reassessment of Ti-C-N based on a critical review of available assessments of Ti-N and Ti-C[END_REF]. The formation of a sub-stoichiometric TiC y phase is confirmed at the interface between TiC 0.96 particles and the Titanium matrix as illustrated on STEM micrograph of a selected zone of the MMC microstructure (Figure 5). It leads to a core-shell structure of the particles, with a stoichiometric TiC core and a TiC y shell, in accordance with bright field mode contrasts. To cite this paper : J. Andrieux 13 The total mass fraction of reinforced TiC in the composite, calculated using equation ( 1), based on the experimental results obtained during the isothermal experiment at 1073K (800°C), is presented in Figure 7 (filled circles). 
The calculated mass fraction of reinforced TiC in the composite is of major concern as it is a key parameter governing the expected mechanical properties of the MMC. As the in-situ experiment time was set to 3600s, final equilibrium was not reached but extrapolation based on total conversion of the initial stoichiometric population to the substoichiometric composition determined by XRD provides a means of estimating the equilibrium mass fraction of TiC that is found to be equal to 0.2303 (dashed-dotted horizontal line on Figure 7). This value is in good agreement with that expected after thermodynamic equilibrium is reached (0.2306). To cite this paper : J. Andrieux Modeling Simulation conditions In order to model the changes that take place in the Ti-TiC metal matrix composite during high temperature heat treatment, the DICTRA software package [START_REF] Andersson | Computer simulation of multicomponent diffusional transformations in steel[END_REF] was used. DICTRA stands for diffusion-controlled transformation and is based on a numerical solution of the multicomponent diffusion equations and local thermodynamic equilibrium at the phase interfaces. The program is suitable for treating phenomena such as growth [START_REF] Borgenstam | DICTRA, a tool for simulation of diffusional transformations in alloys[END_REF], dissolution [START_REF] Agren | Kinetics of Carbide Dissolution[END_REF] and coarsening of particles in a matrix phase [START_REF] Gustafson | Coarsening of TiC in austenitic stainless steel -experiments and simulations in comparison[END_REF]. The DICTRA software package is built with databases for thermodynamics and diffusion. The present simulations were performed using the MOB2 mobility database provided by the ThermoCalc company and the thermodynamic assessment of the C-Ti system published by Dumitrescu et al. [START_REF] Dumitrescu | A reassessment of Ti-C-N based on a critical review of available assessments of Ti-N and Ti-C[END_REF]. The simulations were performed considering a spherical geometry, representative of reinforcing particles embedded in a metallic matrix. The carbide particle was defined as being spherical with an initial radius R TiC at the center of a spherical cell of radius R Mat defined by the surrounding metallic matrix (see Figure 8). the composite powder after the ball-milling step, before any high temperature treatment, is estimated to be D TiC = 128 nm. However, the residual presence of a tail corresponding to micron-sized particles must be taken into account. Note that the volume of matter corresponding to a 1µm particle is equivalent to 1000 particles or 8000 particles of respectively 100 and 50nm in size. It is difficult to obtain the real size distribution by counting particles from SEM observations because the quantity of biggest particles will be systematically overestimated. Therefore, the size distribution will be used as a guide giving the main trends, but exact numerical values cannot be used for modeling. Fig. 9 Diameter distribution of TiC0.96 particles removed from the composite powder by selective acid etching after a ball-milling step and before any high temperature treatment. As a consequence, the initial size of TiC 0.96 particles in the model was estimated from Figure 6 and more precisely from the volume ratio, r, between untransformed and transformed populations of TiC, respectively TiC 0.96 and TiC 0.57 populations. 
During the heat treatment, each particle can be considered as a core-shell structure with the residual TiC 0.96 composition as a core of radius R and the modified TiC 0.57 stoichiometry as a shell of thickness e. Given that the transformation occurs by a solid state diffusion process through the external shell, its thickness e depends only on the time of interaction and can be estimated by equation ( 2) regardless of the initial particle size (D inter is the interdiffusion coefficient in TiC 0.57 ). To cite this paper : J. Andrieux, B. Gardiola, O. Dezellus, Synthesis of Ti matrix composites reinforced with TiC particles: in-situ synchrotron X ray diffraction and modeling, Journal of Materials Science 2018, Accepted for publication, doi 10.1007/s10853-018-2258-8. 16 Eq. 2 The ratio r of the core to shell volumes for a spherical particle, corresponding to the advancement of the reaction transforming the particle, is then given by equation 3: Eq. 3 The radius of the residual inner TiC 0.96 core can then be estimated for a given diffusion time t and advancement of reaction r following equation ( 4): Eq. 4 This equation can be used to estimate, for a given interaction time t, the radius of the residual TiC 0.96 core, leading to a given advancement of reaction r. According to Figure 6, the residual amount of the initial TiC 0.96 population is about 50% (r = 0.5) after t = 400s at 1073K (800°C). The interdiffusion coefficient reported by van Loo et al. at this temperature for TiC 0.57 is equal to .s -1 [START_REF] Van Loo | On the diffusion of carbon in titanium carbide[END_REF]. Therefore, numerical applications lead to a shell thickness e = 10nm and a core radius R = 23 nm. From these simple calculations, it may be concluded that, considering a single particle size distribution, the very fast transformation rate of the initial TiC 0.96 composition can only be obtained for particles with an initial radius smaller than the typical value of about 30 nm. As a consequence, an initial TiC particle radius of 30nm is used for modeling (diameter equal to 60nm). In the model, the thickness of the metallic shell in which the carbide particle is embedded is fixed in order to obtain a massive amount of reinforcement equal to 16.23 mass%, i.e. a value corresponding to the composite material experimentally studied. In the case of carbide particles with a diameter of 60 nm, this consideration leads to a Ti matrix shell thickness of 29 nm. To model the isothermal treatment at 1073K (800°C), the matrix and particle domains are considered as being respectively the HCP_A3 phase with an initial C mass fraction of 10 -6 (see table 2), and the carbide TiC 0.96 phase with a starting C composition of 19.74 mass%C (Table 3). To cite this paper : J. Andrieux (800°C) of about 1 s. This result should be compared with Figure 6 where the Rietveld refinement of diffraction peaks shows the disappearance of the population of smallest particles after about 100 s. Obviously, some discrepancies are to be expected for such short times as the experimental heating rate is finite (i.e. 60K.s -1 ), even if a step function toward 1073K (800°C) was used for the induction heater and for calculations. Therefore, the typical times of the experimental results are expected to be slightly higher than those obtained from the kinetics calculations. 
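The formulas labelled Eq. 2 to Eq. 4 above are missing from the extracted text. A plausible reconstruction, assuming parabolic (diffusion-limited) growth of the TiC0.57 shell and the core-to-shell volume ratio r defined in the text, is:

```latex
e \simeq \sqrt{D_{\mathrm{inter}}\, t} \qquad \text{(Eq. 2, shell thickness after time } t\text{)}

r = \frac{R^{3}}{(R+e)^{3}-R^{3}} \qquad \text{(Eq. 3, core-to-shell volume ratio)}

R = \frac{e}{\left(1+1/r\right)^{1/3}-1} \qquad \text{(Eq. 4, residual core radius for given } t \text{ and } r\text{)}
```

With r = 0.5 and e ≈ 10 nm these expressions give R ≈ 23 nm, which matches the values quoted above; whether the paper uses √(Dt) or includes a numerical prefactor in Eq. 2 cannot be verified from the extracted text.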
More precisely, a careful analysis of Figure 4 reveals that the thermal expansion of the initial population of stoichiometric TiC 0.96 particles is observed experimentally in the center of the sample a few hundreds of seconds after starting the induction heater. Considering these limitations in the comparison of the shortest characteristic times for experiments and calculations, the agreement is relatively good and illustrates that chemical exchanges and solid state diffusion of C in the Ti matrix are extremely fast. The solid line in Figure 7 presents the calculated change in the TiC mass fraction as a function of treatment time at 800°C in the case of TiC 0.96 particles with a unique size of 60 nm diameter. The experimental mass fraction of TiC determined from the in-situ experiments is also reported (filled circles). The model correctly reproduces the fast increase in the TiC mass fraction observed by XRD in-situ measurements at the beginning of heat treatment. However, with a unique size of 60 nm, the final equilibrium is almost reached after only 1000 s whereas, experimentally, equilibrium is still not obtained after 3600s. As previously noted, the presence after milling of residual micron-sized particles is responsible for the experimentally observed two-step change: fast conversion of the smallest particles and much slower conversion of the biggest particles. Given that diffusion processes control the change in the amount of TiC particles, this change is therefore limited by the presence of the few big particles with the initial TiC 0.96 composition remaining in the TiC population after milling. Therefore, in order to improve the modeling process and to illustrate the drastic influence of the residual presence of big particles, the same calculation was performed with a 1µm size particle and associated with the preceding results by To cite this paper : J. Andrieux 7 with a dashed line. As expected, the presence of one micron-sized particle, in association with 6000 particles with a diameter of 60nm, allows the capture of both the initial transformation rate and the asymptotic trend towards the equilibrium state. Note that just one micron-sized particle represents 38% of the total volume of particles in the sample. Finally, to illustrate the potential of thermokinetic modeling, a third class is added with an intermediate size of 160nm in diameter and the results are presented in Figure 7 with a dotted line (population in numbers are 6000, 200 and 2 for 60, 160 and 1000nm sized respectively). Discussion The present results highlight the fast kinetics of the trend towards thermodynamic equilibrium of a Ti-based matrix composite reinforced by TiC 0.96 particles. During isothermal treatment at 1073K (800°C), the dissolution of the smallest TiC 0.96 particles to reach saturation of the Ti matrix is obtained after a few tenths of a second, while the formation of the equilibrium composition of the carbide phase, TiC 0.57 , concomitantly increases sharply. Moreover, after only 6min of isothermal treatment at 1073K (800°C), 50% of the conversion of TiC particles from their initial TiC 0.96 composition towards the equilibrium value, TiC 0.57 , is achieved. It is to be expected that such high reaction rates will have major consequences on the MMC synthesis process. First of all, reaching the C saturation of the Ti matrix induces the dissolution of about 10% of the initial TiC 0.96 particles (Figure 10.b). 
This dissolution process affects the smallest particles, which means that most of the effort made during milling to decrease the size of the initial particles is cancelled out. Next, the change in initial TiC composition (TiC 0.96 ) towards the equilibrium value (TiC 0.57 ) is achieved by partial conversion of the Ti matrix into carbide phase. This process therefore leads to an increase in the total amount of carbide in the composite from 16mass% to about 19mass% after 6min at 1073K (800°C) (Figure 7) and to an increase in particle size [START_REF] Roger | Synthesis of Ti matrix composites reinforced with TiC particles: thermodynamic equilibrium and change in microstructure[END_REF]. These changes induce a trend towards the formation of particle clusters that might have detrimental effects on the mechanical properties because of local embrittlement. To cite this paper : J. 20 These two main processes, i.e. dissolution of the smallest particles and increase in total number of particles, occur during the industrial high temperature consolidation step of a Ti-based matrix composite reinforced by TiC particles. It is difficult to avoid them for two reasons: the driving force is thermodynamic and their kinetics is very fast. As a consequence, they have to be considered upstream in the process design: for example by reducing the initial quantity of TiC particles. The reliability of thermokinetic modeling to simulate the change in TiC reinforcement in Ti-based composites has been demonstrated for the case of isothermal treatment (Figure 7). It can thus now be used to predict the change in mass fraction of reinforcement during the heat treatment that is performed prior to the consolidation step. Figure 11 presents a typical industrial heat treatment that involves inserting a billet, containing the cold compacted composite powder, inside a furnace heated to 1173 (900°C) for 1h. According to temperature measurements inside the consolidation furnace used for the experiment, the duration of the heating step is considered to be 10 min while the isothermal holding time is 50 min. The calculated change in TiC mass fraction, considering three particle size classes (as in Figure 7), is reported in Figure 11, as well as the experimental value determined just after the consolidation step. Once again, this figure highlights the fast kinetics of the reaction occurring inside the MMC as most of the transformation of the TiC stoichiometry, associated with the increase in particle mass fraction, is already achieved after a holding time of 10 min. Fig. 11 Change in the calculated TiC mass fraction during heat treatment of a composite powder compact prior to the consolidation step (solid line). Temperature vs time is also reported (dashed line) as well as the experimental mass fraction determined after heat treatment and the consolidation step (filled circle). To cite this paper : J. Andrieux Conclusion The reaction tending towards thermodynamic equilibrium during the synthesis of Ti/TiC MMC prepared by the powder metallurgy route was studied by in-situ synchrotron X-ray diffraction. It was found that the carbide composition changes rapidly from its initial stoichiometric composition TiC 0.96 towards a sub-stoichiometric value (TiC 0.57 ) corresponding to the thermodynamic equilibrium with the C saturated Ti matrix. The reaction is almost complete after only a few minutes at 800°C for the smallest particles, whereas the rate-limiting step is the particle size. 
Modeling of the diffusion processes in MMCs isothermal heat treatment at 800°C was performed using three particles size classes. First dissolution of the smallest particles (about 10% of the initial TiC 0.96 particles) is expected to be achieved after only 1s at 800°C. Second the change in TiC composition lead to an increase in the total amount of carbide in the composite from 16 mass% to 19 mass%. The consequences on the industrial process of Ti/TiC MMC synthesis have also been considered. Typical industrial heat treatment of a MMC billet, 1h at 900°C, was modeled and the results showing an increase of the total amount of carbide in the composite from 16 mass% to 22 mass% are in rather good agreement with the experimental value (21 mass%). This illustrates the potential benefits of thermodynamic and kinetic modeling, combined to in-situ X-ray diffraction, in understanding and optimizing industrial processes for MMC synthesis. Fig. 1 1 Fig. 1 Example of diffraction pattern collected from the sample at room temperature with peak identification Figure 2 . 2 a. During the sequential To cite this paper : J. Andrieux, B. Gardiola, O. Dezellus, Synthesis of Ti matrix composites reinforced with TiC particles: in-situ synchrotron X ray diffraction and modeling, Journal of Materials Science 2018, Accepted for publication, doi 10.1007/s10853-018-2258-8. 9 3. 2 92 Isothermal heating at 1073K (800°C) for 2h3.2.1. Raw dataThe time-dependent change in the TiC(200) peak during isothermal heating at 1073K (800°C) is shown on Figure3. The colorscale is related to the peak intensity. Note that the darker area around 4500s was due to synchrotron beam refill. It can be clearly seen from Figure3that a new diffraction peak appears at higher 2θ values for the TiC(200) peak, corresponding to the formation of a TiC population with smaller cell parameters, i.e. a sub-stoichiometric composition TiC y . The peak position of the TiC y composition remains constant as a function of time at 1073K (800°C), meaning that the lattice parameter and therefore the stoichiometry of the phase, remain constant. Finally, the intensity of the TiC y diffraction peak increases whereas that of the starting TiC 0.96 population decreases. Note that the same results were observed with the TiC(111) peak. Fig. 3 3 Fig. 3 Time-dependent change in the TiC(200) during isothermal treatment at 1073K (800°C). The colorscale indicates the peak intensity (on-line version). Fig. 4 4 Fig. 4 Time-dependent change in cell parameter during isothermal treatment at 800°C. t=0 corresponds to the beginning of the heating step. Fig. 5 5 Fig. 5 Bright field STEM view of core-shell microstructure of TiC particles in Ti-TiC MMC, obtained after 1min at 900°C. Figure 6 6 Figure6presents the time-dependent variation in the amount of the three identified populations of TiC particles (see Section 3.1) during isothermal annealing at 1073K (800°C). As already observed in Figure3, the quantity of the TiC y population increases whereas that of the initial TiC 0.96 composition decreases. More interestingly, the population of the initial smallest crystallites (TiC 0.96 _SC) is consumed after only 3 min of heat treatment. In addition, 50% conversion of TiC 0.96 _BC into TiC y phase is reached after only 6min of heat treatment. Finally, after 1h30 at 1073K (800°C), a complete reaction is not observed as ~25 % of TiC 0.96 _BC remains in the sample. Fig. 
6 6 Fig.[START_REF] Quinn | Solid-State Reaction Between Titanium Carbide and Titanium Metal[END_REF] Changes in the quantity of different populations of TiC calculated using Rietveld refinement during isothermal treatment at 800°C. t=0 corresponds to the beginning of the heating step. Fig. 7 7 Fig. 7 Calculated time-dependent change in the mass fraction of TiC particles in a Ti matrix during isothermal heat treatment at 1073K (800°C). The solid line corresponds to one class of particles with a unique TiC diameter of 60nm. The dashed line shows the results obtained with two classes of TiC particles with different diameters: 6000 particles of 60nm, and one particle of 1µm. The dotted line corresponds to three classes of particles: 6000 particles of 60nm, 200 particles of 160nm and two particles of 1µm. The experimental values and the final asymptotic amount of TiC are also reported (filled circles and dasheddotted horizontal line respectively). Fig. 8 8 Fig. 8 Schematic model of the initial configuration of the system used in the DICTRA simulations 17 17 4. 2 2 Fig. 10 (a) Time-dependent change in the C content at the external Ti matrix interface during isothermal treatment at 1073K (800°C). (b) Time-dependent change in the calculated mass fraction of TiC particles in the Ti matrix at the very beginning of the isothermal treatment, during the dissolution step. Figures 10 . 10 Figures 10.a and b indicate that the dissolution of the smallest particles in order to allow carbon saturation of the Ti matrix is expected to be achieved after a typical isothermal time at 1073K Table 1 1 Chemical analysis of the Titanium powder used as starting material (SCA-CNRS Solaize) ). Small cylinders of powder compacts (3 mm Table 2 2 Chemical analysis of the TiC0.96 powder used as starting material (SCA-CNRS, Solaize) Element concentration (ppm in mass) Fe Cr V Ni Ca Cu, Cl, K, Zn, S, Er, As TiC 0.96 (starting powder) 1200 118 170 116 77 traces Table 3 X 3 2.2 STEM characterization Sample for STEM characterization was placed in a liquid tight graphite crucible and then immersed for 1 min in a liquid aluminium bath hold at 900°C. After 1min, the crucible was quenched in water. STEM characterization was performed on a JEOL-JEM 2100F microscope in Bright field mode, under an accelerating voltage of 200 kV and with a magnification of x40000. -ray fluorescence analysis of the TiC0.96 powder used as starting material (SCA-CNRS, Solaize) To cite this paper : J. Andrieux, B. Gardiola, O. Dezellus, Synthesis of Ti matrix composites reinforced with TiC particles: in-situ synchrotron X ray diffraction and modeling, Journal of Materials Science 2018, Accepted for publication, doi 10.1007/s10853-018-2258-8. Andrieux, B. Gardiola, O. Dezellus, Synthesis of Ti matrix composites reinforced with TiC particles: in-situ synchrotron X ray diffraction and modeling, Journal of Materials Science 2018, Accepted for publication, doi 10.1007/s10853-018-2258-8. , B. Gardiola, O. Dezellus, Synthesis of Ti matrix composites reinforced with TiC particles: in-situ synchrotron X ray diffraction and modeling, Journal of Materials Science 2018, Accepted for publication, doi 10.1007/s10853-018-2258-8. 3.3.2. Rietveld refinement results 10 , B. Gardiola, O. Dezellus, Synthesis of Ti matrix composites reinforced with TiC particles: in-situ synchrotron X ray diffraction and modeling, Journal of Materials Science 2018, Accepted for publication, doi 10.1007/s10853-018-2258-8. , B. Gardiola, O. 
a simple linear combination. The relative amount of the two classes of particles is defined in order to get a 50% volume fraction of the smallest class, a value that should allow the initial fast reaction rate to be captured. Calculation results are reported in Figure 7 with a dashed line.
Acknowledgements This work was undertaken in the framework of the COMETTi project funded by the French national research agency (ANR) [grant number ANR-09-MAPR-0021]. O.D. is very grateful to Dr. S. Fries and Pr. I. Steinbach from ICAMS institute at Bochum University (Germany) for allowing DICTRA calculations on their informatics cluster. J.A. thanks ID15B beamline staff for their help during beam time and ESRF for the provision of beamtime through in-house research during his post-doctoral position. The authors thank Gilles RENOU for the STEM observation performed at the "Consortium des Moyens Technologiques Communs" (CMTC, http://cmtc.grenoble-inp.fr). Composite powders were provided by the Mecachrome company (www.mecachrome.fr).
Conflicts of interest statement The authors declare that they have no conflict of interest.
41,063
[ "15394", "178005", "2798" ]
[ "752", "752", "752" ]
01757191
en
[ "sdv" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01757191/file/APS-DOAC%20Letter%20to%20the%20editor%20Rheumatology%20_%20REF.pdf
Quentin Scanvion Sandrine Morell-Dubois Cécile M Yelnik Johana Bene Sophie Gautier Marc Lambert email: marc.lambert@chru-lille.fr Comment on: Failure of rivaroxaban to prevent thrombosis in four patients with anti-phospholipid syndrome Letter to the Editor Sir, we read with great interest the letter published recently by Dufrost et al. [START_REF] Dufrost | Failure of rivaroxaban to prevent thrombosis in four patients with anti-phospholipid syndrome[END_REF] on the failure of rivaroxaban to prevent thrombosis in patients with APS, as well as the interview between Dr. Hannah Cohen and Prof. Bernard Lauwerys [START_REF]Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome. Dr. Hannah Cohen about the results of the RAPS trial[END_REF]. We think that this subject is important because we have observed growing use of direct oral anticoagulants (DOAC) in APS patients. The Rivaroxaban in Anti-Phospholipid Syndrome (RAPS) study is the only randomized controlled trial studying the use of rivaroxaban versus warfarin to prevent thrombosis recurrence [START_REF] Cohen | Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome, with or without systemic lupus erythematosus (RAPS): a randomised, controlled, open-label, phase 2/3, non-inferiority trial[END_REF]. The primary outcome was the percentage change in endogenous thrombin potential, selected because of the low frequency of clinical outcomes and the ability of this particular test to measure the biological activity of both drugs, according to Cohen's interview. The authors conclude that rivaroxaban could be an effective alternative to vitamin K antagonist (VKA) therapy in APS patients since no thrombotic event was observed. Although we appreciate the randomized controlled nature of the study, we think that the authors' conclusion is an overstatement because of the short duration of the study follow-up (210 days) and the nature of the study, which was not designed to demonstrate clinical non-inferiority. We also think that predicting a drug's efficacy in APS only on the basis of anticoagulation does not reflect the complexity of APS pathophysiology, which involves platelets, endothelial cells, monocytes, and immune and inflammatory mechanisms as well [START_REF] Giannakopoulos | The pathogenesis of the antiphospholipid syndrome[END_REF]. To illustrate, we report two new cases of APS patients; they both relapsed under DOAC, after having been stable under VKA therapy. The first patient is a 66 year-old SLE female diagnosed in 1967 (oral ulcers, polyarthritis, ANA, immune thrombocytopenic purpura). SLE associated-APS was diagnosed in 2007 after a deep vein thrombosis of the right arm in the context of LA and anti-beta2gp1 positivity. She remained relapse-free under VKA therapy during 8 years and then switched to rivaroxaban (20mg) in 2015. Because of refractory ITP (corticosteroids, rituximab and splenectomy) eltrombopag was started. Six months later (one year after the rivaroxaban switch), she experienced chest pain associated with elevated troponin, leading to the diagnosis of myocardial micro-thrombosis on MRI of the heart.
Concomitantly, multiple cerebral ischemia and cutaneous arterial micro-thrombosis (confirmed by biopsy) occurred within a week, which led to the diagnosis of catastrophic APS. She improved after corticosteroids, intravenous immunoglobulin, curative heparin therapy, plasmapheresis and rivaroxaban/eltrombopag withdrawal. The second patient is a 44 year-old female diagnosed with primary triple-positive APS in 2007 (stroke with Libman-Sacks endocarditis and obstetrical morbidity) and treated by warfarin. Despite subtherapeutic INR, no relapse was observed on yearly-repeated echocardiography and cerebral MRI during the 7 years of follow-up. Rivaroxaban (20mg) was initiated in 2014. The compliance was suboptimal. In May 2017 Libman-Sacks endocarditis relapsed with new ischemic strokes. Remission was obtained after switching to heparin, then warfarin. An aortic valve replacement by mechanical prosthesis was required. We notice that DOACs are increasingly prescribed to treat APS patients, whatever the expression of their disease. Among the DOACs, rivaroxaban is primarily prescribed for the treatment of APS according to the initial RAPS results [START_REF] Cohen | Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome, with or without systemic lupus erythematosus (RAPS): a randomised, controlled, open-label, phase 2/3, non-inferiority trial[END_REF]. However, evidence-based medicine inspires caution. The Task Force on APS Treatment Trends of the 15th International Congress on Antiphospholipid Antibodies stated that there was insufficient evidence to make recommendations at this time regarding the use of these DOACs in the APS, which can only be considered when there is known VKA allergy/intolerance, in patients with only venous APS. Thus VKA remains the mainstay of anticoagulation in thrombotic APS, and non-adherence is not a reason to switch [START_REF] Erkan | 14th International Congress on Antiphospholipid Antibodies Task Force Report on Antiphospholipid Syndrome Treatment Trends[END_REF]. In the APS-ACTION registry, an international multicenter prospective database of 428 thrombotic APS patients, 19 were under DOAC, of whom 6 relapsed during the 2-year follow-up (15.8% annual thrombosis risk), compared to a 1.5% risk in VKA-receiving patients [START_REF] Unlu | Antiphospholipid syndrome alliance for clinical, trials, and international networking (APS action) clinical database and repository analysis: direct oral anticoagulant use among antiphospholipid syndrome patients[END_REF], suggesting that DOAC and VKA do not have similar efficacy. Furthermore, Cohen's study included only patients at low risk (neither prior arterial thrombotic event nor previous relapse with INR between 2.0 and 3.0). Because it is difficult to evaluate whether "standard intensity" [START_REF]Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome. Dr. Hannah Cohen about the results of the RAPS trial[END_REF] anticoagulation by VKA will be safe before starting, DOAC should be considered only as a second-line treatment. Moreover, an inaugural venous event does not exclude later arterial thromboses. Our first patient had no prior arterial history before the catastrophic APS. Thus we endorse the conclusion of Dufrost et al. [START_REF] Dufrost | Failure of rivaroxaban to prevent thrombosis in four patients with anti-phospholipid syndrome[END_REF] and do not agree that rivaroxaban offers an effective, safe and convenient alternative to warfarin in APS patients, as Cohen suggested [START_REF]Rivaroxaban versus warfarin to treat patients with thrombotic antiphospholipid syndrome. Dr. Hannah Cohen about the results of the RAPS trial[END_REF]. The manifestations of APS span a heterogeneous clinical spectrum. Awaiting randomized controlled trials with clinical outcomes [START_REF] Pengo | Efficacy and safety of rivaroxaban vs warfarin in high-risk patients with antiphospholipid syndrome: Rationale and design of the Trial on Rivaroxaban in AntiPhospholipid Syndrome (TRAPS) trial[END_REF][START_REF] Woller | Apixaban for the Secondary Prevention of Thrombosis Among Patients With Antiphospholipid Syndrome: Study Rationale and Design (ASTRO-APS)[END_REF] and prolonged follow-up to clarify whether DOACs are efficient alternatives to VKAs, caution and stringent clinical review are especially necessary in known high-risk patients (Table 1).
Table 1: Clinical phenotypes of APS require caution with the use of direct oral anticoagulants.
Lack of evidence-based medicine: venous thrombotic APS without previous relapse, with INR between 2.0 and 3.0; association with pro-thrombotic treatment; poor compliance (no biological monitoring and short half-life).
No evidence-based medicine: arterial and small vessel thrombotic APS; Libman-Sacks endocarditis; triple-positive aPL profile; relapsing patient despite anticoagulant.
Conflicts of interest: Prof. LAMBERT receives fees from BAYER, BMS-PFIZER and DAICHY-SANKIO. QS, SMD, CMY, JB and SG declare no conflicts of interest. Funding source: No specific funding was received from any bodies in the public, commercial or not-for-profit sectors to carry out the work described in this manuscript.
8,701
[ "20261" ]
[ "425779", "374570", "425779", "374570", "425779", "374570", "523045", "425779", "374570", "425779", "374570", "425779", "374570" ]
01757250
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01757250/file/ASME_JMR_2018_Hao_Li_Nayak_Caro_HAL.pdf
Guangbo Hao email: g.hao@ucc.ie Haiyang Li Abhilash Nayak Stéphane Caro Stephane Caro Design of a Compliant Gripper With Multimode Jaws Introduction A recent and significant part of the paradigm shift brought forth by the industrial revolution is the miniaturization of electromechanical devices. Along with the reduction of size, miniaturization reduces the cost, energy and material consumption. On the other hand, the fabrication, manipulation and assembly of miniaturized components is difficult and challenging. Grippers are grasping tools for various objects, which have been extensively used in different fields such as material handling, manufacturing and medical devices [START_REF] Verotti | A Comprehensive Survey on Microgrippers Design: Mechanical Structure[END_REF]. Traditional grippers are usually composed of rigid-body kinematic joints, which have issues associated with friction, wear and clearance/backlash [START_REF] Nordin | Controlling mechanical systems with backlash-a survey[END_REF]. Those issues lead to poor resolution and repeatability of grasping motion, which makes the high-precision manipulation of miniaturized components challenging. In addition to being extremely difficult to grip sub-micrometre objects such as optical fibres and micro lens, traditional grippers are also very hard to grip brittle objects such as powder granular. This is because the minimal incremental motion (i.e. resolution) of the jaw in the traditional gripper is usually larger than the radius of the micro-object or already causes the breaking of the brittle object. Figure 1 shows a parallel-jaw gripper as an example. Although advanced control can be used to improve the gripper's resolution, its effort is trivial compared to the resulting high complexity and increased cost [START_REF] Nordin | Controlling mechanical systems with backlash-a survey[END_REF]. Figure 1: Comparison of traditional parallel-jaw gripper's resolution and size/deformation of objects Although mechanisms are often composed of rigid bodies connected by joints, compliant mechanisms that include flexible elements as kinematic joints can be utilised to transmit a load and/or motion. The advances in compliant mechanisms have provided a new direction to address the above rigid-body problems easily [START_REF] Howell | Compliant Mechanisms[END_REF]. The direct result of eliminating rigid-body kinematic joints removes friction, wear and backlash, enabling very high precision motions. In addition, it can be free of assembly when using compliant mechanisms so that miniaturization and monolithic fabrication are easily obtained. There are mainly two approaches to design compliant grippers. The first one is the optimisation method [START_REF] Zhu | Topology optimization of hinge-free compliant mechanisms with multiple outputs using level set method[END_REF] and the second is the kinematic substitution method [START_REF] Hao | Conceptual designs of multi-degree of freedom compliant parallel manipulators composed of wire-beam based compliant mechanisms[END_REF]. The former optimizes the materials' distribution to meet specified motion requirements, which includes topology optimization, geometry optimization and size optimization. However, the optimization result often leads to sensitive manufacturing, and therefore minor fabrication error can largely change the output motion [START_REF] Wang | Design of Multimaterial Compliant Mechanisms Using Level-Set Methods[END_REF]. 
Also, the kinematics of the resulting gripper by optimization is not intuitive to engineers. The latter design method is very interesting since it takes advantage of the large number of existing rigid-body mechanisms and their existing knowledge. It renders a very clear kinematic meaning, which is easily used for shortening the design process. Parallel or closed-loop rigid-body architectures gain an upper hand here as their intrinsic properties favour the characteristics of compliant mechanisms like compactness, symmetry to reduce parasitic motions, low stiffness along the desired degrees of freedom (DOF) and high stiffness in other directions. Moreover, compliant mechanisms usually work around a given (mostly singular) position for small range of motions (instantaneous motions). Therefore, parallel singular configurations existing in parallel manipulators may be advantageously exploited [START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF][START_REF] Maraje | Operation modes comparison of a reconfigurable 3-PRS parallel manipulator based on kinematic performance[END_REF][START_REF] Nurahmi | Dimensionally homogeneous extended jacobian and condition number[END_REF][START_REF] Rubbert | Using singularities of parallel manipulators for enhancing the rigid-body replacement design method of compliant mechanisms[END_REF][START_REF] Rubbert | Design of a compensation mechanism for an active cardiac stabilizer based on an assembly of planar compliant mechanisms[END_REF][START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF]. Parallel singularity can be an actuation singularity, constraint singularity or a compound singularity as explained in [START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF][START_REF] Maraje | Operation modes comparison of a reconfigurable 3-PRS parallel manipulator based on kinematic performance[END_REF][START_REF] Nurahmi | Dimensionally homogeneous extended jacobian and condition number[END_REF]. Rubbert et al used an actuation singularity to typesynthesize a compliant medical device [START_REF] Rubbert | Using singularities of parallel manipulators for enhancing the rigid-body replacement design method of compliant mechanisms[END_REF][START_REF] Rubbert | Design of a compensation mechanism for an active cardiac stabilizer based on an assembly of planar compliant mechanisms[END_REF]. Another interesting kind of parallel singularity for a parallel manipulator that does not depend on the choice of actuation is a constraint singularity [START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF]. Constraint singularities may divide the workspace of a parallel manipulator into different operation modes resulting in a reconfigurable mechanism. Algebraic geometry tools have proved to be efficient in performing global analysis of parallel manipulators and recognizing their operation modes leading to mobility-reconfiguration [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF][START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF][START_REF] He | Design and Analysis of a New 7R Single-Loop Mechanism with 4R, 6R and 7R Operation Modes[END_REF]. The resulting mobility-reconfiguration can enable different modes of grasping in grippers. 
Thus, the reconfigurable compliant gripper unveils an ability to grasp a plethora of shapes or adapt to specific requirements unlike other compliant grippers in literature that exhibit only one (mostly parallel mode) of these grasping modes [START_REF] Beroz | Compliant microgripper with parallel straight-line jaw trajectory for nanostructure manipulation[END_REF][START_REF] Hao | Design and static testing of a compact distributed-compliance gripper based on flexure motion[END_REF]. Though there are abundant reconfigurable rigid-body mechanisms in the literature, the study of reconfigurable compliant mechanisms is limited. Hao studied the mobility and structure reconfiguration of compliant mechanisms [START_REF] Hao | Mobility and Structure Re-configurability of Compliant Mechanisms[END_REF] while Hao and Li introduced a position-space based structure reconfiguration approach to the reconfiguration of compliant mechanisms and to minimize parasitic motions [START_REF] Hao | Position space-based compliant mechanism reconfiguration approach and its application in the reduction of parasitic motion[END_REF]. Note that in rigid-body mechanisms, using the underactuated/adaptive grasping method [START_REF] Birglen | Self-adaptive mechanical finger and method[END_REF][START_REF] Laliberté | Underactuation in robotic grasping hands[END_REF], a versatile gripper for adapting to different shapes can be achieved. In this paper, one of the simplest yet ubiquitous parallel mechanisms, a fourbar linkage is considered at a constraint singularity configuration to design a reconfigurable compliant fourbar mechanism and then to construct a reconfigurable compliant gripper. From our best understanding, this is the first piece of work that considers a constraint singularity to design a reconfigurable compliant mechanism with multiple operation modes, also called motion modes. This remaining of this paper is organised as follows. Section 2 describes the design of a multi-mode compliant four-bar mechanism and conducts the associated kinematic analysis. The multi-mode compliant gripper is proposed in Section 3 based on the work presented in Section 2, followed by the analytical kinetostatic modelling. A case study is discussed in Section 4, which shows the analysis of the gripper under different actuation schemes. Section 5 draws the conclusions. Design of a multi-mode compliant four-bar mechanism 2.1 Compliant four-bar mechanism at its singularity position A comprehensive singularity and operation mode analysis of a parallelogram mechanism is reported in [START_REF] Nayak | A Reconfigurable Compliant Four-Bar Mechanism with Multiple Operation Modes[END_REF], using algebraic geometry tools. As a specific case of the parallelogram mechanism, a four-bar linkage with equilateral links as shown in Fig. 2 is used in this paper, where the link length is l. Link AD is fixed, AB and CD are the cranks and BC is the coupler. Origin of the fixed frame, O0 coincides with the centre of link AD while that of the moving frame O1 with the centre of BC. The bar/link BC is designated as the output link with AB/CD as the input link. The location and orientation of the coupler with respect to the fixed frame can be denoted by (a, b, ϕ), where a and b are the Cartesian coordinates of point O1 attached to the coupler, and ϕ is the orientation of the latter about z0-axis, i.e., angle between x0 and x1 axes. The two constraint singularity positions of the equilateral four-bar linkage are identified in Fig. 
3 [START_REF] Nayak | A Reconfigurable Compliant Four-Bar Mechanism with Multiple Operation Modes[END_REF]. At a constraint singularity, the mechanism may switch from one operation mode to another. Therefore, in case of the four-bar linkage with equal link lengths, the DOF at a constraint singularity is equal to 2. In this configuration, points A, B, C and D are collinear and the corresponding motion type is a translational motion along the normal to the line ABCD passing through the four points A, B, C and D, combined with a rotation about an axis directed along z0 and passing through the line ABCD. Eventually, it is noteworthy that two actuators are required in order to control the end-effector in those constraint singularities in order to manage the operation mode changing. Based on the constraint singularity configuration of the four-bar rigid-body mechanism represented in Fig. 3, a compliant four-bar mechanism can be designed through kinematically replacing the rigid rotational joints with compliant rotational joints [START_REF] Howell | Compliant Mechanisms[END_REF][START_REF] Hao | Conceptual designs of multi-degree of freedom compliant parallel manipulators composed of wire-beam based compliant mechanisms[END_REF] in the singularity position. Note that, the singularity position (Fig. 3(a)) is the undeformed (home) configuration of the compliant mechanism. Each of the compliant rotational joints can be any type compliant rotational joint such as cross-spring rotational joint, notch rotational joint and cartwheel rotational joint [START_REF] Howell | Compliant Mechanisms[END_REF]. In this section, the cross-spring rotational joint as the rotational/revolute joint (RJ) is employed to synthesize the design of a reconfigurable compliant four-bar mechanism based on the identified singularity (Fig. 4). In Fig. 4, the RJ-0 and RJ-2 are traditional cross-spring rotational joints, while both the RJ-1 and the RJ-3 are double cross-spring joints. Each of the two joints, RJ-1 and RJ-3, consists of two traditional cross-spring rotational joints in series with a common rotational axis and a secondary stage (encircled in Fig. 4). This serial arrangement creates symmetry and allows for greater motion and less stress in the mechanism. It should be mentioned that using these joints can allow large-amplitude motions compared to notch joints, which thus serves for illustrating the present concept easily. Note that these joints are not as compact and simple (with manufacture and precise issues) as circular notch joints. In addition, the parasitic rotational shift of these joints will be minimized if the beams intersect at an appropriate position of their length [START_REF] Henein | The art of flexure mechanism design[END_REF]. We specify that the Bar-0 is fixed to the ground and the Bar-2 is the output motion stage, also named coupler. Bar-1, Bar-2 and Bar-3 correspond to links, CD, BC, and AB, respectively, in Figure 3. The link length can be expressed as l=LB+LR (1) where LB and LR are the lengths of each bar block and each compliant rotational joint, respectively, as indicated in Fig. 4. Like the rigid-body four-bar mechanism shown in Fig. 3(a), the output motion stage (Bar-2) of the compliant four-bar mechanism has multiple operation modes under two rotational actuations (controlled by two input angles α and β for Bar-1 and Bar-3, respectively), as shown in Fig. 4. However, the compliant four-bar mechanism has more operation modes than its rigid counterpart as discussed below. 
A moving coordinate system (o-xyz) is defined in Fig. 4, which is located on Bar-2 (link BC), which coincides with the fixed frame (O-XYZ) in the singularity position. Based on this assumption, Bar-2's operation modes of the compliant fourbar mechanism are listed below: a) Operation mode I: Rotation in the XY-plane about the Axis-L, when α ≠ 0 and β = 0. b) Operation mode II: Rotation in the XY-plane about the Axis-R when α = 0 and β≠0. In this mode, the cross-spring joints can tolerate this constrained configuration (close enough to the singularity) due to the induced small elastic extension of joints, but do not work as ideal revolute joints anymore. c) Operation mode III: General rotation in the XY-plane about other axes except the Axis-L and Axis-R, when α≠β. Similar to the constraint in operation mode II, the cross-spring joints in this mode are no longer working as ideal revolute joints. d) Operation mode IV: Pure translational motions in the XY-plane along the X-and Y-axes (mainly along the Y-axis), when α=β. It is noted that the rotational axes associated with α and β are both fixed axes (as indicated by solid lines in Fig. 4); while Axis-L and Axis-R (as indicated by dashed lines in Fig. 4) are both mobile axes. The rotational axis of α is the rotational axis of joint RJ-0, and the rotational axis of β is the rotational axis of joint RJ-3. Axis-L is the rotational axis of joint RJ-2, which moves as Bar-3 rotates. Axis-R is the rotational axis of joint RJ-1, which moves as Bar-1 rotates. As shown in Fig. 3, in the initial singular configuration, Axis-L overlaps with the axis of α; and Axis-R lies in the plane spanned by Axis-L and the Axis of β. It should be also pointed out that it is possible for operation modes II and III having large-range motion thanks to the particular use of rotational joints which may not be true for circular notch joints anymore. These operation modes are also highlighted in Fig. 5 with verification through the printed prototype. In order to simplify the analysis, let α and β be non-negative in Fig. 5. The primary motions of output motion stage (Bar-2) are the rotation in the XY plane and the translations along the X-and Y-axes; while the rotations in the XZ and YZ planes and translational motion along the Z-axis are the parasitic motions that are not the interest of this paper. Kinematic models The kinematics of the compliant four-bar mechanism is discussed as follows, under the assumption of small angles (i.e., close to the constraint singularity). According to the definition of the location and orientation of the Bar-2 (link BC) with respect to the fixed frame, we can have the primary displacement of the Bar-2 as: 1) Displacement of the centre of Bar-2 along the Y-axis: b-0=l(sin α+sin β)/2 ≈ l(α+β)/2 if small input angles (2) 2) Rotation of Bar-2 about the Z-axis: ϕ-0= α-β (3) Using the assumption of small angles, the displacement of the centre of Bar-2 along the X-axis is normally in the second order of magnitude of the rotational angles, which is trivial and can be neglected in this paper. Note that this trivial displacement is also affected by the centre drift of the compliant rotational joints [START_REF] Zhao | A novel compliant linear-motion mechanism based on parasitic motion compensation[END_REF]. Design of a multi-mode compliant gripper 3.1 Compliant gripper with multiple modes In this section, a multi-mode compliant gripper using the compliant four-bar mechanism presented in Fig. 4 as a gripper jaw mechanism is proposed (shown in Fig. 7). 
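The operation modes and the small-angle kinematics just summarised (Eqs. (2)-(3)) can be illustrated numerically before the jaw design is detailed further. The Python sketch below is illustrative only: the mode-classification tolerance, the example angles and the use of the 25 mm link length from the later case study are assumptions, not material from the paper.

```python
# Illustrative sketch of the coupler (Bar-2) behaviour near the constraint
# singularity, based on the small-angle relations b = l*(alpha+beta)/2 and
# phi = alpha - beta given above. Mode labels follow modes I-IV of the text.
import math

def coupler_pose(l, alpha, beta):
    """Small-angle pose of Bar-2: translation b along Y and rotation phi."""
    b = l * (math.sin(alpha) + math.sin(beta)) / 2.0   # exact form of Eq. (2)
    phi = alpha - beta                                  # Eq. (3)
    return b, phi

def operation_mode(alpha, beta, tol=1e-9):
    """Classify the operation mode from the two input angles (illustrative)."""
    if abs(beta) < tol and abs(alpha) >= tol:
        return "I: rotation about Axis-L"
    if abs(alpha) < tol and abs(beta) >= tol:
        return "II: rotation about Axis-R"
    if abs(alpha - beta) < tol:
        return "IV: pure translation in the XY-plane"
    return "III: rotation about another axis in the XY-plane"

if __name__ == "__main__":
    l = 0.025  # link length in metres (25 mm, as in the later case study)
    for alpha, beta in [(0.02, 0.0), (0.0, 0.02), (0.02, 0.01), (0.02, 0.02)]:
        b, phi = coupler_pose(l, alpha, beta)
        print(f"alpha={alpha:5.3f}, beta={beta:5.3f} -> mode {operation_mode(alpha, beta)}, "
              f"b={b*1e3:.3f} mm, phi={phi:.3f} rad")
```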
Instead of the cross-spring joints in the compliant four-bar mechanism, the commonly-used rectangular short beams as the rotational joints (with rotation axis approximately in the centre) are adopted for the final gripper designed in this paper, as shown in Fig. 7(a). The reason for using the rectangular short beams is mainly twofold. Firstly, compared with the cross-spring joints, the rectangular joints are compact, simple enough, and easy to fabricate. Secondly, the rectangular joints have a larger motion range than that of the circular notch joints (as used in Appendix A). In addition, the rectangular short beams allow the joints not to work as pure rotational joints, as discussed in Section 2.1. In order to make the whole mechanism more compact, the compliant gripper is a two-layer structure with two linear actuators to control the two rotational displacements (α and β) in each jaw. The top layer actuator is for determining β, and the bottom layer is for determining α. The design of the compliant gripper is further explained in Fig. 8, with all dominant geometrical parameters labelled except the identical out-of-plane thickness (u) of each layer, and the gap (g) between the two layers.
3.2 Kinetostatic modelling
Under the assumption of small rotations, the relationship between the linear actuation and the rotational actuation in the slider-crank mechanism (the left jaw is taken for study) can be modelled as below:
ab = -α·r, or α = -ab/r (8)
at = -β·r, or β = -at/r (9)
where at and ab represent the input displacements of the top and bottom actuators along the X-axis, respectively. A minus sign means that a positive linear actuation causes a negative rotational actuation (based on the coordinate system illustrated in Fig. 8). Here, r is the lever arm as shown in Fig. 8. The rotational displacement of RJ-4 in the added slider-crank mechanism can be approximately obtained as follows. The rotational displacement of RJ-5 in each layer can be ignored due to the specific configuration of the added slider-crank mechanism as shown in Fig. 8, where the crank parallel to the Y-axis is perpendicular to the coupler, so that the coupler remains approximately straight over the motion under the condition of small rotations. Combining Eqs. (2)-(9), the input-output kinematic equations of the compliant gripper can be obtained:
b = -l·(at + ab)/(2r) (12)
ϕ = (at - ab)/r (13)
As indicated in Eqs. (12) and (13), the amplification ratio is a function of the design parameter r denoted in Fig. 8. Using the above kinematic equations, the kinetostatic models of the compliant gripper can be derived from the principle of virtual work [START_REF] Howell | Compliant Mechanisms[END_REF], with at and ab being the generalised coordinates:
Ft·dat + Fb·dab = (∂U/∂at)·dat + (∂U/∂ab)·dab (14)
where Ft and Fb represent the actuation forces of the top and bottom linear actuators along the X-axis, corresponding to at and ab, respectively. U is the total elastic potential energy of the compliant gripper, which is calculated as below:
U = (1/2)·k0·θ0² + (1/2)·k1·θ1² + (1/2)·k2·θ2² + (1/2)·k3·θ3² + (1/2)·k4·(θ4t² + θ4b²) + (1/2)·kp·(at² + ab²) (15)
where θi is the rotation of the corresponding compliant joint (θ4t and θ4b being the rotations of RJ-4 in the top and bottom layers); substituting Eqs. (4)-(11) expresses U as a quadratic function of at and ab, with coefficients built from k0-k4, kp and 1/r². k0, k1, k2, k3 correspond to the rotational stiffnesses of RJ-0, RJ-1, RJ-2, RJ-3 in the compliant four-bar mechanism, respectively.
k4 is the rotational stiffness of RJ-4 in each layer. kp is the translational stiffness of the prismatic joint in each layer. Note that the reaction forces from gripping objects [START_REF] Hao | Design and static testing of a compact distributed-compliance gripper based on flexure motion[END_REF] can be included in Eq. (14), which, however, is not considered in this paper. Combining the results of Eqs. (14) and (15) with the substitution of Eqs. (4)-(11), we have
Ft = ∂U/∂at (16)
Fb = ∂U/∂ab (17)
both of which are linear in at and ab. Equations (16) and (17) determine the required forces for given input displacements, and can be rearranged in a matrix form:
[Ft; Fb] = [k11, k12; k21, k22]·[at; ab] (18)
where the stiffness coefficients k11, k12, k21 and k22 of the system, associated with the input forces and input displacements, are combinations of the joint stiffnesses k0-k4 and kp and of the lever arm r. Therefore, the input displacements can be represented with regard to the input forces as:
[at; ab] = [c11, c12; c21, c22]·[Ft; Fb] (19)
where the compliance matrix [c11, c12; c21, c22] is the inverse of the stiffness matrix of Eq. (18). We can further obtain the stiffness equations for all compliant joints used in this paper [START_REF] Hao | Designing a monolithic tip-tilt-piston flexure manipulator[END_REF], of the form
k = EI/lb = E·u·t³/(12·lb) (20)
for each short-beam rotational joint of beam length lb (with the corresponding beam-theory expression for the translational stiffness kp of the prismatic joint), which can be substituted into Eqs. (18) and (19) to solve the load-displacement equations. Here E is the Young's modulus of the material and I is the second moment of inertia of the cross-section area. With the help of Eqs. (12), (13) and (19), we can obtain the required output displacements for given input displacements/forces:
b = -l·(at + ab)/(2r), ϕ = (at - ab)/r, with (at, ab) taken from Eq. (19) when the forces are prescribed (21)
Note that in the above kinetostatic modelling, a linear assumption is made. However, in order to capture accurate load-dependent kinetostatic characteristics for a larger range of motion, nonlinear analytical models should be used [START_REF] Ma | Modeling large planar deflections of flexible beams in compliant mechanisms using chained beam-constraint-Model[END_REF].
4 Case study
In this section, a case study with assigned parameters as shown in Table 1 is presented. Finite element analysis (FEA) simulation was carried out to show the four operation modes of the compliant gripper, in comparison to the 3D printed prototype (Fig. 9). Here, Solidworks 2017 is used for FEA, with a meshing size of 1.0 mm and the other settings at their default values. Figure 10 illustrates the comprehensive kinetostatic analysis results of the proposed compliant gripper, including the comparison between the analytical modelling and FEA. It can be observed that linear relations are revealed in all figures, where the lines of either model are parallel to each other. The FEA results have the same trends as the analytical models, but deviate from them to a certain degree. The discrepancy between the two theoretical models may be due to the assumptions used in the analytical modelling, such as neglecting the centre drift of the rotational joints.
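Since Eqs. (8)-(21) above are reconstructed from a partly garbled source, the following sketch should be read as an illustration of the input-output kinematics of Eqs. (12)-(13) under that reconstruction, not as the authors' exact model. The link length l = 25 mm and lever arm r = 18 mm follow Table 1 of the case study; the actuator inputs are arbitrary example values.

```python
# Illustrative evaluation of the reconstructed input-output kinematics of one
# gripper jaw, Eqs. (12)-(13): b = -l*(at+ab)/(2r), phi = (at-ab)/r.
# Parameter values follow Table 1 of the case study (l = 25 mm, r = 18 mm);
# the actuator inputs below are arbitrary examples, not values from the paper.

L = 25.0   # link length l of the four-bar jaw mechanism, mm
R = 18.0   # lever arm r of the slider-crank input stage, mm

def jaw_motion(a_t, a_b, l=L, r=R):
    """Coupler translation b (mm) and rotation phi (rad) for inputs a_t, a_b (mm)."""
    b = -l * (a_t + a_b) / (2.0 * r)
    phi = (a_t - a_b) / r
    return b, phi

if __name__ == "__main__":
    for a_t, a_b in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.5)]:
        b, phi = jaw_motion(a_t, a_b)
        print(f"a_t={a_t:.2f} mm, a_b={a_b:.2f} mm -> b={b:+.3f} mm, phi={phi:+.4f} rad")
    # Moving only one actuator by 1 mm translates the jaw by l/(2r), about 0.69 mm,
    # while moving both actuators equally gives a pure translation of l/r, about
    # 1.39 mm per mm of input, consistent with the statement above that the
    # amplification ratio depends on the design parameter r (together with l).
```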
It should be pointed out that although stress analysis is not the interest of this paper, the maximal stress (29 MPa) was checked in FEA, which is much less than the yield strength of the material during all simulations for the case study. In Fig. 10(a), with the increase of ab, the difference between the two models (analytical and FEA) goes up if at=0, while the difference between the two models decreases if at=0.50 mm and 1.0 mm. This is because of different line slops of two models. Generally speaking, the larger at, the larger the deviation of two models, where the maximal difference in Fig. 10(a) is about 20%. In Fig. 10(b), the line slops of two models are almost same, meaning that the increase of increase of ab has no influence of the discrepancy of two models for any value of at. Also, the larger at, the larger the deviation of two theoretical models. A real prototype made of polycarbonate was fabricated using CNC milling machining, which is shown in Fig. 11(a). Each layer of the compliant gripper (Fig. 7) was made at first and then two layers were assembled together. The gripper prototype was tested in a customised-built testing rig as shown in Fig. 11(b). The singleaxis force loading for both the top layer actuation and bottom layer actuation was implemented. The micrometer loads displacement on the force sensor that is directly contacted with the top layer input or bottom layer input. The force sensor reads the force exerted on one input of the gripper, and the two displacement sensors indicate the displacements of two inputs. Testing results are illustrated in Fig. 12, which are compared with analytical models and FEA results. It is shown that the analytical displacement result is slightly larger than the experimental model, but is slightly lower than the FEA result. The difference among three models is also reasonable, given that FEA always take all parts as elastic bodies (i.e., less rigid system) and testing is imposed on a prototype with fabricated fillets (i.e., more rigid system). Conclusions In this paper, we present the first piece of work that employs a constraint singularity of a planar equilateral four-bar linkage to design a reconfigurable compliant gripper with multiple operation modes. The analytical kinetostatic mode of the multi-mode compliant gripper has been derived, which is verified in the case study. It shows that the FEA results comply with the analytical models with acceptable discrepancy. The proposed gripper is expected to be applied in extensive applications such as grasping a variety of shapes or adapting to specific requirements. The design introduced in this paper uses a two-layer configuration for desirable compactness, which adversely results in non-desired small out-of-plane rotations. However, the out-of-plane rotations can be reduced by optimising currently-used compliant joints or employing different compliant joints with higher out-of-plane stiffness. Note that the compliant gripper can be designed in a single layer for monolithic fabrication and elimination of out-of-plane motion, at the cost of a larger footprint. This can be done by using the remote rotation centre, as shown in Appendix A. A single layer gripper can be more easily designed at micro-scale on a silicon layer for MEMS devices. 
Despite the work mentioned above, there are other aspects to be considered in the future, including but not limited to the following: (i) An analytical nonlinear model for a more accurate large-range kinetostatic modelling of the compliant gripper; (ii) Design optimisation the compliant gripper based on a specific application; (iii) Output testing with comparison to the analytical model for a specific application; and (iv) Developing a control system to robotise the compliant gripper. (a) Resolution of a jaw: ∆Jaw (b) Diameter of micro object: Dmicro (c) Diameter of brittle object: Dbrittle, with a small breaking deformation ∆b ∆Jaw> Dmicro/2 ∆Jaw> ∆b/2 (but Dbrittle can be larger than 2∆Jaw) Figure 2 : 2 Figure 2: A planar equilateral four-bar linkage Figure 3 : 3 Figure 3: Constraint singular configuration of the planar equilateral four-bar linkage Figure 4 :Figure 5 : 45 Figure 4: A compliant four-bar mechanism at its constraint singular position (as fabricated) Figure 6 : 6 Figure 6: The generic kinematic configuration (close to constraint singularity) of the four-bar linkage Figure 7 : 7 Figure 7: The synthesized multi-mode compliant gripper Figure 8 : 8 Figure 8: Design details of the multi-mode compliant gripper (a) Top layer with two slider-crank mechanisms (indicated by dashed square) (b) Bottom layer with two slider-crank mechanisms (indicated by dashed square) bottom layer Figure 10 ( 10 c) reveals the same conclusion as that of Fig. 10(b).Figure 10(d) shows the similar finding to that of Fig. 10(a), except the larger at, the smaller the deviation of two models. It is clearly shown that in Figs. 10(b), 10(c) and 10(d) the general discrepancy of the two models is much lower than that in Fig. 10(a). Figure 9 :Figure 12 . 912 Figure 9: Gripper operation modes under input displacement control (a) Monolithic design of the reconfigurable compliant gripper (b) Actuating the linear actuator 1 only in bi-directions Table 1 : Geometrical parameters 1 Table 1 is presented to verify the analytical models in Section 3.2. The overall nominal dimension of the compliant gripper is 130 mm × 70 mm. The Young's of compliant gripper is given by E=2.4G Pa, which corresponds to the material of Polycarbonate with Yield Strength of σs > 60 MPa, and Poisson Ratio of v= 0.38. l t r h l1 l2 w1 w2 u g 25 mm 1 mm 18 mm 25 mm 5 mm 15 mm 24 mm 19 mm 10 mm 3 mm Acknowledgment The author would like to thank Mr. Tim Power and Mr. Mike O'Shea from University College Cork for the excellent prototype fabrication work as well as their kind assistance in the experimental testing. The work in this paper was funded by IRC Ulysses 2015/2016 grant. In order to reach the constraint singularity, i.e. allowing the overlapping of two revolute joints, the design uses isosceles trapezoidal flexure joints with remote center of rotation. However, this one layer mechanism has a quite large footprint, and requires more extra space for incorporating the two actuators. Moreover, due to the use of circular notch joints, operation modes II and III in this gripper may not produce large-range motion. The modelling and analysis of the present monolithic design in this appendix is left for future study. (e) Fabricated prototype
31,153
[ "1307880", "10659" ]
[ "121067", "121067", "111023", "473973", "481388", "473973" ]
01757257
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://laas.hal.science/hal-01757257/file/allsensors_2018_2_30_78013.pdf
Aymen Sendi email: aymen.sendi@laas.fr Grérory Besnard email: gbesnard@laas.fr Philippe Menini email: menini@laas.fr Chaabane Talhi email: talhi@laas.fr Frédéric Blanc email: blanc@laas.fr Bernard Franc email: bfranc@laas.fr Myrtil Kahn email: myrtil.kahn@lcc-toulouse.fr Katia Fajerwerg email: katia.fajerwerg@lcc-toulouse.fr Pierre Fau email: pierre.fau@lcc-toulouse.fr Sub-ppm Nitrogen Dioxide (NO 2 ) Sensor Based on Inkjet Printed CuO on Microhotplate with a Pulsed Temperature Modulation Keywords: NO 2, CuO nanoparticles, temperature modulation, gas sensor, selectivity. I Nitrogen dioxide (NO 2 ), a toxic oxidizing gas, is considered among the main pollutants found in atmosphere and indoor air as well. Since long-term or short-term exposure to this gas is deleterious for human health, its detection is an urgent need that requires the development of efficient and cost effective methods and techniques. In this context, copper oxide (CuO) is a good candidate that is sensitive and selective for NO 2 at sub-ppm concentrations. In this work, CuO nanoparticles have been deposited by inkjet printing technology on a micro hotplate that can be operated up to 500°C at low power consumption (55 mW). The optimum detection capacity is obtained thanks to a temperature modulation (two -consecutive temperature steps from 100°C to 500°C), where the sensing resistance is measured. Thanks to this operating mode, we report in this study a very simple method for data processing and exploitation in order to obtain a good selectivity for the nitrogen dioxide over few interferent gases. Only four parameters from the sensor response allow us to make an efficient discrimination between individual or mixed gases in humid atmosphere. INTRODUCTION Humans spend more than 90% of their time in closed environments, even though this indoor environment offers a wide variety of pollutants [START_REF] Lévesque | Indoor air quality[END_REF] [START_REF] Namiesnik | Pollutants, their sources, and concentration levels[END_REF]. Indoor air pollution is a real health threat, so measuring indoor air quality is important for protecting the health from chemical and gaseous contaminants. Nitrogen dioxide (NO 2 ) is a dangerous pulmonary irritant [START_REF] Lévesque | Indoor air quality[END_REF]. NO 2 is generated by multiple sources of combustion in indoor air, such as smoking and heaters, but it also comes from outside air (industrial sources, road traffic) [START_REF] Cadiergues | Indoor air quality[END_REF]. NO 2 may have adverse effects of shortness of breath, asthma attacks and bronchial obstructions [START_REF] Koistinen | The INDEX project: executive summary of a European Union project on indoor air pollutants[END_REF]. It is also classified as toxic by the "International Agency for Research on Cancer (IARC)" [START_REF] Loomis | The International Agency for Research on Cancer (IARC) evaluation of the carcinogenicity of outdoor air pollution: focus on China[END_REF], hence the necessity for sensor development for accurate NO 2 detection is an acute need. Among sensors techniques, the metal oxide gas (MOX) sensors are promising candidates because of their high performance in terms of sensitivity on top of their low production cost. The copper oxide (CuO) material is highly studied because of its high sensitivity and its ability to detect oxidant gaseous compounds, but also for other indoor air pollutants, such as acetaldehyde (C 2 H 4 O), formaldehyde (CH 2 O), NO 2 , CO, etc. 
However, CuO suffers from a major disadvantage which is the lack of selectivity with respect to targeted gas. In this study, our main objective is to develop an innovative and simple pulsed-temperature operating mode associated with an efficient data processing technique, which enables good selectivity toward NO 2 in gas mixtures. This technique is based on few parameters extracted from the dynamic response of sensor versus temperature changes in a gaseous environment. These parameters are: the normalized sensing resistance, the values of the slope at the origin, the intermediate slope and the final slope of the response of NO 2 against different reference gases, such as C 2 H 4 O, CH 2 O and moist air. The selectivity of NO 2 was examined in relation to air moisture with 30% humidity, C 2 H 4 O at 0.5-ppm, CH 2 O at a concentration of 0.5-ppm and the binary mixture of these gases with 0.3-ppm of each. In Section II of the paper, we describe the materials and methods used in our work. Section III presents our results and the discussion. We conclude this work in Section IV. II. MATERIALS AND METHODS The sensitive layer made of CuO nanoparticles is deposited by inkjet printing on a silicon microhotplate [START_REF] Dufour | Technological improvements of a metal oxide gas multi-sensor based on a micro-hotplate structure and inkjet deposition for an automotive air quality sensor application[END_REF]. The ink is prepared with 5% CuO weight, which was dispersed in ethylene glycol by an ultrasonic bath for about one hour. The dispersions obtained were allowed to settle for 24 hours. The final ink was collected and then used for printing using Altadrop equipment control, where the numbers of the deposited drops of ink were controlled [START_REF] Dufour | Technological improvements of a metal oxide gas multi-sensor based on a micro-hotplate structure and inkjet deposition for an automotive air quality sensor application[END_REF]. This technique is simple and allowed us to obtain reproducible layers thicknesses of a few micrometers depending on the number of deposited drops. In addition, this technique permits to have a precisely localized deposit without need of additional complex photolithographic steps [START_REF] Morozova | Thin TiO2 films prepared by inkjet printing of the reverse micelles sol-gel composition[END_REF]. The CuO layer is finally annealed in ambient air from room temperature to 500°C (rate 1°C/min) followed by a plateau at 500°C for 1 hour before cooling to room temperature (1°C/min). This initial temperature treatment is necessary because CuO requires operating temperatures between 100°C<T<500°C. The thermal pretreatment is necessary to generate ionized oxygen species in atomic or molecular form at the oxide surface and therefore to improve the reactivity between the reacting gas and the sensor surface [START_REF] Dufour | Technological improvements of a metal oxide gas multi-sensor based on a micro-hotplate structure and inkjet deposition for an automotive air quality sensor application[END_REF]. In this study, we have used a pulsed temperature profile, presented in a previously published work, which showed that optimized sensitivity can be achieved with the use of two different temperature stages at 100°C and 500°C respectively. This dual temperature protocol also reduces the total power consumption of the device (see Figure 1). 
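For illustration, a minimal sketch of the pulsed heater profile described above is given below. This is not code from the paper; the 30 s step duration, 500 ms sampling period and 16 cycles per injection are taken from the measurement description in the next section, and the ordering of the hot and cold steps within a cycle is assumed.

```python
# Illustrative generator (not from the paper) of the pulsed temperature profile:
# alternating 500 C / 100 C steps applied to the micro-hotplate heater.
# Step duration (30 s), number of cycles per gas injection (16) and sampling
# period (500 ms) are taken from the measurement description given below.

STEP_S = 30.0                  # duration of each temperature step, s
SAMPLE_S = 0.5                 # sampling period, s (60 samples per step)
SETPOINTS_C = (500.0, 100.0)   # one modulation cycle = one hot + one cold step

def heater_schedule(n_cycles=16):
    """Return a list of (time_s, setpoint_C) samples covering n_cycles cycles."""
    samples = []
    t = 0.0
    for _ in range(n_cycles):
        for temp in SETPOINTS_C:
            n = int(STEP_S / SAMPLE_S)     # 60 samples per 30 s step
            for _ in range(n):
                samples.append((t, temp))
                t += SAMPLE_S
    return samples

if __name__ == "__main__":
    sched = heater_schedule()
    total_min = sched[-1][0] / 60.0
    print(f"{len(sched)} samples, ~{total_min:.1f} min "
          f"(matches the 16-min gas injection window)")
```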
The CuO sensor was placed in a 250 ml test chamber and the test conditions were as follow: -A flow rate of 200 ml/min, controlled by a digital flowmeter. -A relative humidity (RH) level of 30% is obtained by bubbling synthetic air flow controlled by a mass flow controller. -The measuring chamber is at ambient temperature, controlled by a temperature sensor placed inside the vessel. -A bias current is applied to the sensitive layer, controlled by a Source Measure Unit (SMU). We started a test with single gas injections after a phase of two-hours stabilization in humid air, then with injections of binary mixtures for 16-min. During 32-min, moist air is injected between two successive gas injections. This time is enough to clean the chamber and stabilize the sensor to its baseline. The gas injections concentrations are summarized in Table 1. A schematic representation of these injections is presented in Figure 2. During all the experience (stabilization phase, gases injections and stage between two successive gases) , the sensor is powered by a square signal voltage applied on the heater in order to obtain two temperature steps as shown in Figure 1. To ensure a constant overall flow, we adapted the gas injection sequences duration in correlation with the heating signal period (see Figure 3). The resistance variation of CuO is measured under a fixed supply current of 100 nA in order to obtain voltage measurements in the range of few volts, far from compliance limit of the measuring device (20V). We also verified that this procedure (temperature cycling) doesn't affect the sensor reproducibility in terms of baseline or the sensor sensitivity. Under such test conditions, we achieved a continuous 6.5 hours testing period without observing any drift on the raw sensor response. The sampling period is 500 ms, which gives us 60 points on a 30-second response step, this acquisition rate being enough for accurate data processing. Finally, we analyzed the sensor responses at each steps, according to the different gas injections, using a simple method of data processing in order to have the better selectivity of NO 2 . III. RESULTS AND DISCUSSION A. Method of analysis During each gas injection, 16 periods of temperature modulation are applied. After verifying the reproducibility of sensor responses along these cycles, we only present here the responses of the last cycle, which is stabilized and reproducible from one cycle to another. As mentioned previously, we used new simple data treatment methods to obtain the better selectivity toward NO 2 with respect to several interferent gases. Among the multiple possible criteria, we chose representative variables that take into account the dynamic sensor behavior during a change of gaseous conditions and during a pulsed temperature; these criteria are obtained from the sensor resistance slopes during the gas response on each 30-second-steps. The data acquisition relies on the decomposition of the response into three distinct domains (see Figure 4): -Starting Slope: from the 1st point to the 10th point (in yellow), -Intermediate slope: from the 10th point to the 30th point (in red), -Final slope: from the 30th point to the 60th point (in black). 
The normalized resistance is measured from the last point on each step: this is the absolute difference between the resistance of the sensor under a reference gas (like moist air) and the resistance of the sensor under targeted gas(es), at the final cycle of each injection: R n = ((R gas -R air ) / R air ) * 100 (1) By treating these four parameters, a good selectivity of NO 2 can be obtained with respect to moist air, C 2 H 4 O and CH 2 O. B. Slope at the origin The slope at the origin is calculated on the first 10 points of each temperature step of the 8th cycle of each injection. Regarding the reference gas (moist air); we took the response of the 8th cycle of the last sequence under humid air before the injection of gases. The values of these slopes (in Ohms/ms) are shown in Figure 5; each bar represents the value of the slope at the origin of each gas at 500 and 100°C. Figure 5 clearly shows that the calculations of this parameter enable us to differentiate NO 2 from the other reference gases by measures on the plateau at 100°C. Regarding the other step at 500°C we note that the response is almost zero, because the transition from a cold to a hot state decreases the detection sensitivity and therefore reduces the sensors resistance variations under gas. We also note that with this criterion we evidence a significant difference between the value of the starting slope under NO 2 compare to others under other gases and mixtures without NO 2 . C. Intermediate slope The intermediate slope is calculated from the 10th point to the 30th point of each temperature step of the last cycle during each gas injection and compared with the value under air. The calculated values are shown in Figure 6. According to Figure 6, the value of the intermediate slope of individual NO 2 injection or of the gas mixtures in which there is NO 2 , is not prominent compared with the other reference gases even on the plateau at 100°C. This parameter is less effective than the previous one to detect NO 2 in gas mixtures. D. Final slope The final slope is the slope of the second half of the gas response; it is calculated between the 30th point and the 60th point. The response to the different temperature steps of the last cycle during NO 2 injection and the reference gases is presented in Figure 7. It is worth noting that the value of the final slopes of NO 2 or gaseous mixtures, which contain the NO 2 , are very different compared to the other reference gases on the two stages at 100°C only. This parameter allows us to select NO 2 with respect to other interfering gases. E. Normalized resistance As previously presented, the normalized resistance is calculated with respect to humid air from each last cycle level. The reference resistance used is the resistance of each stage of the last wet air sequence before the gas injection. The results obtained from these calculations are shown in Figure 8. Figure 8 shows a slight variation in the values of the normalized resistance between two similar and successive temperatures for the same gases. This slight variation can be explained by data dispersion which is +/-2%, due to the fact that the normalized resistance is calculated from the raw values during the gas injection and the raw values of resistances during the injections of humid air that may be slightly different between two similar and successive temperature stages. 
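To make the four descriptors used in this section concrete, the following sketch (illustrative only; the least-squares fit and the function names are not from the paper) extracts the slope at the origin, the intermediate slope, the final slope and the normalized resistance of Eq. (1) from one 60-point, 30-second response step. Applying it to the 8th-cycle steps of each injection would reproduce the quantities plotted in Figures 5 to 8.

```python
# Illustrative extraction (not from the paper) of the four descriptors used in
# this section, from one 60-point temperature step sampled every 500 ms.
# Slopes are least-squares fits over the point windows defined above (1st-10th,
# 10th-30th and 30th-60th points, 1-indexed and inclusive) and are expressed in
# Ohms/ms; the normalized resistance follows Eq. (1).

def _slope_ohm_per_ms(resistances, start, stop, dt_ms=500.0):
    """Least-squares slope of resistances[start:stop] versus time (Ohms/ms)."""
    ys = resistances[start:stop]
    xs = [i * dt_ms for i in range(len(ys))]
    n = len(ys)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def step_features(r_gas_step, r_air_last):
    """Four descriptors of one 60-point step under gas; r_air_last is the
    last-point resistance of the corresponding step under humid air."""
    return {
        "slope_origin":       _slope_ohm_per_ms(r_gas_step, 0, 10),
        "slope_intermediate": _slope_ohm_per_ms(r_gas_step, 9, 30),
        "slope_final":        _slope_ohm_per_ms(r_gas_step, 29, 60),
        "R_n_percent": (r_gas_step[-1] - r_air_last) / r_air_last * 100.0,  # Eq. (1)
    }
```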
The CuO sensor response to sub-ppm NO 2 levels when injected individually or in combination with another gas, is specific when measurements are taken on the low temperature plateau at 100°C. IV. CONCLUSION The selectivity and sensitivity of our CuO sensor has been studied by different operating modes and simple methods of analysis. Specific temperature modulation was applied to the metal oxide with the use of temperature steps at 500 and 100°C. The response of CuO sensitive layer toward gases representatives of indoor air pollution (C 2 H 4 O, CH 2 O, NO 2 , humid air) has been studied. These responses were analysed with several parameters, such as the study of the slope of resistance variation at the origin, the intermediate slope, the final slope and the normalized resistance measured at each temperature steps. The study of these different parameters shows that the CuO material is able to detect sub-ppm levels of NO 2 with a good selectivity compared to different interfering gases. To still improve the selectivity of gas sensor device to a larger variety of polluting gases, we plan to integrate these CuO sensors in a multichip system, which will allow us to use in parallel new metal oxide layers with specific temperature profiles and data analysis criteria. Figure 1 . 1 Figure 1. CuO temperature profile. TABLE 1 .Figure 2 . 12 Figure 2. Synoptic representative of a sequence of gas injections. Figure 3 . 3 Figure 3. Diagram of a gas sequence. Figure 4 . 4 Figure 4. Representative diagram of a response of a gas sensor during an injection cycle and showing the 3 domain slopes. Figure 5 . 5 Figure 5. Representation of the slopes at the origin under different gases of the 8th cycle in "Ohms/ms" according to the temperature of the sensors. Figure 6 . 6 Figure 6. Representation of the intermediate slopes under different gases at the 8th cycle in "Ohms/ms" according to the temperature of the sensors. Figure 7 . 7 Figure 7. Representation of the final slopes under different gases at the 8th cycle in "Ohms/ms" according to the temperature of the sensors. Figure 8 . 8 Figure 8. The normalized resistance of the different gas injections at the 8th cycle according to the temperature. Copyright (c) IARIA, 2018. ISBN: 978-1-61208-621-7 ALLSENSORS 2018 : The Third International Conference on Advances in Sensors, Actuators, Metering and Sensing ALLSENSORS 2018 : The Third International Conference on Advances in Sensors, Actuators, Metering and Sensing Copyright (c) IARIA, 2018. ISBN: 978-1-61208-621-7 ACKNOWLEDGMENT The authors express their gratitude to neOCampus, the university project of the Paul Sabatier University of Toulouse, for the financial support and the Chemical Coordination Laboratory of Toulouse for the preparation of the CuO nanoparticle powder. This work was also partly supported by the French RENATECH network.
16,506
[ "4212", "945578", "1115317", "930079", "753944", "753888" ]
[ "392533", "392533", "392533", "389612", "389612", "399701", "461", "461", "461" ]
01618268
en
[ "chim", "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01618268/file/M%20Hijazi%20Sensors%20and%20Actuators%20B%202018.pdf
Mohamad Hijazi Mathilde Rieu email: rieu@emse.fr Valérie Stambouli Guy Tournier Jean-Paul Viricelle Christophe Pijolat Ambient temperature selective ammonia gas sensor Keywords: SnO2, Functionalization, APTES, Ammonia gas, Room temperature detection

Introduction
Breath analysis is considered a noninvasive and safe method for the detection of diseases [START_REF] Broza | Combined Volatolomics for Monitoring of Human Body Chemistry[END_REF]. Gas sensors have been shown to be promising devices for selective gas detection related to disease diagnosis [START_REF] Adiguzel | Breath sensors for lung cancer diagnosis[END_REF][START_REF] Righettoni | Breath analysis by nanostructured metal oxides as chemo-resistive gas sensors[END_REF]. These sensors can be used to detect the gases emanated from the human body. For example, ammonia is a disease marker for liver problems. Indeed, ammonia in humans is converted to urea in the liver and then passes into the urine through the kidneys, while unconverted ammonia is excreted in breath at a level of 10 ppb for healthy subjects [START_REF] Capone | Solid state gas sensors: state of the art and future activities[END_REF]. The ammonia concentration increases in case of malfunctioning of the liver and kidneys, reaching more than 1 ppm in presence of renal failure [START_REF] Dubois | Breath Ammonia Testing for Diagnosis of Hepatic Encephalopathy[END_REF][START_REF] Güntner | Selective sensing of NH3 by Si-doped α-MoO3 for breath analysis[END_REF]. SnO2 sensors have been thoroughly investigated for a very long time [START_REF] Yamazoe | Effects of additives on semiconductor gas sensors[END_REF][START_REF] Lalauze | A new approach to selective detection of gas by an SnO2 solid-state sensor[END_REF][START_REF] Watson | The tin oxide gas sensor and its applications[END_REF], since they can detect many gases with high sensitivity and low synthesis cost [START_REF] Barsan | Metal oxide-based gas sensor research: How to?[END_REF][START_REF] Korotcenkov | Gas response control through structural and chemical modification of metal oxide films: state of the art and approaches[END_REF]. The interactions of SnO2 material with gases have been extensively studied [START_REF] Gong | Interaction between thin-film tin oxide gas sensor and five organic vapors[END_REF][START_REF] Wang | Metal Oxide Gas Sensors: Sensitivity and Influencing Factors[END_REF]. The chemical reactions of target gases with the SnO2 particle surface generate variations in its electrical resistance. SnO2 is an n-type semiconductor: the oxygen adsorbed on the particle surface takes electrons from the conduction band at elevated temperature, generating a depletion layer between the conduction band and the surface (space-charge region). Reducing gases such as CO are oxidized on the surface; they consume the adsorbed surface oxygen and give the electrons back to the conduction band. This decrease in the depletion layer decreases the resistance of the whole film [START_REF] Barsan | Conduction mechanism switch for SnO2 based sensors during operation in application relevant conditions; implications for modeling of sensing[END_REF].
However, like other metal oxides, SnO2 sensors have lack of selectivity and operate at high temperature (350-500 °C), except if some particular activation with light for example, is carried out [START_REF] Anothainart | Light enhanced NO2 gas sensing with tin oxide at room temperature: conductance and work function measurements[END_REF][START_REF] Comini | UV light activation of tin oxide thin films for NO2 sensing at low temperatures[END_REF]. Many techniques were applied to enhance the selectivity such as (i) the addition of gas filter [START_REF] Tournier | Selective filter for SnO2-based gas sensor: application to hydrogen trace detection[END_REF], or small amount of noble metals [START_REF] Tian | A low temperature gas sensor based on Pd-functionalized mesoporous SnO2 fibers for detecting trace formaldehyde[END_REF][START_REF] Cabot | Analysis of the noble metal catalytic additives introduced by impregnation of as obtained SnO2 sol-gel nanocrystals for gas sensors[END_REF][START_REF] Trung | Effective decoration of Pd nanoparticles on the surface of SnO2 nanowires for enhancement of CO gas-sensing performance[END_REF], (ii) the use of oxides mixture [START_REF] Zeng | Enhanced gas sensing properties by SnO2 nanosphere functionalized TiO2 nanobelts[END_REF][START_REF] Van Hieu | Enhanced performance of SnO2 nanowires ethanol sensor by functionalizing with La2O3[END_REF], or hybrid film of SnO2 and organic polymers [START_REF] Geng | Characterization and gas sensitivity study of polyaniline/SnO2 hybrid material prepared by hydrothermal route[END_REF][START_REF] Bai | Gas sensors based on conducting polymers[END_REF]. Since several years, there is a high request to develop analytical tools which are able to work at temperature lower than 200 °C in order to incorporate them in plastic devices and to reduce the power consumption [START_REF] Camara | Tubular gas preconcentrators based on inkjet printed micro-hotplates on foil[END_REF][START_REF] Rieu | Fully inkjet printed SnO2 gas sensor on plastic substrate[END_REF]. For such devices, researchers are now focused on the development of room temperature gas sensors. The first reported metal oxide gas sensor was based on palladium nanowires for the detection H2 [START_REF] Atashbar | Room temperature gas sensor based on metallic nanowires[END_REF]. Concerning SnO2 gas sensors, the more recent studies at room temperature are related to NO2 detection [START_REF] Anothainart | Light enhanced NO2 gas sensing with tin oxide at room temperature: conductance and work function measurements[END_REF][START_REF] Comini | UV light activation of tin oxide thin films for NO2 sensing at low temperatures[END_REF] or to formaldehyde sensors [START_REF] Tian | A low temperature gas sensor based on Pd-functionalized mesoporous SnO2 fibers for detecting trace formaldehyde[END_REF]. As exposed previously, for breath analysis, there is also a high demand for ammonia sensors working at room temperature. 
Among the current studies, it can be mentioned the sensors using tungsten disulfide (WS2) [START_REF] Li | WS2 nanoflakes based selective ammonia sensors at room temperature[END_REF] or carbon nanotubes (CNTs) [START_REF] Van Hieu | Highly sensitive thin film NH3 gas sensor operating at room temperature based on SnO2/MWCNTs composite[END_REF] as sensing material, and especially the ones based on reduced grapheme oxide (RGO) [START_REF] Sun | Facile preparation of polypyrrole-reduced graphene oxide hybrid for enhancing NH3 sensing at room temperature[END_REF][START_REF] Tran | Reduced graphene oxide as an over-coating layer on silver nanostructures for detecting NH3 gas at room temperature[END_REF][START_REF] Su | NH3 gas sensor based on Pd/SnO2/RGO ternary composite operated at room-temperature[END_REF], or on polyaniline (PANI) [START_REF] Kumar | Flexible room temperature ammonia sensor based on polyaniline[END_REF][START_REF] Bai | Polyaniline@SnO2 heterojunction loading on flexible PET thin film for detection of NH3 at room temperature[END_REF][START_REF] Abdulla | Highly sensitive, room temperature gas sensor based on polyaniline-multiwalled carbon nanotubes (PANI/MWCNTs) nanocomposite for trace-level ammonia detection[END_REF][START_REF] Khuspe | SnO2 nanoparticles-modified polyaniline films as highly selective, sensitive, reproducible and stable ammonia sensors[END_REF]. Molecular modification of metal oxide by organic film is another way to enhance the selectivity and to decrease the sensing temperature (room temperature gas sensors) [START_REF] Matsubara | Organically hybridized SnO2 gas sensors[END_REF][START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF][START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF]. The need of selective sensors with high sensitivity in presence of humidity at low gases concentration pushes the research to modify SnO2 sensing element in order to change its interaction with gases. The modifications with organic functional groups having different polarities could change the sensor response to specific gases (e.g. ammonia) depending on their polarity [START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF]. In the literature, a functionalization based on APTES (3-aminopropyltriethoxysilane) combined with hexanoyl chloride or methyl adipoyl chloride was investigated on silicon oxide field effect transistors [START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF]. These devices were tested under some aliphatic alcohol and alkanes molecules. The functionalized field effect transistors have shown responses to a wide variety of volatile organic compounds like alcohols, alkanes etc. The response of such sensors to gases is derived from the change in electrostatic field of the molecular layer which can generate charge carriers in the silicon field effect transistor interface. 
Interactions with volatile organic compounds on the molecular layer can take place in two types: the first type is the adsorption on the surface of the molecular layer and the second type is the diffusion between the molecular layers [START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF][START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF]. However, these field effect transistor sensors is still lacking of selectivity. The aim of the functionalization performed in this work was to passivate the surface states on the SnO2 sensors by molecular layer aiming to optimize their interactions with ammonia gas. One SnO2 sensor was coated with molecules having mostly nonpolar (functional) side groups (SnO2-APTES-alkyl) and two others were coated with molecules having mostly polar side groups (SnO2-APTES and SnO2-APTES-ester) in order to discover their interactions with ammonia which is a polar molecule. It was explored by testing the different functionalized sensors under ammonia which is the target gas. The changes in the response of the modified sensors were compared with pure SnO2. Another objective is to reduce the power consumption by decreasing the operating temperature, since the sensor will be used later for smart devices and portable applications. In the present work, we focus on the change in sensitivity and selectivity of SnO2 sensors after functionalization with amine (APTES), alkyl (CH3), and ester (COOCH3) end functional groups. The sensors are firstly functionalized by APTES followed by covalent attachment of alkyl or ester end functional groups molecule. Functionalization is characterized with FTIR analysis, and then detection performances of resulting sensors are investigated in regards of ammonia detection. Experimental Fabrication of SnO2 sensors Thick SnO2 films were deposited on alumina substrate by screen-printing technology. A semiautomatic Aurel C890 machine was used. The procedure for preparing the SnO2 ink and sensor fabrication parameters has been described elsewhere [START_REF] Tournier | Selective filter for SnO2-based gas sensor: application to hydrogen trace detection[END_REF]. SnO2 powder (Prolabo Company) was first mixed with a solvent and an organic binder. SnO2 ink was then screen-printed on an alphaalumina substrate (38×5×0.4 mm 3 ) provided with two gold electrodes deposited by reactive sputtering. The SnO2 material was finally annealed for 10 h at 700 °C in air. Film thickness was about 40 microns. SnO2 particles and agglomerates sizes were found to be between 10 nm and 500 nm. A photographic image of the two sensor faces is presented in Fig. 1. Molecular modifications of SnO2 sensor The functionalization was carried out by two-step process. In the first step, 3aminopropyltriethoxysilane (APTES, ACROS Organics) molecules were grafted on SnO2 (silanization). Silanization in liquid phase has been described elsewhere [START_REF] Le | A Label-Free Impedimetric DNA Sensor Based on a Nanoporous SnO2 Film: Fabrication and Detection Performance[END_REF]. SnO2 sensors were immersed in 50 mM APTES dissolved in 95% absolute ethanol and 5% of distilled water for 5 h under stirring at room temperature. Hydroxyl groups present on the surface of SnO2 allow the condensation of APTES. To remove the unbounded APTES molecules, the sensors were rinsed with absolute ethanol and dried under N2 flow (sensor SnO2-APTES). 
In a second step, SnO2-APTES sensors were immersed in a solution of 10 mM of hexanoyl chloride (98%, Fluka, alkyl: C6H11ClO) or methyl adipoyl chloride (96%, Alfa Aesar, ester: C7H11ClO3) and 5 µL of triethylamine (Fluka) in 5 mL of chloroform as solvent for 12 h under stirring. The terminal amine groups of APTES allow the coupling reaction with molecules bearing acyl chloride group. The sensors were then rinsed with chloroform and dried under N2 flow (sensors: SnO2-APTESalkyl and SnO2-APTES-ester). The functionalization of SnO2 sensors leads to covalent attachment of amine, ester, and alkyl end functional groups. A schematic illustration of the two steps functionalization is reported in Fig. 2. Characterization of molecularly modified SnO2 Modified molecular layers were characterized by Attenuated Total Reflectance-Fourier Transform Infrared spectroscopy (ATR-FTIR), the sample being placed face-down on the R = CH3, COOCH3 diamond crystal, and a force being applied by pressure tip. FTIR spectra were recorded in a wavelength range from 400 to 4000 cm -1 . The scanning resolution was 2 cm -1 . The entire ATR-FTIR spectrums were collected using a Golden Gate Diamond ATR accessory (Bruker Vertex 70). Sensing measurements of modified sensors In the test bench, the sensor was installed in 80 cm 3 glass chamber under constant gas flow of 15 l/h. The test bench was provided with gas mass flow controllers which allow controlling the concentrations of different gases at same time by PC computer. A cylinder of NH3 gas (300 ppm, diluted in nitrogen) was purchased from Air Product and Chemicals Company. In addition, the test bench was equipped with water bubbler to test the sensors under different relative humidity (RH) balanced with air at 20 °C. Conductance measurements were performed by electronic unit equipped with voltage divider circuit with 1 volt generator that permits measuring the conductance of the SnO2 film. Before injecting the target gas in the test chamber, the sensors were kept 5 h at 100 °C, and then stabilized under air flow for 5 h at 25 °C. Gas sensing properties were then measured to different concentrations of ammonia gas balanced with air with different RH. The normalized conductance was plotted to monitor the sensor response as well as to calculate response and recovery times. The normalized conductance is defined as G/G0, where G is the conductance at any time and G0 is the conductance at the beginning of the test (i.e. t=0). Response/recovery times are defined hereafter as the time to reach 90% of steady-state sensor response. In addition, the curve of relative response (GN-GA/GA, with GN: conductance after 20 min of ammonia injection, and GA: conductance under air with 5%RH) versus ammonia concentration was plotted for different sensors to compare their sensitivity. Sensitivity is defined as the slope of the calibration curve. Limit of detection (LOD) was also evaluated and it corresponds to a signal equal 3 times the standard deviation of the conductance baseline noise. Values above the LOD indicate the presence of target gas. The selectivity of SnO2-APTES-ester was tested versus acetone and ethanol gases. Results and Discussions Characterization of modified molecular layers Molecular characterization of the grafted films on SnO2 sensors was carried out by ATR-FTIR. Spectra of SnO2 (red curve), SnO2-APTES (green curve), SnO2-APTES-alkyl (blue curve), and SnO2-APTES-ester (black curve) are presented in Fig. 3 between 800 and 4000 cm -1 . 
All the sensors showed similar features in the range between 1950 and 2380 cm -1 which is not exploitable as it corresponds to CO2 gas contributions in ambient air. With respect to first step of functionalization which is the attachment of APTES on SnO2, most significant absorption bands were found between 800 and 1800 cm -1 (Fig. 3, green curve). The peak at 938 cm -1 is attributed to Sn-O-Si bond in stretching mode. The ethoxy groups of APTES hydrolyze and react with the hydroxyl groups presented on the surface of SnO2 grains. In addition, the hydrolyzed ethoxy groups to hydroxyl react with another hydroxyl of the neighbor grafted APTES molecule, which leads to SnO2 surface covered with siloxane network [START_REF] Kim | Formation, structure, and reactivity of aminoterminated organic films on silicon substrates[END_REF]. This feature is shown by the wide band between 978 cm -1 and 1178 cm -1 which is attributed to siloxane groups (Si-O-Si) from polymerized APTES. The -NH3 + and -NH2 vibrational signals of SnO2-APTES are found at 1496 cm -1 and 1570 cm -1 respectively. Regardless the pure SnO2, the CH2 stretch peaks for all modified sensor founded at 2935 cm -1 are related to the backbone of the attached molecules. Thus, from these peaks, the presence of characteristic features of APTES on the surface of SnO2 is confirmed. The second step of sensors modification was the attachment of a film on SnO2-APTES, ended with alkyl or ester groups. These modifications were carried out by reaction of amines with acyl chlorides leading to the production of one equivalent acid, which forms a salt with unreacted amine of APTES and diminish the yield. The addition of triethylamine base is to neutralize this acid and leads to push the reaction forward. From the FTIR spectra in Fig. 3 (blue and black curves), the alkyl and ester sensors exhibit two peaks at 1547 cm -1 and 1645 cm -1 which correspond to carbonyl stretch mode and N-H bending mode of amide respectively. An additional broad peak between ~ 3000 and ~ 3600 cm -1 corresponds to N-H stretch of amide. These peaks confirm the success of the reaction between amine group of APTES and acyl chloride groups. Asymmetrical C-H stretching mode of CH3 for SnO2-APTES-alkyl and for SnO2-APTES-ester appears at 2965 cm -1 . The stretching peak of carbon double bounded to oxygen of ester group of SnO2-APTES-ester is found at 1734 cm -1 (Fig. 3, black curve). These results show that SnO2 sensors are modified as expected with alkyl and ester end groups. As a conclusion, FTIR analysis confirms that functionalization is effectively achieved on SnO2 by showing the presence of attached APTES molecules on SnO2 after silanization, as well as the existence of ester and alkyl molecules on SnO2-APTES after reaction with acyl chloride products. Sensing measurements of different functionalized SnO2 sensors The first part of the test under gases was to show the characteristic of the response of different sensors to ammonia gas. SnO2, SnO2-APTES, SnO2-APTES-alkyl and SnO2-APTES-ester sensors were tested under 100 ppm of ammonia balanced with 5% RH air at 25 °C. The four sensors responses (normalized conductance) are reported in Fig. 4. Fig. 4. The sensor response of SnO2 (G0=1.4×10 -5 Ω -1 ), SnO2-APTES (G0=7.9×10 -6 Ω -1 ), SnO2-APTES-alkyl (G0=1.5×10 -5 Ω -1 ), and SnO2-APTES-ester (G0=9.5×10 -6 Ω -1 ) to 100 ppm ammonia gas balanced with humid air (5%RH) at 25 °C. 
SnO2 sensor
First, we have to mention that the conductance of SnO2 at room temperature is measurable, as reported by Ji Haeng Yu et al. [START_REF] Yu | Selective CO gas detection of CuO-and ZnO-doped SnO2 gas sensor[END_REF]. Indeed, stoichiometric SnO2 is known to be an insulator at room temperature. But the SnO2 sensitive film used here contains defects and the experiments were performed with 5%RH, which leads to the formation of hydroxyl groups adsorbed on the SnO2 surface. These two effects explain the measurable conductance baseline of pure SnO2 under air balanced with 5%RH. Furthermore, the conductance was still measurable even when the test was switched to dry air, as the hydroxyl groups stayed adsorbed at room temperature. The conductance of pure SnO2 decreases upon exposure to ammonia gas (Fig. 4). This type of response was reported before by Kamalpreet Khun Khun et al. [START_REF] Khun Khun | SnO2 thick films for room temperature gas sensing applications[END_REF] at temperatures between 25 and 200 °C. They assumed that ammonia reacts with the molecularly adsorbed oxygen ion (O2−, created as shown in Eq. 1), producing nitrogen monoxide gas (NO) according to Eq. 2. In the presence of oxygen and at low temperature, NO can easily be transformed into NO2, which is a very good oxidizing agent (Eq. 3). The reaction of NO2 with SnO2 at ambient temperature causes the decrease of the sensor conductance: NO2 adsorbs on SnO2 surface adsorption sites (s) and withdraws electrons from the conduction band (Eq. 4) [START_REF] Maeng | SnO2 Nanoslab as NO2 Sensor: Identification of the NO2 Sensing Mechanism on a SnO2 Surface[END_REF]. Thus, according to published results, the overall reaction of ammonia with SnO2 at room temperature could be written as in Eq. 5. Such a mechanism is consistent with a conductance decrease upon ammonia exposure. However, we have no experimental proof of such a mechanism.

Eq. (1) O2 + s + e− → s-O2−
Eq. (2) 4 NH3 + 5 s-O2− → 4 NO + 6 H2O + 5 s + 5 e−
Eq. (3) 2 NO + O2 ↔ 2 NO2 (equilibrium in air)
Eq. (4) NO2 + s + e− → s-NO2−
Eq. (5) 4 NH3 + 4 s + 7 O2 + 4 e− → 4 s-NO2− + 6 H2O
where s is an adsorption site.

SnO2-APTES sensor
When the SnO2-APTES sensor was exposed to ammonia gas, no response was observed. The formation of the APTES film on SnO2 prevents water molecules from adsorbing on the surface, because the active sites of SnO2 are occupied by the O-Si bonds of APTES and because of the hydrophobic nature of the APTES film. Therefore, in the following discussion, the conventional mechanism of interaction of SnO2 with gases cannot be taken into consideration, as no reactive sites are available. SnO2-APTES shows no change in conductance upon exposure to ammonia (Fig. 4). This implies that no significant interactions occur between the grafted APTES and ammonia gas. In terms of polarity and other chemical properties such as acidity, the amine and ammonia groups are almost the same, since amine is a derivative of ammonia. Hence, such a result was expected for SnO2-APTES. In addition, this behavior indicates that the SnO2 surface is well covered by APTES molecules, because the negative response observed on pure SnO2 is totally inhibited.

SnO2-APTES-alkyl and SnO2-APTES-ester sensors
SnO2-APTES-alkyl and SnO2-APTES-ester exhibit an increase in conductance upon exposure to 100 ppm of ammonia gas, as shown in Fig. 4. However, the response of SnO2-APTES-ester is larger than that of SnO2-APTES-alkyl. These responses could be related to the different polarities of the attached end groups. Indeed, Wang et al.
[START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF][START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF] reported that the response of some functionalized sensitive films is derived from the change in electrostatic field of the attached molecular layer. Ester is a good electron withdrawing group, while alkyl is mostly considered as nonpolar. Ammonia molecule is a good nucleophilic molecule (donating), thus the interaction is between electron withdrawing (ester) and electron donating groups. In this case, dipole-dipole interaction is taking place. However, in the case of SnO2-APTES-alkyl, the interaction is of induced dipole type because ammonia is a polar molecule and alkyl end group is mostly nonpolar. It is likely that the adsorption process occurs through interaction between the nitrogen of ammonia and the end functional group of the molecular layer (alkyl and ester). It was reported before by B. Wang et al. [START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF], that the dipole-dipole interaction is always stronger than induced dipole interaction. This can explain the difference in the response between the SnO2-APTES-alkyl and SnO2-APTESester sensors to ammonia gas. As mentioned previously, the interaction can also result from diffusion of the gas in the molecular layer. This type of interaction is favorable only for SnO2-APTES-alkyl. It is difficult for ammonia molecules to diffuse in the molecular layer of SnO2-APTES-ester, because of the steric hindrance induced by ester end groups. This phenomenon can explain the response of SnO2-APTES-alkyl to ammonia in addition to the induced dipole interaction. These two interactions (i.e. dipole-dipole and induced-dipole) result in a modification in the dipole moment of the whole film. The variation in the molecular layer's dipole moment affects the electron mobility in SnO2 film which modifies the conductance [START_REF] Hoft | Effect of dipole moment on current-voltage characteristics of single molecules[END_REF][START_REF] Young | Effect of group and net dipole moments on electron transport in molecularly doped polymers[END_REF]. The exposure to ammonia leads to increase in electron mobility (proportional to conductance). The same behavior was founded to a selection of polar and non-polar gases but on ester, and alkyl silicon oxide functionalized substrate [START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF]. According to the above discussion, the response of the molecular modified sensors does not obey the conventional mechanism of direct interaction with SnO2. The response comes powerfully from the interaction of ammonia molecules with end function group of the attached layer. The response of SnO2-APTES-ester is generated from dipole-dipole interaction, while the response of SnO2-APTES-alkyl is produced from induced dipole interaction which has less significant effect. Sensors sensitivity Regarding the different sensors sensitivity against ammonia concentrations, Fig. 5 shows the relative responses versus ammonia concentrations of SnO2-APTES-ester in comparison with pure SnO2, SnO2-APTES and SnO2-APTES-alkyl sensors. 
Sensitivity is defined as the slope of the relative response curve versus ammonia concentration, i.e., how large the change in the sensor signal is for a certain change in ammonia concentration. Pure SnO2 and SnO2-APTES sensors have almost no sensitivity to the different ammonia concentrations. In addition, SnO2-APTES-alkyl gives no significant response between 0.5 ppm and 30 ppm, but its sensitivity starts to increase from 30 ppm of ammonia. It can be noticed that SnO2-APTES-ester exhibits a constant sensitivity between 0.5 ppm and 10 ppm, around 0.023 ppm^-1. Above this concentration the sensor starts to become saturated and the sensitivity continuously decreases down to nearly zero at 100 ppm NH3. However, the sensitivity of SnO2-APTES-ester at concentrations higher than 30 ppm is still more significant than the sensitivity of SnO2-APTES-alkyl. The calculated LOD for ester-modified SnO2 was 80 ppb. To summarize this section, SnO2-APTES-ester sensors showed good sensitivity to ammonia gas in a concentration range compatible with breath analysis applications (sub-ppm). This sensor is studied in more detail in the following part.

Focus on SnO2-APTES-ester sensor
Effect of humidity on the response
As is well known, human breath contains a high amount of humidity (100 %RH at 37 °C). The water molecule presents a high polarity, hence it can affect the response to ammonia gas. In order to check this effect, SnO2-APTES-ester was tested under ammonia gas with different amounts of relative humidity ranging from 5 to 50% RH. Figure 6 shows the sensor response of SnO2-APTES-ester to 100 ppm in dry and humid air at 25 °C. Upon exposure to ammonia gas the sensor conductance increases in the four cases (dry air, 5%RH, 26%RH, and 50%RH) with fast response and recovery times. In dry air and 5%RH, the sensor shows almost the same response magnitude, 1.46 and 1.45 respectively, which decreases to 1.25 and 1.06 in 26%RH and 50%RH respectively. This means that a small quantity of relative humidity does not affect the sensor response, while at elevated amounts the response starts to be less significant. A potential explanation is that the attached ester film is saturated by water molecules or adsorbed hydroxyl groups at high RH. Hence, during exposure to ammonia, the response is limited by the adsorption competition between water and ammonia. For the subsequent tests, the humidity was kept at 5%RH. In these conditions, 5%RH at 25 °C, the response and recovery times (as defined in section 2.4) are 98 s and 130 s respectively, which is quite remarkable for a tin-oxide-based sensor working at room temperature.

Effect of operating temperature
In most cases, increasing the operating temperature of SnO2 sensors increases the sensitivity to gases. In contrast, in the case of SnO2-APTES-ester, the increase of temperature generates a decrease of the ammonia sensor response. As shown in Fig. 7, the response of SnO2-APTES-ester to ammonia decreases from 1.45 (25 °C) to 1.18 and 1.06 when the operating temperature is increased to 50 °C and 100 °C respectively. Therefore, the interaction of ammonia with the ester attached to SnO2 is more significant at low temperature (25 °C) than at higher ones (50 and 100 °C). So, an interesting conclusion is that SnO2-APTES-ester has to be operated close to room temperature (25 °C), without any power consumption, for the sensitive detection of ammonia.
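To make the post-processing described above easier to reproduce, the sketch below illustrates how the relative response (G_N - G_A)/G_A, the sensitivity (slope of the calibration curve) and a 3-sigma limit of detection could be estimated from conductance data; the arrays and numbers are hypothetical placeholders chosen to be roughly consistent with the values reported here, not measured data.

import numpy as np

def relative_response(g_nh3, g_air):
    """Relative response (G_N - G_A) / G_A as defined in the measurement section."""
    return (g_nh3 - g_air) / g_air

def calibration(concentrations_ppm, responses):
    """Sensitivity = slope of the relative response vs concentration curve (ppm^-1)."""
    slope, intercept = np.polyfit(concentrations_ppm, responses, 1)
    return slope, intercept

def limit_of_detection(baseline_g, g_air, sensitivity):
    """LOD: concentration whose signal equals 3x the standard deviation of the baseline noise."""
    noise = 3.0 * np.std(baseline_g / g_air)
    return noise / sensitivity

# hypothetical calibration points in the linear range (0.5-10 ppm)
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0])            # ppm
r = np.array([0.012, 0.024, 0.046, 0.115, 0.230])    # relative responses
s, _ = calibration(c, r)                             # close to 0.023 ppm^-1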
Effect of ammonia concentration on sensor response The influence of ammonia concentrations on sensor response was studied at the optimum conditions defined previously, 5%RH at 25 °C. Figure 8 shows the change in conductance of SnO2-APTES-ester upon exposure to different concentrations of ammonia gas (0.2-100 ppm). The curve at low concentrations (Fig. 8a) was used to calculate the limit of detection (LOD) of the sensor to ammonia gas which is around 80 ppb. Such a value confirms the potentiality of ammonia detection for breath analysis application. Selectivity Human breath contains a wide variety of volatile organic compounds which are polar or nonpolar, oxidant or reductant. It's well known that SnO2 sensor which usually operates at temperature between 350-500 °C can give a response to most of these types of gases, unfortunately without distinction (selectivity) or even with compensating effect for oxidant/reducing gases [START_REF] Pijolat | Application of membranes and filtering films for gas sensors improvements[END_REF]. Field effect transistors functionalized with ester end group have shown responses to a wide variety of volatile organic compounds like alcohols and alkanes [START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF]. This implies that these sensors also have a lack of selectivity. To check the selectivity to ammonia of the developed SnO2-APTES-ester, such sensors were tested with respect to ethanol and acetone gases. Figure 9 shows that the SnO2-APTES-ester sensors have almost no change in conductance upon exposure to 50 ppm of acetone and ethanol at 25 °C. This means that ester modified sensor is relatively selective to ammonia, at least in regards of the two tested gases. This particular selectivity derives from the way of interactions of the grafted layer on SnO2 with ammonia gas. As mentioned previously, interactions occur between the ester end group which is strongly electron withdrawing, and the ammonia molecule which is electron donnating. Hence, electrons are withdrawed by the attached ester end group from the ammonia molecules adsorbed on it during exposure. These interactions lead to the significant response of SnO2-APTES-ester sensor. Other molecules like ethanol and acetone do not have this high affinity to donate the electrons to SnO2-APTES-ester, explaining minor changes in conductance upon exposure to these gases. Conclusion Molecularly modified SnO2 thick films were produced by screen printing and wet chemical processes. The functionalizations were carried out first by grafting of 3aminopropyltriethoxysilane followed by reaction with hexanoyl chloride or methyl adiapoyl chloride. Then, tests under gases were performed. Pure SnO2 sensor and APTES modified SnO2 didn't show any significant sensitivity to NH3 (0.5-100 ppm) at 5%RH, 25 °C, while the sensitivity to NH3 gas starts to increase from 30 ppm for alkyl modified sensor. On the contrary, ester modified sensor exhibit fast response and recovery time to NH3 gas with a limit of detection estimated to 80 ppb at 5%RH, 25 °C. In addition, this sensor shows constant sensitivity between 0.5 and 10 ppm of NH3 (0.023 ppm -1 ). Moreover, ester modified sensor is selective to NH3 gas with respect to reducing gases like ethanol and acetone. However, the relative humidity higher than 5% decreases the response. 
Working at room temperature, the ester-modified sensor may be a good candidate for breath analysis applications for the diagnosis of diseases related to the ammonia gas biomarker. Such a sensor could be coupled with a condenser to reduce the amount of humidity in the analyzed breath sample to 5 %RH. The sensing mechanism of ester-modified SnO2 and the selectivity with regard to various volatile organic compounds have to be investigated in further work.

Fig. 1. Photograph of the two SnO2 sensor sides deposited by screen printing.
Fig. 2. Schematic illustration of SnO2-APTES, SnO2-APTES-alkyl, and SnO2-APTES-ester synthesis steps. Hexanoyl chloride represents the molecule C5H8ClOR with R: CH3 and methyl adipoyl chloride is the molecule C5H8ClOR with R: COOCH3.
Fig. 3. ATR-FTIR spectra of SnO2 (red curve), SnO2-APTES (green curve), SnO2-APTES-alkyl (blue curve), and SnO2-APTES-ester (black curve) films.
Fig. 5. Relative response of pure SnO2, SnO2-APTES, SnO2-APTES-alkyl, and SnO2-APTES-ester sensors versus ammonia concentrations balanced with 5% RH air at 25 °C.
Fig. 6. Sensor response curves of SnO2-APTES-ester to 100 ppm of ammonia gas balanced with dry air, 5%RH, 26%RH, and 50%RH at 25 °C.
Fig. 7. Sensor response of SnO2-APTES-ester to 100 ppm ammonia gas balanced with humid air (5%RH) at 25 °C, 50 °C, and 100 °C.
Figure 8b also shows the stability of the sensor with time. The stability of metal oxides is a challenge, mostly for room-temperature gas sensors. The present results show quite good stability of the baseline; the drift is around 0.98% over 16 hours.
Fig. 8. Change in conductance of SnO2-APTES-ester upon exposure to different concentrations of ammonia gas in humid air (5%RH) at 25 °C, a) [NH3] ranging from 0.2 to 5 ppm and b) [NH3] ranging from 5 to 100 ppm.
Fig. 9. Sensor response of SnO2-APTES-ester upon exposure to 50 ppm of ammonia, ethanol, and acetone gases in humid air (5%RH) at 25 °C.
35,795
[ "989772", "742649", "830507", "743941", "830508" ]
[ "9711", "209650", "475510", "761", "9711", "209650", "475510", "9711", "209650", "475510", "9711", "209650", "475510", "9711", "209650", "475510" ]
01757269
en
[ "spi" ]
2024/03/05 22:32:10
2016
https://hal.science/hal-01757269/file/MMT_4RUU_Nurahmi_Caro_Wenger_Schadlbauer_Husty.pdf
Latifah Nurahmi email: latifah.nurahmi@irccyn.ec-nantes.fr Stéphane Caro email: stephane.caro@irccyn.ec-nantes.fr Philippe Wenger email: philippe.wenger@irccyn.ec-nantes.fr Josef Schadlbauer email: josef.schadlbauer@uibk.ac.at Manfred Husty email: manfred.husty@uibk.ac.at Reconfiguration analysis of a 4-RUU parallel manipulator

Introduction
To the best of the authors' knowledge, the notion of operation mode was initially introduced by Zlatanov et al. in [START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF] to explain the behaviour of the three degree-of-freedom (3-dof) DYMO robot, which can undergo a variety of transformations when passing through singular configurations. In [START_REF] Kong | Reconfiguration Analysis of a 3-DOF Parallel Mechanism Using Euler Parameter Quaternions and Algebraic Geometry Method[END_REF], the author analysed the types of operation modes and the transition configurations of the 3-RER Parallel Manipulator (PM) based upon the Euler parameter quaternions. Walter et al. in [START_REF] Walter | A Complete Kinematic Analysis of the SNU 3-UPU Parallel Robot[END_REF] used the Study kinematic mapping to show that the 3-UPU PM built at the Seoul National University (SNU) has nine different operation modes. Later, in [START_REF] Walter | Kinematic Analysis of the TSAI 3-UPU Parallel Manipulator Using Algebraic Methods[END_REF], the authors revealed five different operation modes of the 3-UPU PM proposed by Tsai in 1996 [START_REF] Tsai | Kinematics of a 3-DOF Platform with Three Extensible Limbs[END_REF]. By using the same approach, Schadlbauer et al. in [START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF] found two distinct operation modes of the 3-RPS PM proposed by Hunt in 1983 [START_REF] Hunt | Structural Kinematics of In-Parallel-Actuated Robot-Arms[END_REF]. Later, in [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF], the authors characterized the motion type in both operation modes by using the axodes. The self-motions of this manipulator were classified in [START_REF] Schadlbauer | Self-Motions of 3-RPS Manipulators[END_REF]. Another PM of the 3-RPS family is the 3-RPS Cube PM, proposed by Huang et al. in 1995 [10]. Nurahmi et al. in [START_REF] Nurahmi | Kinematic Analysis of the 3-RPS Cube Parallel Manipulator[END_REF][START_REF] Nurahmi | Motion Capability of the 3-RPS Cube Parallel Manipulator[END_REF] found that this manipulator has only one operation mode, in which the 3-dof general motion and the 1-dof Vertical Darboux Motion occur inside the same operation mode. Accordingly, a general methodology for the type synthesis of reconfigurable mechanisms has been proposed and several new reconfigurable mechanisms have been generated. In [START_REF] Kong | Type Synthesis of Parallel Mechanisms with Multiple Operation Modes[END_REF][START_REF] Kong | Type Synthesis of Variable Degrees-of-Freedom Parallel Manipulators with Both Planar and 3T1R Operation Modes[END_REF], the authors proposed a general method based upon screw theory to synthesize a PM that can perform two operation modes. In [START_REF] He | Kinematic Analysis of a Single-Loop Reconfigurable 7R Mechanism with Multiple Operation Modes[END_REF], a novel 1-dof single-loop reconfigurable 7-R mechanism with multiple operation modes based upon the Sarrus mechanism was proposed.
The following year, the reconfiguration analysis of this mechanism based on the kinematic mapping and the algebraic geometry method was presented in [START_REF] Kong | Type Synthesis and Reconfiguration Analysis of a Class of Variable-DOF Single-Loop Mechanisms[END_REF]. By using the theory of the displacement groups, the lower-mobility PM with multiple operation modes and different number of dof were presented in [START_REF] Fanghella | Parallel Robots that Change Their Group of Motion[END_REF]. Refaat et al. in [START_REF] Refaat | Two-Mode Overconstrained Three DOFs Rotational-Translational Linear-Motor-Based Parallel-Kinematics Mechanism for Machine Tool Applications[END_REF] introduced a family of 3-dof PM that can exhibit two 1T1R modes by using Lie-group theory. In [START_REF] Gogu | Maximally Regular T2R1-Type Parallel Manipulators with Bifurcated Spatial Motion[END_REF], Gogu introduced several PM with two 2T1R modes. In [START_REF] Gan | Mobility Change in Two Types of Metamorphic Parallel Mechanisms[END_REF], a new joint was presented and added in the manipulator architecture hence it allows the moving platform to change the motion types. By adding a rTPS limb which has two phases, a new metamorphic parallel mechanism is introduced in [START_REF] Gan | Unified Kinematics and Singularity Analysis of a Metamorphic Parallel Mechanism with Bifurcated Motion[END_REF]. The link-coincidence-based geometric-constraint method is proposed in [START_REF] Dai | Mobility in Metamorphic Mechanisms of Foldable/Erectable Kinds[END_REF] to obtain reconfigurable mechanisms originated from carton folds and packaging dated back to 1996. At the same year, Wohlhart in [START_REF] Wohlhart | Kinematotropic linkages[END_REF] showed mechanisms that changed mobility through singularities. In [START_REF] Li | Parallel Mechanisms with Bifurcation of Schoenflies Motion[END_REF], Li and Hervé investigated several PM with two distinct Schönflies modes. The Schönflies motion contains three independent translations and one pure rotation about an axis of fixed direction, namely 3T1R. The authors continued in [START_REF] Lee | Isoconstrained Parallel Generators of Schoenflies Motion[END_REF] to present the systematic approach to synthesize the iso-constrained parallel Schönflies motion generators with two identical 5-dof limbs. The type synthesis of the 3T1R PM with four identical limb structures was performed in [START_REF] Kong | Type Synthesis of Parallel Mechanisms[END_REF], which leads to a kinematic architecture with four revolute actuators, namely the 4-RUU PM. In [START_REF] Masouleh | Solving the Forward Kinematic Problem of 4-DOF Parallel Mechanisms (3T1R) with Identical Limb Structures and Revolute Actuators Using the Linear Implicitization Algorithm[END_REF], eight solutions of the direct kinematics were enumerated by using the linear implicitization algorithm. Amine et al. in [START_REF] Amine | Singularity Analysis of the 4-RUU Parallel Manipulator Using Grassmann-Cayley Algebra[END_REF][START_REF] Amine | Singularity Conditions of 3T1R Parallel Manipulators with Identical Limb Structures[END_REF] investigated the singularity conditions of the 4-RUU PM by using the Grassmann-Cayley Algebra and the Grassmann Geometry. It is shown that the 4-RUU PM is an over-constrained manipulator and it shares some common properties among the constraint wrenches. 
By using an algebraic description of the manipulator and the Study kinematic mapping based upon [START_REF] Husty | Algebraic Methods in Mechanism Analysis and Synthesis[END_REF], a characterization of the operation modes of the 4-RUU PM is discussed in more detail in this paper. Due to the particular topology of the RUU limb, which comprises two links with one revolute actuator attached to the base, the actuated joint angle appears in every constraint equation. This kinematic issue does not allow a single primary decomposition to be computed, because the constraint equations change for every joint input. As a consequence, the 4-RUU PM is decomposed into two iso-constrained 2-RUU PMs. The constraint equations of each 2-RUU PM are first derived and the primary decomposition is computed. It turns out that the 2-RUU PM has three 4-dof operation modes. By combining the results of the primary decompositions of both 2-RUU PMs, the operation modes of the 4-RUU PM can be characterized. It reveals that the 4-RUU PM has two 4-dof operation modes and one 2-dof operation mode. The singularities are examined by deriving the determinant of the Jacobian matrix of the constraint equations with respect to the Study parameters. It is shown that the manipulators are able to change from one operation mode to another by passing through configurations that belong to both modes. The singularity conditions are mapped onto the joint space. Eventually, the changes of operation modes are illustrated. This paper is organized as follows: a detailed description of the manipulator architecture is given in Section 2. The constraint equations of the manipulators are expressed in Section 3. These constraint equations are used to identify the operation modes in Section 4. In Section 5, the singularity conditions and self-motions are presented. Eventually, the operation mode changes of the 4-RUU PM are discussed in Section 6.

The 4-RUU PM shown in Fig. 1 is composed of a square base, a square moving platform, and four identical limbs. The origin O of the fixed frame Σ_0 and the origin P of the moving frame Σ_1 are located at the center of the square base and of the square moving platform, respectively. Each limb is composed of five R-joints such that the second and the third ones, as well as the fourth and the fifth ones, are built with intersecting and perpendicular axes. Thus they are assimilated to U-joints. The printed model of this manipulator is shown in Fig. 2. The first R-joint is attached to the base and is actuated. Its rotation angle is defined by θ_1i (i = 1, ..., 4). The axes of the first and the second joints are directed along the Z-axis. The axis of the fifth joint is directed along the z-axis. The second axis and the fifth axis are denoted by v_i and n_i (i = 1, ..., 4), respectively. The axes of the third and the fourth joints are parallel. The axis of the third joint is denoted by s_i (i = 1, ..., 4) and it changes instantaneously as a function of θ_2i (i = 1, ..., 4), as shown in Fig. 3. The first R-joint of the i-th limb is located at point A_i, at a distance a from the origin O of the fixed frame Σ_0. The first U-joint is denoted by point B_i, at a distance l from point A_i. Link A_iB_i always moves in a plane normal to v_i.
The axis of the third joint is thus expressed as:

s_i = ( 0, cos(θ_2i), sin(θ_2i), 0 )^T ,   i = 1, ..., 4   (1)

Hence the coordinates of points A_i and B_i expressed in the fixed frame Σ_0 are:

r^0_A1 = ( 1, a, 0, 0 )^T        r^0_B1 = ( 1, l cos(θ_11) + a, l sin(θ_11), 0 )^T
r^0_A2 = ( 1, 0, a, 0 )^T        r^0_B2 = ( 1, l cos(θ_12), l sin(θ_12) + a, 0 )^T
r^0_A3 = ( 1, -a, 0, 0 )^T       r^0_B3 = ( 1, l cos(θ_13) - a, l sin(θ_13), 0 )^T
r^0_A4 = ( 1, 0, -a, 0 )^T       r^0_B4 = ( 1, l cos(θ_14), l sin(θ_14) - a, 0 )^T   (2)

The moving platform is connected to the limbs by four U-joints, the intersection point of whose R-joint axes is denoted by C_i. The length of the moving platform from the origin P of the moving frame Σ_1 to point C_i is defined by b. The length of link B_iC_i is defined by r. The coordinates of point C_i expressed in the moving frame Σ_1 are:

r^1_C1 = ( 1, b, 0, 0 )^T        r^1_C3 = ( 1, -b, 0, 0 )^T
r^1_C2 = ( 1, 0, b, 0 )^T        r^1_C4 = ( 1, 0, -b, 0 )^T   (3)

As a consequence, there are four design parameters a, b, l, and r, and four joint variables θ_11, θ_12, θ_13, and θ_14 that determine the motions of the 4-RUU PM.

Constraint equations
In this section, the constraint equations are derived, whose solutions describe the possible poses of the moving platform (coordinate frame Σ_1) with respect to Σ_0. To obtain the coordinates of points C_i and vectors n_i expressed in Σ_0, the Study parametrization of a spatial Euclidean transformation matrix M ∈ SE(3) based on [START_REF] Husty | Algebraic Methods in Mechanism Analysis and Synthesis[END_REF] is used:

M = [ x_0^2 + x_1^2 + x_2^2 + x_3^2    0^T_{3×1} ]
    [ M_T                              M_R       ]   (4)

where M_T and M_R represent the translational and rotational parts of the transformation matrix M, respectively, and are expressed as follows:

M_T = [ 2(-x_0 y_1 + x_1 y_0 - x_2 y_3 + x_3 y_2) ]
      [ 2(-x_0 y_2 + x_1 y_3 + x_2 y_0 - x_3 y_1) ]
      [ 2(-x_0 y_3 - x_1 y_2 + x_2 y_1 + x_3 y_0) ]

M_R = [ x_0^2 + x_1^2 - x_2^2 - x_3^2    2(x_1 x_2 - x_0 x_3)             2(x_1 x_3 + x_0 x_2)           ]
      [ 2(x_1 x_2 + x_0 x_3)             x_0^2 - x_1^2 + x_2^2 - x_3^2    2(x_2 x_3 - x_0 x_1)           ]
      [ 2(x_1 x_3 - x_0 x_2)             2(x_2 x_3 + x_0 x_1)             x_0^2 - x_1^2 - x_2^2 + x_3^2  ]   (5)

The parameters x_0, x_1, x_2, x_3, y_0, y_1, y_2, y_3, which appear in matrix M, are called Study parameters. These parameters make it possible to parametrize SE(3) with dual quaternions. The Study kinematic mapping maps each spatial Euclidean displacement of SE(3), via the transformation matrix M, onto a projective point X [x_0 : x_1 : x_2 : x_3 : y_0 : y_1 : y_2 : y_3] on the 6-dimensional Study quadric S ∈ P^7 [START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF], such that:

SE(3) → X ∈ P^7 ,   (x_0 : x_1 : x_2 : x_3 : y_0 : y_1 : y_2 : y_3)^T ≠ (0 : 0 : 0 : 0 : 0 : 0 : 0 : 0)^T

Every projective point X will represent a spatial Euclidean displacement if it fulfils the following equation and inequality:

x_0 y_0 + x_1 y_1 + x_2 y_2 + x_3 y_3 = 0 ,   x_0^2 + x_1^2 + x_2^2 + x_3^2 ≠ 0   (7)

These two conditions will be used in the following computations to simplify the algebraic expressions. First of all, the tangent half-angle substitutions are performed to rewrite the trigonometric functions of θ_1i and θ_2i (i = 1, ..., 4) in terms of rational functions of new variables t_ij. However, the tangent half-angle substitutions increase the degree of the variables and make the computation quite heavy.
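Before applying these substitutions, the Study parametrization can be made concrete with a short numerical sketch; the code below is an illustration written for this text (not the authors' implementation) and simply evaluates Eqs. (4)-(5) for a given set of Study parameters. With the normalization g_14, the leading entry equals 1 and the lower-right 3×3 block is an ordinary rotation matrix.

import numpy as np

def study_matrix(x, y):
    """Build the 4x4 transformation matrix M of Eqs. (4)-(5) from Study
    parameters x = (x0, x1, x2, x3), y = (y0, y1, y2, y3)."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    d = x0**2 + x1**2 + x2**2 + x3**2          # must be nonzero, see Eq. (7)
    MT = 2 * np.array([-x0*y1 + x1*y0 - x2*y3 + x3*y2,
                       -x0*y2 + x1*y3 + x2*y0 - x3*y1,
                       -x0*y3 - x1*y2 + x2*y1 + x3*y0])
    MR = np.array([[x0**2 + x1**2 - x2**2 - x3**2, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
                   [2*(x1*x2 + x0*x3), x0**2 - x1**2 + x2**2 - x3**2, 2*(x2*x3 - x0*x1)],
                   [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0**2 - x1**2 - x2**2 + x3**2]])
    M = np.zeros((4, 4))
    M[0, 0] = d
    M[1:, 0] = MT
    M[1:, 1:] = MR
    return M

# identity displacement: x = (1, 0, 0, 0), y = (0, 0, 0, 0) satisfies Eq. (7)
print(study_matrix((1, 0, 0, 0), (0, 0, 0, 0)))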
The tangent half-angle substitution reads:

cos(θ_ij) = (1 - t_ij^2)/(1 + t_ij^2) ,   sin(θ_ij) = 2 t_ij/(1 + t_ij^2) ,   i = 1, 2 ,  j = 1, ..., 4   (8)

where t_ij = tan(θ_ij/2). The coordinates of points C_i and the vectors n_i expressed in the fixed frame Σ_0 are obtained by:

r^0_Ci = M r^1_Ci ,   n^0_i = M n^1_i ,   i = 1, ..., 4   (9)

The coordinates of all points are thus given in terms of the Study parameters and the design parameters. The constraint equations can be obtained by examining the design of the RUU limb. The link connecting points B_i and C_i is coplanar with the vectors v_i and n^0_i. Accordingly, the scalar triple product of the vectors (r^0_Ci - r^0_Bi), v_i and n^0_i vanishes, namely:

(r^0_Ci - r^0_Bi)^T . (v_i × n^0_i) = 0 ,   i = 1, ..., 4   (10)

After computing the corresponding scalar triple product and removing the common denominators, the following constraint equations come out:

g_1 : (a t_11^2 - b t_11^2 - l t_11^2 + a - b + l) x_0 x_1 + 2 l t_11 x_0 x_2 - (2 t_11^2 + 2) x_0 y_0 + 2 l t_11 x_3 x_1 + (-a t_11^2 - b t_11^2 + l t_11^2 - a - b - l) x_3 x_2 + (-2 t_11^2 - 2) y_3 x_3 = 0   (11a)

g_2 : (l - l t_12^2) x_0 x_1 + (a t_12^2 - b t_12^2 + 2 l t_12 + a - b) x_0 x_2 - (2 t_12^2 + 2) x_0 y_0 + (a t_12^2 + b t_12^2 + 2 l t_12 + a + b) x_3 x_1 + (l t_12^2 - l) x_3 x_2 - (2 t_12^2 + 2) y_3 x_3 = 0   (11b)

g_3 : (a t_13^2 - b t_13^2 + l t_13^2 + a - b - l) x_0 x_1 - 2 l t_13 x_0 x_2 + (2 t_13^2 + 2) x_0 y_0 - 2 l t_13 x_1 x_3 + (-a t_13^2 - b t_13^2 - l t_13^2 - a - b + l) x_2 x_3 + (2 t_13^2 + 2) x_3 y_3 = 0   (11c)

g_4 : (l t_14^2 - l) x_0 x_1 + (a t_14^2 - b t_14^2 - 2 l t_14 + a - b) x_0 x_2 + (2 t_14^2 + 2) x_0 y_0 + (a t_14^2 + b t_14^2 - 2 l t_14 + a + b) x_1 x_3 + (-l t_14^2 + l) x_2 x_3 + (2 t_14^2 + 2) x_3 y_3 = 0   (11d)

To derive the constraint equations corresponding to the length r of link B_iC_i, the distance equation can be formulated as (r^0_Ci - r^0_Bi)^2 = r^2. As a consequence, the following four equations are obtained:

g_5 : (a^2 t_11^2 - 2ab t_11^2 - 2al t_11^2 + b^2 t_11^2 + 2bl t_11^2 + l^2 t_11^2 - r^2 t_11^2 + a^2 - 2ab + 2al + b^2 - 2bl + l^2 - r^2) x_0^2 - 8bl t_11 x_0 x_3 + (4a t_11^2 - 4b t_11^2 - 4l t_11^2 + 4a - 4b + 4l) x_0 y_1 + 8l t_11 x_0 y_2 + (a^2 t_11^2 - 2ab t_11^2 - 2al t_11^2 + b^2 t_11^2 + 2bl t_11^2 + l^2 t_11^2 - r^2 t_11^2 + a^2 - 2ab + 2al + b^2 - 2bl + l^2 - r^2) x_1^2 - 8bl t_11 x_1 x_2 + (-4a t_11^2 + 4b t_11^2 + 4l t_11^2 - 4a + 4b - 4l) x_1 y_0 - 8l t_11 x_1 y_3 + (a^2 t_11^2 + 2ab t_11^2 - 2al t_11^2 + b^2 t_11^2 - 2bl t_11^2 + l^2 t_11^2 - r^2 t_11^2 + a^2 + 2ab + 2al + b^2 + 2bl + l^2 - r^2) x_2^2 - 8l t_11 x_2 y_0 + (4a t_11^2 + 4b t_11^2 - 4l t_11^2 + 4a + 4b + 4l) x_2 y_3 + (a^2 t_11^2 + 2ab t_11^2 - 2al t_11^2 + b^2 t_11^2 - 2bl t_11^2 + l^2 t_11^2 - r^2 t_11^2 + a^2 + 2ab + 2al + b^2 + 2bl + l^2 - r^2) x_3^2 + 8l t_11 x_3 y_1 + (-4a t_11^2 - 4b t_11^2 + 4l t_11^2 - 4a - 4b - 4l) x_3 y_2 + (4 t_11^2 + 4) y_0^2 + (4 t_11^2 + 4) y_1^2 + (4 t_11^2 + 4) y_2^2 + (4 t_11^2 + 4) y_3^2 = 0   (12a)

g_6 : (a^2 t^2 ...   (12b)

g_7 : (a^2 t_13^2 - 2ab t_13^2 + 2al t_13^2 + b^2 t_13^2 - 2bl t_13^2 + l^2 t_13^2 - r^2 t_13^2 + a^2 - 2ab - 2al + b^2 + 2bl + l^2 - r^2) x_0^2 + 8bl t_13 x_0 x_3 + (-4a t_13^2 + 4b t_13^2 - 4l t_13^2 - 4a + 4b + 4l) x_0 y_1 + 8l t_13 x_0 y_2 + (a^2 t_13^2 - 2ab t_13^2 + 2al t_13^2 + b^2 t_13^2 - 2bl t_13^2 + l^2 t_13^2 - r^2 t_13^2 + a^2 - 2ab - 2al + b^2 + 2bl + l^2 - r^2) x_1^2 + 8bl t_13 x_1 x_2 + (4a t_13^2 - 4b t_13^2 + 4l t_13^2 + 4a - 4b - 4l) x_1 y_0 - 8l t_13 x_1 y_3 + (a^2 t_13^2 + 2ab t_13^2 + 2al t_13^2 + b^2 t_13^2 + 2bl t_13^2 + l^2 t_13^2 - r^2 t_13^2 + a^2 + 2ab - 2al + b^2 - 2bl + l^2 - r^2) x_2^2 - 8l t_13 x_2 y_0 + (-4a t_13^2 - 4b t_13^2 - 4l t_13^2 - 4a - 4b + 4l) x_2 y_3 + (a^2 t_13^2 + 2ab t_13^2 + 2al t_13^2 + b^2 t_13^2 + 2bl t_13^2 + l^2 t_13^2 - r^2 t_13^2 + a^2 + 2ab - 2al + b^2 - 2bl + l^2 - r^2) x_3^2 + 8l t_13 x_3 y_1 + (4a t ...   (12c)

g_8 : ... + a^2 + 2ab + b^2 + l^2 - r^2) x_1^2 + (-4bl t_14^2 + 4bl) x_1 x_2 + (4l t_14^2 - 4l) x_1 y_0 + (4a t_14^2 + 4b t_14^2 - 8l t_14 + 4a + 4b) x_1 y_3 + (a^2 t_14^2 - 2ab t_14^2 + b^2 t_14^2 + l^2 t_14^2 - r^2 t_14^2 - 4al t_14 + 4bl t_14 + a^2 - 2ab + b^2 + l^2 - r^2) x_2^2 + (4a t_14^2 - 4b t_14^2 - 8l t_14 + 4a - 4b) x_2 y_0 + (-4l t_14^2 + 4l) x_2 y_3 + (a^2 t_14^2 + 2ab t_14^2 + b^2 t_14^2 + l^2 t_14^2 - r^2 t_14^2 - 4al t_14 - 4bl t_14 + a^2 + 2ab + b^2 + l^2 - r^2) x_3^2 + (-4a t_14^2 - 4b t_14^2 + 8l t_14 - 4a - 4b) x_3 y_1 + (4l t_14^2 - 4l) x_3 y_2 + (4 t_14^2 + 4) y_0^2 + (4 t_14^2 + 4) y_1^2 + (4 t_14^2 + 4) y_2^2 + (4 t_14^2 + 4) y_3^2 = 0   (12d)

To derive the constraint equations corresponding to the axes s_i of each limb, the scalar product of vector B_iC_i and vector s_i should vanish, i.e. (r^0_Ci - r^0_Bi)^T s_i = 0. Hence, four further constraint equations g_9, ..., g_12 are obtained, one per limb. Equation (7) is added since all solutions have to be within the Study quadric, i.e.:

g_13 : x_0 y_0 + x_1 y_1 + x_2 y_2 + x_3 y_3 = 0.

To exclude the exceptional generator (x_0 = x_1 = x_2 = x_3 = 0), we add the following normalization equation:

g_14 : x_0^2 + x_1^2 + x_2^2 + x_3^2 - 1 = 0.

It assures that no point of the exceptional generator appears as a solution.

Operation modes
The 4-RUU PM is an over-constrained mechanism [START_REF] Amine | Singularity Analysis of the 4-RUU Parallel Manipulator Using Grassmann-Cayley Algebra[END_REF][START_REF] Amine | Singularity Conditions of 3T1R Parallel Manipulators with Identical Limb Structures[END_REF], therefore it can be decomposed into two iso-constrained 2-RUU PMs as shown in Fig. 4. The printed model of the 4-RUU PM presented in Fig. 2 can also be decomposed into 2-RUU PMs, as shown in Fig. 5. The first mechanism consists of the 1st and the 3rd limbs, hence it is named the 2-RUU (I) PM. The second mechanism consists of the 2nd and the 4th limbs, hence it is named the 2-RUU (II) PM. The moving platforms of both mechanisms move independently. When the moving frames of both mechanisms are coincident (accordingly P_I ≡ P_II), we obtain the 4-RUU PM. As a consequence, the operation modes of the 4-RUU PM are determined by combining the results of the primary decompositions of the 2-RUU (I) and the 2-RUU (II) PMs, as presented in the following. The operation modes and the self-motions of the 2-RUU PM are presented in more detail in [START_REF] Nurahmi | Operation Modes and Self-motions of a 2-RUU Parallel Manipulator[END_REF].

The 2-RUU (I) PM
In the 2-RUU (I) PM, the first and the second R-joints in each limb are actuated. The design parameters are assigned as a = 2, b = 1, l = 1, r = 2 (in units that need not be specified). The set of eight constraint equations is written as a polynomial ideal with variables {x_0, x_1, x_2, x_3, y_0, y_1, y_2, y_3} over the coefficient ring C[t_11, t_13, t_21, t_23], defined as: I_(I) = ⟨ g_1, g_3, g_5, g_7, g_9, g_11, g_13, g_14 ⟩. At this point, the following ideal is examined: J_(I) = ⟨ g_1, g_3, g_13 ⟩. The primary decomposition is computed to verify whether the ideal J_(I) is the intersection of several smaller ideals. Indeed, the ideal J_(I) is decomposed into three components as J_(I) = ∩_{k=1}^{3} J_k(I), with the results of the primary decomposition:

J_1(I) = ⟨ x_0, x_3, x_1 y_1 + x_2 y_2 ⟩
J_2(I) = ⟨ x_1, x_2, x_0 y_0 + x_3 y_3 ⟩
J_3(I) = ⟨ … ⟩

Accordingly, the 2-RUU (I) PM under study has three operation modes.
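The decomposition can be spot-checked with a computer algebra system. The sketch below is only an illustration: it fixes one arbitrary joint input (t_11 = t_13 = 0), whereas the computation reported above keeps t_11 and t_13 symbolic, and it merely verifies with sympy that the generators of J_(I) reduce to zero modulo the first component J_1(I), so that the variety of J_1(I) indeed lies inside that of J_(I).

import sympy as sp

x0, x1, x2, x3, y0, y1, y2, y3 = sp.symbols('x0 x1 x2 x3 y0 y1 y2 y3')
a, b, l = 2, 1, 1        # design parameters used in the paper
# g1 and g3 of Eqs. (11a) and (11c) specialised to t11 = t13 = 0 (hypothetical choice), plus g13
g1  = (a - b + l)*x0*x1 - 2*x0*y0 - (a + b + l)*x3*x2 - 2*y3*x3
g3  = (a - b - l)*x0*x1 + 2*x0*y0 - (a + b - l)*x2*x3 + 2*x3*y3
g13 = x0*y0 + x1*y1 + x2*y2 + x3*y3

# Groebner basis of the first component J_1(I) = <x0, x3, x1*y1 + x2*y2>
J1 = sp.groebner([x0, x3, x1*y1 + x2*y2], x0, x1, x2, x3, y0, y1, y2, y3, order='lex')

# each generator of J_(I) reduces to zero modulo J_1(I)
remainders = [J1.reduce(g)[1] for g in (g1, g3, g13)]
print(remainders)   # expected: [0, 0, 0]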
The computation of the Hilbert dimension of ideal J k(I) with t 11 , t 13 , t 21 , t 23 treated as variables shows that: dim(J k(I) ) = 4 (k = 1, ..., 3). To complete the analysis, the remaining equations are added by writing: K k(I) : J k(I) ∪ g 5 , g 7 , g 9 , g 11 , g 14 , k = 1, ..., 3 (15) It follows that the 2-RUU (I) PM has three 4-dof operation modes. This type of manipulator is called invariable-dof PM in [START_REF] Kong | Type Synthesis of Variable Degrees-of-Freedom Parallel Manipulators with Both Planar and 3T1R Operation Modes[END_REF]. Each system K k(I) corresponds to a specific operation mode that will be discussed in the following. System K 1(I) : 1st Schönflies mode In this operation mode, the moving platform is reversed about an axis parallel to the XY -plane of Σ 0 by 180 degrees from the "identity condition". The identity condition is when the moving frame and the fixed frame are coincident, i.e. Σ 1 ≡ Σ 0 and the transformation matrix is an identity matrix. The condition x 0 = 0, x 3 = 0, x 1 y 1 + x 2 y 2 = 0 are valid for all poses and are substituted into the transformation matrix M, such that: M 1(I) =       1 0 0 2(x 1 y 0 -x 2 y 3 ) x 2 1 -x 2 2 2x 1 x 2 0 2(x 1 y 3 + x 2 y 0 ) 2x 1 x 2 -x 2 1 + x 2 2 0 - 2y 2 x 1 0 0 -1       (16) From the transformation matrix M 1(I) , it can be seen that the 2-RUU (I) PM has 3-dof translational motions, which are parametrized by y 0 , y 2 , y 3 and 1-dof rotational motion, which is parametrized by x 1 , x 2 in connection with x 2 1 + x 2 2 -1 = 0 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF]. The z-axis of frame Σ 1 attached to the moving platform is always pointing downward in this operation mode and the moving platform remains parallel to the base. System K 2(I) : 2nd Schönflies mode In this operation mode, the condition x 1 = 0, x 2 = 0, x 0 y 0 + x 3 y 3 = 0 are valid for all poses. The transformation matrix in this operation mode is written as: M 2(I) =       1 0 0 0 -2(x 0 y 1 -x 3 y 2 ) x 2 0 -x 2 3 -2x 0 x 3 0 -2(x 0 y 2 + x 3 y 1 ) 2x 0 x 3 x 2 0 -x 2 3 0 - 2y 3 x 0 0 0 1       (17) From the transformation matrix M 2(I) , it can be seen that the 2-RUU (I) PM has 3-dof translational motions, which are parametrized by y 1 , y 2 , y 3 and 1-dof rotational motion, which is parametrized by x 0 , x 3 in connection with x 2 0 + x 2 3 -1 = 0 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF]. In this operation mode, the z-axis of frame Σ 1 attached the moving platform is always pointing upward and the moving platform remains parallel to the base. The systems K 1(I) and K 2(I) have the same motion type, i.e. Schönflies motion, however they do not have configurations in common. It occurs since the orientation of the moving platform is not the same from one operation mode to the other. The z-axis of frame Σ 1 attached to the moving platform in system K 1(I) is always pointing downward (the moving platform is always titled by 180 degrees), while in the system K 2(I) , the z-axis of frame Σ 1 attached to the moving platform is always pointing upward. System K 3(I) : Third mode In this operation mode, the moving platform is no longer parallel to the base. The variables x 3 , y 0 , y 1 can be solved linearly from the ideal J 3(I) and are shown in Eq. [START_REF] Refaat | Two-Mode Overconstrained Three DOFs Rotational-Translational Linear-Motor-Based Parallel-Kinematics Mechanism for Machine Tool Applications[END_REF]. 
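Before turning to the third mode in detail, the claim that each Schönflies mode carries three translations and a single rotation, with the moving platform flipped in K 1(I) and upright in K 2(I), can be verified from the displayed transformation matrices. The snippet below is only a sketch, not part of the original analysis: it takes the 3x3 rotation blocks read off M 1(I) and M 2(I) above and confirms that, under the normalizations x1² + x2² = 1 and x0² + x3² = 1 respectively, each block is a proper rotation whose (3,3) entry equals -1 (z-axis of Σ 1 pointing downward) or +1 (pointing upward).

# Rotation blocks of the two Schönflies-mode transformation matrices (sketch).
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3', real=True)

# Rotation block of M1(I), Eq. (16): valid on x0 = x3 = 0 with x1**2 + x2**2 = 1.
R1 = sp.Matrix([[x1**2 - x2**2,  2*x1*x2,        0],
                [2*x1*x2,       -x1**2 + x2**2,  0],
                [0,              0,             -1]])

# Rotation block of M2(I), Eq. (17): valid on x1 = x2 = 0 with x0**2 + x3**2 = 1.
R2 = sp.Matrix([[x0**2 - x3**2, -2*x0*x3,        0],
                [2*x0*x3,        x0**2 - x3**2,  0],
                [0,              0,              1]])

def proper_rotation(R, normalization):
    """True if R is orthogonal with det = +1 modulo the normalization constraint."""
    RtR = sp.simplify(sp.expand(R.T * R).subs(normalization))
    det = sp.simplify(sp.expand(R.det()).subs(normalization))
    return RtR == sp.eye(3) and det == 1

print(proper_rotation(R1, {x2**2: 1 - x1**2}), R1[2, 2])   # True, -1: platform reversed
print(proper_rotation(R2, {x3**2: 1 - x0**2}), R2[2, 2])   # True, +1: platform upright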
Since solving the inverse kinematics of t 11 , t 13 are quite computationally expensive, the joint variables t 11 , t 13 are considered to be the independent parameters of this mode. Then the parameters y 2 , y 3 can be solved in terms of (x 0 , x 1 , x 2 , t 11 , t 13 ). Substituting back the parameters y 2 , y 3 into Eq. ( 18), then the Study parameters x 3 , y 0 , y 1 , y 2 , y 3 are now parametrized by (x 0 , x 1 , x 2 , t 11 , t 13 ). Accordingly, the 2-RUU (I) PM will perform two translational motions, which are parametrized by variables t 11 , t 13 and two rotational motions, which are parametrized by variables x 0 , x 1 , x 2 in connection with the normalization equation g 14 . x 3(I) = (t ). As a consequence, in this operation mode, the links B i C i (i = 1, 3) from both limbs are always parallel to the same plane and the axes s i (i = 1, 3) from both limbs are always parallel too. The 2-RUU (II) PM In the 2-RUU (II) PM, the first and the second R-joints in each limb are also actuated. The design parameters are assigned with the same values as a = 2, b = 1, l = 1, r = 2 (in units that need not be specified). The set of eight constraint equations is written as a polynomial ideal with variables {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 } over the coefficient ring C[t 12 , t 14 , t 22 , t 24 ], defined as: I (II) = g 2 , g 4 , g 6 , g 8 , g 10 , g 12 , g 13 , g 14 . At this point, the following ideal is examined: J (II) = g 2 , g 4 , g 13 . The primary decomposition is computed and it turns out that the ideal J (II) is decomposed into three components as: J (II) = 3 k=1 J k(II) , with the results of primary decomposition: J 1(II) = x 0 , x 3 , x 1 y 1 + x 2 y 2 J 2(II) = x 1 , x 2 , x 0 y 0 + x 3 y 3 J 3(II) = (3t Accordingly, the 2-RUU (II) PM under study has three operation modes. The computation of the Hilbert dimension of ideal J k(II) with t 12 , t 14 , t 22 , t 24 treated as variables shows that: dim(J k(II) ) = 4 (k = 1, ..., 3). To complete the analysis, the remaining equations are added by writing: K k(II) : J k(II) ∪ g 6 , g 8 , g 10 , g 12 , g 14 , k = 1, ..., 3 (20) It follows that the 2-RUU (II) PM has three 4-dof operation modes too. The system K 1(II) is identical with the system K 1(I) (as explained in Section 4.1.1), in which the moving platform is titled about an axis of parallel to XY -plane of Σ 0 by 180 degrees and it can exhibit the Schönflies motion with pure rotation about Z-axis. The system K 2(II) is identical with the system K 2(I) (as explained in Section 4.1.2), where the moving platform can exhibit 3-dof independent translations and one pure rotation about Z-axis. In the system K 3(II) , the moving platform is no longer parallel to the base. The variables x 3 , y 0 , y 1 can be solved linearly from the ideal J 3(II) and are shown in Eq. [START_REF] Gan | Unified Kinematics and Singularity Analysis of a Metamorphic Parallel Mechanism with Bifurcated Motion[END_REF]. Since solving the inverse kinematics of t 12 , t 14 are quite computationally expensive, the joint variables t 12 , t 14 are considered to be the independent parameters in this third mode. Then the parameters y 2 , y 3 can be solved in terms of (x 0 , x 1 , x 2 , t 12 , t 14 ). Substituting back the parameters y 2 , y 3 into Eq. ( 21), the Study parameters x 3 , y 0 , y 1 , y 2 , y 3 are now obtained and parametrized by (x 0 , x 1 , x 2 , t 12 , t 14 ). 
Hence the moving platform of the 2-RUU (II) PM will perform two translational motions which are parametrized by t 12 , t 14 and two rotational motions which are parametrized by x 0 , x 1 , x 2 in connection with the normalization equation g 14 . Under the system K 3(II) , the joint angles t 22 and t 24 can be computed from the equations g 10 , g 12 . It reveals that no matter the value of the first actuated joint (t 12 , t 14 ) in each limb, these equations (g 10 , g 12 ) vanish for two real solutions, namely (1.) t 22 = - 1 t 24 (θ 22 = π + θ 24 ) and (2.) t 22 = t 24 (θ 22 = θ 24 ). It means that in this operation mode, the links B i C i (i = 2, 4) from both limbs are always parallel to the same plane and the axes s i (i = 2, 4) from both limbs are always parallel too. Noticeably, the third mode of the 4-RUU PM is a 2-dof operation mode since two input joint angles are sufficient to define the pose of the manipulator. This operation mode was referred to coupled motion in [START_REF] Amine | Singularity Analysis of the 4-RUU Parallel Manipulator Using Grassmann-Cayley Algebra[END_REF][START_REF] Amine | Singularity Conditions of 3T1R Parallel Manipulators with Identical Limb Structures[END_REF]. Since the system K 3 is a lower dimension operation mode, namely 2-dof , this type of manipulator is called variable-dof PM in [START_REF] Kong | Type Synthesis of Variable Degrees-of-Freedom Parallel Manipulators with Both Planar and 3T1R Operation Modes[END_REF]. X Y Z O A 1 A 2 A 3 A 4 Singularities and Self-motions The 4-RUU PM reaches a singularity condition when the determinant of its Jacobian matrix vanishes. The Jacobian matrix is the matrix of all first order partial derivative of the constraint equations with respect to the Study parameters. Since the 4-RUU PM has more than one operation mode, the singular configurations can be classified into two different types, i.e. the singularity configurations that belong to a single operation mode and the singularity configurations that belong to more than one operation mode. The common configurations than belong to more than one operation mode allow the 4-RUU PM to switch from one operation mode to another operation mode, which will be discussed in Section 6. The singular poses are examined by taking the Jacobian matrix from each system of polynomial and computing its determinant. From practical point of view, the singularity surface is desirable also in the joint space. Hence the expression of the Jacobian determinant is added The second factor is y 3 = 0, when the moving platform is coplanar to the base, the 4-RUU PM is always in a singular configuration. Finally, the last factor of S 2 : det(J 2 ) = 0 is analysed. Due to the heavy elimination process, the actuated joint angles are assigned as t 11 = 0, t 12 = 1, and t 13 = 0. The elimination gives a univariate polynomial of degree 18 in t 14 as: 99544625t 18 14 -1042686200t 17 14 + 4293155895t 16 14 -9293913184t 15 14 + 10513736564t 14 14 -175591 6864t 13 14 -14239053636t 12 14 + 24856530336t 11 14 -20314694418t 10 14 + 4683758224t 9 14 + 92888105 78t 8 14 -13708185120t 7 14 + 10456187332t 6 14 -5370369152t 5 14 + 1960220428t 4 14 -507121440t 3 14 +89099433t 2 14 -9580248t 14 + 476847 = 0 (31) One singularity configuration in the 2nd Schönflies mode can be obtained by solving Eq. (31), for example t 14 = 1. Then the direct kinematics of at least one singularity pose can be computed with θ 11 = 0 • , θ 12 = 90 • , θ 13 = 0 • , θ 14 = 90 • and it is shown in Fig. 8. 
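Two of the numerical statements above are easy to check independently. First, in the tangent half-angle parametrization, t 22 = -1/t 24 is exactly the condition θ 22 = π + θ 24 (and t 22 = t 24 corresponds to θ 22 = θ 24). Second, t 14 = 1 must be a root of the degree-18 polynomial of Eq. (31). The sketch below verifies both; the coefficients are transcribed from the equation as printed above, so the exact integer evaluation at t 14 = 1 also acts as a check on that transcription.

# Sanity checks for the half-angle relation and the singularity polynomial (sketch).
import math
import numpy as np

# 1) tan((pi + theta)/2) = -1/tan(theta/2), i.e. t' = -1/t corresponds to theta' = pi + theta.
for theta in (0.3, 1.1, 2.4, -0.7):
    assert abs(math.tan((math.pi + theta) / 2) + 1.0 / math.tan(theta / 2)) < 1e-9

# 2) t14 = 1 is a root of Eq. (31) (coefficients listed from degree 18 down to 0).
coeffs = [99544625, -1042686200, 4293155895, -9293913184, 10513736564,
          -1755916864, -14239053636, 24856530336, -20314694418, 4683758224,
          9288810578, -13708185120, 10456187332, -5370369152, 1960220428,
          -507121440, 89099433, -9580248, 476847]
assert sum(coeffs) == 0        # exact evaluation of the polynomial at t14 = 1

# The remaining candidate singularities for this choice of (t11, t12, t13) can be
# approximated numerically, e.g. with numpy (no exact values are claimed here):
real_roots = sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-6)
print(real_roots)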
X Y Z O A 1 A 2 A 3 A 4 The determinant of Jacobian det(J 2 ) also vanishes in two particular conditions, namely when all the actuated joint angles have the same values and when the first links of each limb are pointing inward toward the origin O of the fixed frame Σ 0 . In the first condition, when all the actuated joint angles have the same values, the moving platform gains 1-dof self-motion. During the motion, the first links of each limb stay fixed and the moving platform can perform a rotational motion. Let us consider the actuated joint angles being t 11 = 0, t 12 = 0, t 13 = 0, t 14 = 0 and the 1-dof self-motion is parametrized by x 3 , as shown in Fig. 9. 5.3 Self-motions in Third mode (K 3 ) X Y Z O A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 Figure 10: First translation of self-motion in K 2 . X Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 Before computing the self-motion of the 4-RUU PM in the third mode K 3 , the singularity conditions of the 2-RUU (I) and 2-RUU (II) PM in K 3(I) and K 3(II) , respectively, are first discussed. The determinants of the Jacobian matrices are computed in each system K 3(I) and K 3(II) . The determinants of these Jacobian matrices consist of eleven factors that are defined as: S 3(I) : det(J X Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 Figure 13: Self-motion when θ 11 = θ 14 and θ 12 = θ 13 in K 3 . X Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 X Y Z A 1 A 2 A 3 A 4 Operation mode changing There exist common configurations where the mechanism, i.e. the 4-RUU PM, can switch from one operation mode to another operation mode. These configurations are well known as transition configurations. Transition configuration analysis is an important issue in the design process and control of the parallel manipulators with multiple operation modes [START_REF] Kong | Reconfiguration Analysis of a 3-DOF Parallel Mechanism Using Euler Parameter Quaternions and Algebraic Geometry Method[END_REF]. However, the 1st Schönflies mode and the 2nd Schönflies mode do not have configurations in common, since the variables x 0 , x 1 , x 2 , x 3 can never vanish simultaneously. It means that the 4-RUU PM cannot switch from the 1st Schönflies mode to the 2nd Schönflies mode directly. To change from the 1st Schönflies mode to the 2nd Schönflies mode, the 4-RUU PM should pass through the third mode, namely system K 3 . There exist some configurations in which the manipulator can switch from the 1st Schönflies mode to the third mode or vice versa, and these configurations belong to both operation modes. Noticeably, these configurations are also singular configurations since they lie in the intersection of two operation modes. In the following, the conditions on the actuated joint angles for the 4-RUU PM to change from one operation mode to another are presented. Each pair of ideals {K i ∪ K j } is analysed and the Study parameters are eliminated to find common solutions. 6.1 1st Schönflies mode (K 1 ) ←→ Third mode (K 3 ) To switch from the 1st Schönflies mode (K 1 ) to the third mode (K 3 ) or vice versa, one should find the configurations of the 4-RUU PM that fulfil the condition of both operation modes, namely (J 1 ∪ J 3 ). Then all Study parameters are eliminated to find an equation in terms of the actuated joint angles t 11 , t 12 , t 13 , t 14 , written as: 9t ... 
= 0 (34) In this transition configurations, the moving platform is twisted about an axis parallel to XY -plane of Σ 0 by 180 degrees and the actuated joint angles fulfil Eq. (34). The three conditions of the self-motions (1.) t 11 = t 12 and t 13 = t 14 , (2.) t 11 = -1/t 13 and t 12 = -1/t 14 , and (3.) t 11 = t 14 and t 12 = t 13 given in Section 5, are contained in Eq. (34). It shows that the moving platform is in a transition configuration of the 1st Schönflies mode K 1 and the third mode K 3 that amounts to a self-motion. 2nd Schönflies mode (K 2 ) ←→ Third mode (K 3 ) To switch from the 2nd Schönflies mode (K 2 ) to the third mode (K 3 ) or vice versa, one should find the configurations of the 4-RUU PM that fulfil the condition of both operation modes, namely (J 2 ∪ J 3 ). Then all Study parameters are eliminated to find an equation in terms of the actuated joint angles t 11 , t 12 , t 13 , t 14 , written as: 9t The moving platform of the 4-RUU PM is in a transition configuration between K 2 and K 3 when the moving platform is parallel to the base and the actuated joint angles fulfil Eq. (35). The three conditions of the self-motions (1.) t 11 = t 12 and t 13 = t 14 , (2.) t 11 = -1/t 13 and t 12 = -1/t 14 , and (3.) t 11 = t 14 and t 12 = t 13 given in Section 5, are contained in Eq. ( 35). It means that the moving platform is in a transition configuration of the 2nd Schönflies mode K 2 and the third mode K 3 that amounts to a self-motion. As a consequence, the transition between systems K 1 and K 2 occurs through the third system K 3 , which is lower dimension operation mode and amounts to a self-motion. The transition from K 2 to K 1 through the third mode K 3 with the condition of the actuated joint angles t 11 = t 12 and t 13 = t 14 , is shown in Fig. 15(a)-15(f). Conclusions In this paper, the method of algebraic geometry was applied to characterize the type of operation modes of the 4-RUU PM. The 4-RUU PM is initially decomposed into two 2-RUU PM. The constraint equations corresponding to two 2-RUU PM are derived and the primary decomposition is computed. It reveals that the 2-RUU PM have three 4-dof operation modes. However, when they are assembled to be the 4-RUU PM, its operation modes are composed of two 4-dof Schönflies modes and one 2-dof operation mode. The singularity conditions were computed and represented in the joint space, i.e., the actuated joint angles (t 11 , t 12 , t 13 , t 14 ). It turns out that every configuration in the 4-dof third modes of both 2-RUU PM, is always in singularity and it amounts to a self-motion. However, every configuration in the 2-dof third mode of the 4-RUU PM is not always in singularity, i.e., self-motion. The self-motion in this operation mode occurs if the actuated joint angles fulfil some particular conditions, namely (1.) t 11 = t 12 and t 13 = t 14 , (2.) t 11 = -1/t 13 and t 12 = -1/t 14 , and (3.) t 11 = t 14 and t 12 = t 13 . The 4-RUU PM is able to switch from the 1st Schönflies mode to the 2nd Schönflies mode by passing through the third mode, which contains self-motion. Figure 1 : 1 Figure 1: The 4-RUU PM. Figure 2 : 2 Figure 2: Printed model of 4-RUU PM. 2 A 3 A 4 B 1 B 2 B 3 B 4 Figure 3 : 23412343 Figure 3: Parametrization of the first two joint angles in each leg from top view. Figure 4 : 4 Figure 4: The 4-RUU PM decomposed into two 2-RUU PM. Figure 5 : 5 Figure 5: Printed model of 4-RUU PM decomposed into two 2-RUU PM. Figure 6 : 6 Figure 6: The third mode K 3 . 
Figure 7 : 7 Figure 7: Singularity pose in the 1st Schönflies mode K 1 . Figure 8 : 8 Figure 8: Singularity pose in the 2nd Schönflies mode K 2 . Figure 9 : 9 Figure 9: Self-motion when θ 11 = θ 12 = θ 13 = θ 14 = 0 in K 2 . X Figure 11 : 11 Figure 11: Second translation of self-motion in K 2 . Figure 14 : 14 Figure 14: Self-motion when θ 11 = π + θ 13 and θ 12 = π + θ 14 in K 3 . Figure 15 : 15 Figure 15: Transition from the 2nd Schönflies mode to the 1st Schönflies mode via the third mode with θ 11 = θ 12 and θ 13 = θ 14 . 2 11 t 2 13 x 1 -t 2 11 t 13 x 2 + t 11 t 2 13 x 2 + 2t 2 13 x 1 + t 11 x 2 -t 13 x 2 + x 1 )x 0 (3t 2 11 t 2 13 x 2 + t 2 11 t 13 x 1 -t 11 t 2 13 x 1 + 2t 2 11 x 2 + 4t 2 13 x 2 -t 11 x 1 + t 13 x 1 + 3x 2 ) + t 2 11 t 13 x 1 -t 11 t 2 13 x 1 + 2t 2 11 x 2 + 4t 2 13 x 2 -t 11 x 1 + t 13 x 1 + 3x 2 13 x 2 y 3 -t 11 t 2 13 x 2 1 -2t 11 t 2 13 x 2 2 + t 11 t 2 13 x 2y 3 + 2t 2 13 x 1 y 3 -t 11 x 2 2 + t 11 x 2 y 3 -t 13 x 2 1 -2t 13 x 2 2 -t 13 x 2 y 3 -x 1 x 2 + x 1 y 3 ) + t 2 11 t 13 x 1 -t 11 t 2 13 x 1 + 2t 2 11 x 2 + 4t 2 13 x 2 -t 11 x 1 + t 13 x 1 + 3x 2 )x 1 -t 11 x 0 x 2 2 + t 11 x 1 x 2 y 2 -t 13 x 0 x 2 1 -2t 13 x 0 x 2 2 -t 13 x 1 x 2 y 2 -x 0 x 1 x 2 -3x 2 2 y 2 )(18)Under this operation mode, the joint angles t 21 and t 23 can be computed from the equations g 9 , g 11 . It turns out that no matter the value of the first actuated joints (t 11 , t 13 ) in each limb, these equations vanish for two real solutions, namely (1.) t 21 = -1 t 23 y 0(I) = - 1 13 x 2 (t 2 11 t 2 3t 2 11 t 2 13 x 1 x 2 + t 2 11 t 2 13 x 1 y 3 -t 2 11 t 13 x 2 2 -t 2 11 t y 1(I) = (3t 2 11 t 2 13 x 2 (t 2 1 11 t 2 13 x 0 x 1 x 2 -3t 2 11 t 2 13 x 2 2 y 2 -t 2 11 t 13 x 0 x 2 2 -t 2 11 t 13 x 1 x 2 y 2 -t 11 t 2 13 x 0 x 2 1 -2t 11 t 2 13 x 0 x 2 2 + t 11 t 2 13 x 1 x 2 y 2 -2t 2 11 x 2 2 y 2 -4t 2 13 x 2 2 y 2 (θ 21 = π + θ 23 ) and (2.) t 21 = t 23 (θ 21 = θ 23 3(I) ) = (t 21 t 23 + 1)(-t 23 + t 21 )x 0 (t 2 13 + 1) 3 (t 2 11 + 1) 3 ... = 0 S 3(II) : det(J 3(II) ) = (t 22 t 24 + 1)(-t 24 + t 22 )x 0 (t 2 It can be seen that the first two factors of S 3(I) and S 3(II) in Eq. (32) are the necessary conditions for the 2-RUU (I) and 2-RUU (II) PM to be in the systems K 3(I) and K 3(II) , respectively (as explained in Sections 4.1 and 4.2). They are (1.) t 21 = -1 t 23 (θ 21 = π+θ 23 ) and (2.) t 21 = t 23 To investigate the self-motion in the third mode K 3 of the 4-RUU PM, let us recall the first every configuration in the third mode K 3 of the 4-RUU PM is not always in self-motion. The self-motion of the third mode K 3 occur if and only if the actuated joint angles (t 11 , t 12 , t 13 , t 14 ) fulfil particular conditions. 14 + 1) 3 (t 2 12 + 1) 3 ... = 0 (32) (θ 21 = θ 23 ) for the 2-RUU (I) PM, and (1.) t 22 = -1 t 24 (θ 22 = π+θ 24 ) and (2.) t 22 = t 24 (θ 22 = θ 24 ) for the 2-RUU (II) PM. It means that each configuration in the systems K 3(I) and K 3(II) amounts to a self-motion. R, P, E, U, S denote revolute, prismatic, planar, universal and spherical joints, respectively. The 4-RUU PM In the 4-RUU PM, the first R-joint in each limb is actuated. The design parameters are assigned with the same values as a = 2, b = 1, l = 1, r = 2. The set of ten constraint equations is written as a polynomial ideal with variables {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 } over the coefficient ring C[t 11 , t 12 , t 13 , t 14 ], defined as: I = g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 , g 13 , g 14 . At this point, the following ideal is examined: J = g 1 , g 2 , g 3 , g 4 , g 13 . 
Since the 4-RUU PM can be assembled by the 2-RUU (I) and 2-RUU (II) PM, the ideal J can be written as the linear combination of the results of primary decomposition from the 2-RUU (I) and 2-RUU (II) PM. It is noteworthy that the first and second components of the 2-RUU (I) and 2-RUU (II) PM are identical, so that J 1(I) = J 1(II) and J 2(I) = J 2(II) . with: As a consequence, the 4-RUU PM has four operation modes. To complete the analysis, the remaining equations are added by writing: The systems K 1 and K 2 are 4-dof operation modes, which correspond to the 1st Schönflies mode and the 2nd Schönflies mode, as explained in Sections 4.1.1 and 4.1.2, respectively. However, the characterization of the system K 3 needs to be discussed further, as presented hereafter. System K 3 : Third mode The third mode of the 4-RUU PM is characterized by the system K 3 . In this mode, the primary decomposition leads to the ideal J 3 and all polynomial equations in this ideal should vanish simultaneously. Hence the variables x 3 , y 0 , y 1 , y 2 , y 3 can be obtained in cascade from the ideal J 3 , such that: ) Not all polynomial in the ideal J 3 vanishes and it remains two polynomial equations, as follows: Nurahmi, Caro, Wenger, Schadlbauer, and Husty, submitted to Mech. and Mach. Theory 18 x 2 1 -... = 0 (26) As two iso-constrained 2-RUU PM are assembled to be the 4-RUU PM by combining the results of primary decomposition into J 3 , one of the 2-RUU PM is dependent to another. Accordingly, one of the 2-RUU PM can be selected to represent the third mode of the 4-RUU PM. The new ideals are defined corresponding to the two 2-RUU parallel manipulators as follows: Both ideals in Eq. ( 27) are solved separately to show that they lead to the same results. The variables x 3 , y 0 , y 1 , y 2 , y 3 obtained in Eq. ( 25) are then substituted into ideals L I , L II in Eq. [START_REF] Masouleh | Solving the Forward Kinematic Problem of 4-DOF Parallel Mechanisms (3T1R) with Identical Limb Structures and Revolute Actuators Using the Linear Implicitization Algorithm[END_REF]. The variable x 0 can be solved from g 14 and the equations g 5 , g 6 , g 7 , g 8 split into two factors. The first factors of these equations have the same mathematical expression and lead to the 1-dof self-motion, which will be discussed further in Section 5.3. The second factors are analysed thereafter. The variable x 1 is solved from each ideal and yields two equations in terms of (x 2 , t 29). Back substitution into Eqs. ( 25)-( 28), all Study parameters can be solved and one of the manipulator poses can be obtained as shown in Fig. 6 with θ 11 = -84 Singularities in 1st Schönflies mode (K 1 ) The determinant of the Jacobian matrix is computed in the system K 1 , which consists of five constraint equations over five variables (x 1 , x 2 , y 0 , y 2 , y 3 ). Hence the 5 × 5 Jacobian matrix can be obtained. The factorization of the determinant of this Jacobian matrix S 1 : det(J 1 ) = 0 yields three factors. The inspection of the first factor shows the singularity configurations that lie in the intersection with the system K 2 . However, this factor is neglected since the systems K 1 and K 2 do not have configurations in common, i.e. x 0 , x 1 , x 2 , x 3 can never vanish simultaneously. The second factor is y 2 = 0, when the moving platform is coplanar to the base, the 4-RUU PM is always in a singular configuration. Eventually, the third factor of S 1 : det(J 1 ) = 0 is analysed. 
This factor is added to the system K 1 and the remaining five Study parameters are eliminated. Due to the heavy elimination process, the actuated joint angles are assigned as t 11 = -2, t 12 = -1, and t 13 = 1/2. The elimination yields a univariate polynomial of degree 14 in t 14 as: Singularities and Self-motions in 2nd Schönflies mode (K 2 ) The determinant of the Jacobian matrix is computed in the system K 2 , which consists of five constraint equations over five variables. Therefore the 5 × 5 Jacobian matrix can be obtained. The determinant of this Jacobian matrix S 2 : det(J 2 ) = 0 consists of three factors too. The investigation of the first factor gives the condition in which the mechanism is in the intersection between the systems K 1 and K 2 . As explained in Section 5.1, this factor is removed. The variable x 1 can be solved and it yields two equations in terms of the actuated joint angles only. The first equation is similar to Eq. ( 28) and the second equation takes the form: 3t = 0 (33) The variable x 2 is an independent parameter. As a consequence, when all the corresponding actuated joints are actuated, there is an additional 1-dof rotational motion exhibited by the moving platform. This motion is a self-motion and it is parametrized by the variable x 2 . Two equations in Eq. ( 28) and Eq. (33) are solved to find the relations among the actuated joint angles in the self-motion, namely: Since two 2-RUU PM are assembled perpendicularly as in the 4-RUU PM, only one example between the self-motion of solutions 1 and 2 is shown. The example of self-motion of solution 2 is shown in Fig. 13 with θ 11 = θ 14 = 90 • and θ 12 = θ 13 = 0 • . Figure 14 shows the example of self-motion of solution 3 with θ 11 = 90 • , θ 12 = 0 • , θ 13 = -90 • , θ 14 = 180 • . Every configuration in the third modes of the 2-RUU (I) and 2-RUU (II) PM amounts to a selfmotion. However, when the 2-RUU (I) and 2-RUU (II) PM are assembled to obtain the 4-RUU PM,
47,581
[ "10659", "16879", "949366", "949367" ]
[ "21439", "481388", "473973", "473973", "490478", "490478" ]
01757286
en
[ "spi" ]
2024/03/05 22:32:10
2015
https://hal.science/hal-01757286/file/MMT2015_Wu_Caro_Wang_HAL.pdf
Guanglei Wu Stéphane Caro email: stephane.caro@irccyn.ec-nantes.fr Jiawei Wang Design and Transmission Analysis of an Asymmetrical Spherical Parallel Manipulator Keywords: Asymmetrical spherical parallel manipulator, transmission wrench screw, transmissibility, universal joint This paper presents an asymmetrical spherical parallel manipulator and its transmissibility analysis. This manipulator contains a center shaft to both generate a decoupled unlimited-torsion motion and support the mobile platform for high positioning accuracy. This work addresses the transmission analysis and optimal design of the proposed manipulator based on its kinematic analysis. The input and output transmission indices of the manipulator are defined for its optimum design based on the virtual coefficient between the transmission wrenches and twist screws. The sets of optimal parameters are identified and the distribution of the transmission index is visualized. Moreover, a comparative study regarding to the performances with the symmetrical spherical parallel manipulators is conducted and the comparison shows the advantages of the proposed manipulator with respect to its spherical parallel manipulator counterparts. Introduction Three degree-of-freedom (3-DOF) spherical parallel manipulators (SPMs) are most widely used as camera-orientating device [START_REF] Gosselin | The Agile Eye: a high-performance three-degree-of-freedom camera-orienting device[END_REF], minimally invasive surgical robots [START_REF] Li | Design of spherical parallel mechanisms for application to laparoscopic surgery[END_REF] and wrist joints [START_REF] Asada | Kinematic and static characterization of wrist joints and their optimal design[END_REF] because of their large orientation workspace and high payload capacity. Since they can generate three pure rotations, another potential application is that they can work as a tool head for complicated surface machining. However, the general SPM can only produce a limited torsion motion under a certain tilt orientation, whereas an unlimited torsion is necessary in some common material processing such as milling or drilling. The co-axial input SPM reported in [START_REF] Asada | Kinematic and static characterization of wrist joints and their optimal design[END_REF] can achieve unlimited torsion, however, its unique structure leads to a complex input mechanism. Moreover, the general SPMs result in low positioning accuracy [START_REF] Wu | Mobile platform center shift in spherical parallel manipulators with flexible limbs[END_REF] without a ball-and-socket joint as the center of rotation. In this paper, an asymmetrical SPM (AsySPM) is proposed, which can generate unlimited torsion motion with enhanced positioning accuracy. This manipulator adopts a universal joint as the center of rotation supported by an input shaft at the center, which simplifies the manipulator architecture. 
The design of 3-DOF SPMs can be based on many criteria, i.e., workspace [START_REF] Gosselin | The optimum kinematic design of a spherical three-degree-offreedom parallel manipulator[END_REF][START_REF] Bai | Optimum design of spherical parallel manipulator for a prescribed workspace[END_REF], dexterity [START_REF] Gosselin | A global performance index for the kinematic optimization of robotic manipulators[END_REF][START_REF] Bai | Modelling of a special class of spherical parallel manipulators with Euler parameters[END_REF][START_REF] Wu | Multiobjective optimum design of a 3-RRR spherical parallel manipulator with kinematic and dynamic dexterities[END_REF], singularity avoidance [START_REF] Bonev | Singularity loci of spherical parallel mechanisms[END_REF], stiffness [START_REF] Wu | Mobile platform center shift in spherical parallel manipulators with flexible limbs[END_REF][START_REF] Bidault | Structural optimization of a spherical parallel manipulator using a two-level approach[END_REF], dynamics [START_REF] Staicu | Recursive modelling in dynamics of Agile Wrist spherical parallel robot[END_REF][START_REF] Wu | Dynamic modeling and design optimization of a 3-DOF spherical parallel manipulator[END_REF], and so on. The prime function of mechanisms is to transmit motion/force between the input joint and the output joint. Henceforth, we will focus on the transmissibility analysis of the proposed SPM. In the design procedure, the performance index is of importance for performance evaluation of the manipulator. A number of transmission indices, such as the transmission angle, the pressure angle, and the transmission factor, have been proposed in the literature to evaluate the quality of motion/force transmission. The transmission angle was introduced by Alt [START_REF] Alt | Der Üertragungswinkel und seine bedeutung für das konstruieren periodischer getriebe[END_REF], developed by Hain [START_REF] Hain | Applied Kinematics[END_REF], and can be applied in linkage synthesis problems [START_REF] Dresner | Definition of pressure and transmission angles applicable to multi-input mechanisms[END_REF][START_REF] Bawab | Rectified synthesis of six-bar mechanisms with welldefined transmission angles for four-position motion generation[END_REF]. Takeda et al. [START_REF] Takeda | A transmission index for in-parallel wire-driven mechanisms[END_REF] proposed a transmission index (TI) for parallel mechanisms based on the minimum value of the cosine of the pressure angle between the leg and the moving platform, where all the inputs but one are fixed. Based on the virtual coefficient between the transmission wrench screw (TWS) and the output twist screw (OTS) introduced by Ball [19], Yuan et al. [START_REF] Yuan | Kinematic analysis of spatial mechanism by means of screw coordinates. part 2-analysis of spatial mechanisms[END_REF] used it as an unbounded transmission factor for spatial mechanisms. Sutherland and Roth [START_REF] Sutherland | A transmission index for spatial mechanisms[END_REF] defined the transmission index using a normalized form of the transmission factor, which depends only on the linkages' geometric properties. Chen and Angeles [START_REF] Chen | Generalized transmission index and transmission quality for spatial linkages[END_REF] proposed a generalized transmission index that is applicable to single-loop spatial linkages with fixed output and single or multiple DOFs. Wu et al. 
[START_REF] Wu | Optimal design of spherical 5R parallel manipulators considering the motion/force transmissibility[END_REF] introduced a frame-free index related to the motion/force transmission analysis for the optimum design of the spherical five-bar mechanism. Wang et al. [START_REF] Wang | Performance evaluation of parallel manipulators: Motion/force transmissibility and its index[END_REF] presented the transmission analysis of fully parallel manipulators based on the transmission indices defined by Sutherland, Roth [START_REF] Sutherland | A transmission index for spatial mechanisms[END_REF] and Takeda [START_REF] Takeda | A transmission index for in-parallel wire-driven mechanisms[END_REF]. Recently, some approaches to identify singularity and closeness to singularity have been reported via transmission analysis [START_REF] Liu | A new approach for singularity analysis and closeness measurement to singularities of parallel manipulators[END_REF][START_REF] Liu | A generalized approach for computing the transmission index of parallel mechanisms[END_REF]. Henceforth, the virtual coefficient based indices will be adopted in this paper for the evaluation of the transmission quality and the optimal design of the proposed manipulator. This paper presents an asymmetrical SPM and its optimum design with regard to its transmission quality. The inverse and forward kinematic problems of the AsySPM are analyzed based on the kinematic analysis of classical SPMs. By virtue of the virtual coefficient between the transmission wrench screw and output twist screw, the input and output transmission indices are defined for the optimum design, of which an optimization problem is formulated to achieve the optimal design of the proposed SPM. The performances of the proposed spherical manipulator are compared with those of its counterparts in order to highlight its advantages and drawbacks. and inner rings connected to each other with a revolute joint, the revolute joint being realized with a revolve bearing. The orientation of the outer ring is determined by two RRR1 legs and constrained in a vertical plane by a fully passive RRS leg or an alternative RPS one. Through a U-joint, the decoupled rotation of the inner ring is driven by the center shaft, which also supports the MP to improve the positioning accuracy. This manipulator can provide an unlimited rotation of the moving-platform, which can be used in milling or drilling operations and among other material processing. It can also be used as the active spherical joint, i.e., wrist or waist joint. The coordinate system (x, y, z) is denoted in Fig. 1(b), of which the origin is located at the center of rotation, namely, point O. The ith active leg consists of three revolute joints, whose axes are parallel to unit vectors u i , v i , w i . Both of these two legs have the same architecture, defined by α 1 and α 2 angles. The design parameters of the base platform are γ and η. The design parameter of the mobile platform is β. It is noteworthy that the manipulator is symmetrical with respect to the yz plane. 
Inverse Geometric Problem Under the prescribed coordinate system, the unit vector u i is derived as: u i = (-1) i+1 sin η sin γ cos η sin γ -cos γ T , i = 1, 2 (1) The unit vector v i of the axis of the intermediate revolute joint in the ith leg is obtained in terms of the input joint angle θ i following the angle-axis representation [START_REF] Angeles | Fundamentals of Robotic Mechanical Systems: Theory, Methods, and Algorithms[END_REF], namely, v i = R(u i , θ i )v * i ; R(u i , θ i ) = cos θ i I 3 + sin θ i [u i ] × + (1 -cos θ i )u i ⊗ u i ( 2 ) where I 3 is the identity matrix, [u i ] × is the cross product matrix of u i and ⊗ is the tensor product. Moreover, v * i is the unit vector of the axis of the intermediate revolute joint in the ith leg at the original configuration: v * i = (-1) i+1 sin η sin(γ + α 1 ) cos η sin(γ + α 1 ) -cos(γ + α 1 ) T (3) The unit vector w i of the top revolute joint in the ith leg, is a function of the MP orientation: w i = x i y i z i T = Qw * i ; w * i = (-1) i+1 sin β cos β 0 T (4) where w * i is the unit vector of the axis of the top revolute joint of the ith leg when the mobile platform is located in its home configuration. Moreover, Q = R(x, φ x )R(y, φ y ) is the rotation matrix of the outer ring. Hence, the orientation of the inner ring can be described with Cardan angles (φ x , φ y , φ z ) and its output axis is denoted by: p = Qz; z = [0, 0, 1] T (5) According to the motion of the U-joint [START_REF] Weisbach | Mechanics of Engineering and of Machinery[END_REF], the input angle θ 3 of the center shaft is derived as: θ 3 = tan -1 (tan φ z cos φ x cos φ y ) (6) Referring to the inverse kinematic problem of the general SPMs, the loop-closure equation for the ith RRR leg is expressed as: A i t 2 i + 2B i t i + C i = 0, i = 1, 2 (7) with The input angle displacements can be solved as: A i = (-1) i+1 x i sη + y i cη s(γ -α 1 ) -z i c(γ -α 1 ) -cα 2 (8a) B i = (x i cη -y i sη)sα 1 (8b) C i = (-1) i+1 x i sη + y i cη s(γ + α 1 ) -z i c(γ + α 1 ) -cα 2 (8c) cos θ i = 1 -t 2 i 1 + t 2 i , sin θ i = 2t i 1 + t 2 i ; t i = tan θ i 2 = -B i ± B 2 i -A i C i A i (9) The inverse geometric problem has four solutions corresponding to the four working modes characterized by the sign "-/+" of u i × v i • w i , i.e., "-+", "--", "+-" and "++" modes. Here, the "-+" working mode is selected. Forward Geometric Problem The forward geometric problem of the AsySPM can be obtained by searching for the angles ϕ i of a spherical four-bar linkages with the given input angles θ i , i = 1, 2, as displayed in Fig. 2, where the input/output (I/O) equation takes the form [START_REF] Yang | Application of dual-number quaternion algebra to the analysis of spatial mechanisms[END_REF][START_REF] Bai | A unified input-output analysis of four-bar linkages[END_REF]: f (ϕ 1 , ϕ 2 ) = k 1 + k 2 cos ϕ 1 + k 3 cos ϕ 1 cos ϕ 2 -k 4 cos ϕ 2 + k 5 sin ϕ 1 sin ϕ 2 = 0 (10) with k 1 ≡ Cα 0 C 2 α 2 -Cβ ; k 2 = k 4 ≡ Sα 0 Sα 2 Cα 2 ; k 3 ≡ Cα 0 S 2 α 2 ; k 5 ≡ S 2 α 2 (11) where S and C stand for the sine and cosine functions, respectively, and α 0 = cos -1 (v 1 • v 2 ), β = 2β. On the other hand, the motion of the unit vector e is constrained in the yz plane due to the passive leg, thus: g(ϕ 1 , ϕ 2 ) = x 1 + x 2 = 0 ( 12 ) where the unit vector w i can be also represented with angle-axis rotation matrix, namely, Solving Eqs. ( 10) and ( 12) leads to four solutions for the angles ϕ i , i = 1, 2, i.e., the two functions have four common points in the plane z = 0 as shown in Fig. 3. 
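The leg-wise inverse geometric solution of Eqs. (7)-(9) can be reproduced numerically as follows. The sketch below is not the authors' code: it rebuilds the quadratic in t_i = tan(θ_i/2) from the closure condition v_i(θ_i) · w_i = cos α2 (the distal link of each RRR leg spans the fixed angle α2), which is the condition from which Eqs. (7)-(8) follow, and uses the angle-axis rotation of Eq. (2). The design parameters and platform orientation in the usage stub are arbitrary placeholders.

# Inverse geometric problem of the two actuated RRR legs of the AsySPM (numerical sketch).
import numpy as np

def rot(axis, angle):
    """Angle-axis rotation matrix, Eq. (2)."""
    u = axis / np.linalg.norm(axis)
    K = np.array([[0, -u[2], u[1]], [u[2], 0, -u[0]], [-u[1], u[0], 0]])
    return np.cos(angle)*np.eye(3) + np.sin(angle)*K + (1 - np.cos(angle))*np.outer(u, u)

def leg_ik(i, Q, a1, a2, beta, gamma, eta):
    """Two candidate input angles theta_i (rad) of leg i = 1, 2 for platform orientation Q."""
    sg = (-1) ** (i + 1)
    u  = np.array([sg*np.sin(eta)*np.sin(gamma), np.cos(eta)*np.sin(gamma), -np.cos(gamma)])                 # Eq. (1)
    v0 = np.array([sg*np.sin(eta)*np.sin(gamma + a1), np.cos(eta)*np.sin(gamma + a1), -np.cos(gamma + a1)])  # Eq. (3)
    w  = Q @ np.array([sg*np.sin(beta), np.cos(beta), 0.0])                                                  # Eq. (4)
    # v_i(theta).w_i = P cos(theta) + S sin(theta) + (u.v0)(u.w); setting it equal to cos(a2) and
    # substituting cos, sin by (1 - t^2)/(1 + t^2) and 2t/(1 + t^2) gives A t^2 + 2 B t + C = 0 (Eq. 7).
    P = v0 @ w - (u @ v0) * (u @ w)
    S = np.cross(u, v0) @ w
    R = (u @ v0) * (u @ w) - np.cos(a2)
    A, B, C = R - P, S, R + P
    disc = B**2 - A*C
    if disc < 0:
        raise ValueError("orientation outside the reachable set of leg %d" % i)
    return [2*np.arctan((-B + sgn*np.sqrt(disc)) / A) for sgn in (+1.0, -1.0)]   # Eq. (9)

# Arbitrary placeholder design and orientation (radians), not values from the paper.
a1, a2, beta, gamma, eta = np.radians([60.0, 70.0, 15.0, 75.0, 45.0])
Q = rot(np.array([1.0, 0.0, 0.0]), np.radians(10)) @ rot(np.array([0.0, 1.0, 0.0]), np.radians(-5))
for i in (1, 2):
    print("leg", i, np.degrees(leg_ik(i, Q, a1, a2, beta, gamma, eta)))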
Figure 4 illustrates the four assembly modes corresponding to the four solutions. Then, substituting ϕ i into Eq. ( 13), the unit vector w i and the output Euler angles φ x and φ y can be obtained, and the output angle φ z can be obtained from Eq. ( 6) accordingly. w i = x i y i z i T = R(v i , ϕ i )R(v 0 , α 2 )v i ; v 0 = v 1 × v 2 / v 1 × v 2 ( 13 ) -2 0 2 -2 0 2 -1 0 1 2 z f( 1 ,  2 ) g( 1 ,  2 ) z=0 f (φ 1 ,φ 2 ) g (φ 1 ,φ 2 ) z=0 φ 1 [rad] φ 2 [rad] Transmission Index The main function of the mechanism is to transmit motion from the input element to the output element. As a result, the force applied to the output element is to be transmitted to the input one. The arising internal wrench due to transmission is defined as a transmission wrench, which is characterized by the magnitude of the force and transmission wrench screw (TWS), and the latter is used to evaluate the quality of the transmission. In order to evaluate the transmission performance of the manipulator, some transmission indices (TI) should be defined. Transmission Wrench and Twist Screw As shown in Fig. 5(a), the instantaneous motion of a rigid body can be represented by using a twist screw defined by its Plücker coordinates: $ T = (ω ; v) = ω $T = (L 1 , M 1 , N 1 ; P * 1 , Q * 1 , R * 1 ) ( 14 ) where ω is the amplitude of the twist screw and $T is the unit twist screw. Likewise, a wrench exerted on the rigid body can be expressed as a wrench screw defined by its Plücker coordinates as: $ W = (f ; m) = f $W = (L 2 , M 2 , N 2 ; P * 2 , Q * 2 , R * 2 ) ( 15 ) where f is the amplitude of the wrench screw and $W is the unit wrench screw. The reciprocal product between the two screws $ T and $ W is defined as: $ T • $ W = f • v + m • ω = L 1 P * 2 + M 1 Q * 2 + N 1 R * 2 + L 2 P * 1 + M 2 Q * 1 + N 2 R * 1 ( 16 ) This reciprocal product amounts to the instantaneous power between the wrench and the twist. Subsequently, the transmission index is defined as a dimensionless index [START_REF] Chen | Generalized transmission index and transmission quality for spatial linkages[END_REF]: TI = | $T • $W | | $T • $W | max (17) where | $T • $W | max represents the potential maximal magnitude of the reciprocal product between $T and $W . The larger TI, the more important the power transmission from the wrench to the twist, namely, the better the transmission quality. For a planar manipulator, this index corresponds to the transmission angle, which is the smallest angle between the direction of velocity of the driven link and the direction of absolute velocity vector of output link both taken at the common point [START_REF] Hartenberg | Kinematic Synthesis of Linkages[END_REF]. As illustrated in Fig. 5(b), it is the angle σ between the follower link and the coupler of a four-bar mechanism, also known as forward transmission angle. Conversely, the angle ψ is the inverse transmission angle. Therefore, the input (λ I ) and output (λ O ) transmission can be expressed as: λ I = | sin ψ| ; λ O = | sin σ| (18) Input Transmission Index The wrench applied to a SPM is usually a pure moment, thus, for a spherical RRR leg, the transmission wrench is a pure torque. As the TWS is reciprocal to all the passive joint screws in the leg, the axis of the wrench in the ith leg is perpendicular to the plane OB i C i and passes through point O, as shown in Fig. 6(a). According to Eq. 
( 17), the input transmission index of the ith RRR leg is obtained as: λ Ii = | $Ii • $W i | | $Ii • $W i | max = |u i • τ i | |u i • τ i | max , i = 1, 2 (19) with $Ii = (u i ; 0); $W i = (0; τ i ) = (0; v i × w i / v i × w i ) (20) When $W i lies in the plane OA i B i , i.e., plane OA i B i being perpendicular to plane OB i C i , |u i • τ i | reaches its maximum value. This situation occurs when the angle between the wrench screw and the twist screw is equal to ψ 0 or 180 o -ψ 0 , namely, |u i • τ i | max = | cos ψ 0 | = | sin α 1 | (21) From Fig. 6(a), Eq. ( 19) is equivalent to λ Ii = | cos ψ 1 | | cos ψ 0 | = | cos ψ 2 | = | sin ψ i | = 1 -cos 2 ψ i , i = 1, 2 (22) where ψ i is the inverse transmission angle, i.e., the angle between planes OA i B i and OB i C i , and cos ψ i = (v i × u i ) • (v i × w i ) v i × u i v i × w i (23) Finally, the input transmission index of the manipulator is defined as: λ I = min{λ Ii }, i = 1, 2 (24) Output Transmission Index Referring to the pressure angle at the attached point of the leg with the moving platform [START_REF] Takeda | A transmission index for in-parallel wire-driven mechanisms[END_REF], the output transmission index of a single leg can be defined by fixing the other input joints, where the parallel manipulator thus becoming a 1-DOF system. By fixing the active joint at point A 2 (point B 2 will be fixed) and keeping joint at point A 1 actuated in Fig. 6(b), the transmission wrench $ W 2 becomes a constraint wrench for the mobile platform. The instantaneous motion of the mobile platform will be a rotation about a unique vector constrained by $ W 2 and the vector e in the passive leg, namely, s 1 = τ 2 × e τ 2 × e ; e = Qj, j = [0, 1, 0] T (25) Henceforth, the output twist screw can be expressed as: $O1 = (s 1 ; 0). Based on Eq. ( 17), the output transmission index of the first leg is defined by λ O1 = | $O1 • $W 1 | | $O1 • $W 1 | max = |s 1 • τ 1 | |s 1 • τ 1 | max (26) when s 1 and τ 1 are parallel, i.e., both planes OC 1 C 2 and OB 1 C 1 being perpendicular to OB 2 C 2 , 26) is rewritten as: |s 1 • τ 1 | is a maximum, namely, |s 1 • τ 1 | max = cos(0) = 1. Equation ( λ O1 = |s 1 • τ 1 | = |τ 12 • e|/ τ 2 × e ; τ 12 = τ 1 × τ 2 (27) By the same token, the output transmission index of the second leg is derived as: λ O2 = |τ 12 • e|/ τ 1 × e (28) Similarly to Eq. ( 24), the output transmission index of the manipulator is defined as λ O = min{λ Oi }, i = 1, 2 (29) Transmission Efficiency of the U joint The output of the inner ring of the mobile platform is driven by the center shaft through a universal joint, consequently, the TI of the U joint (UTI) is defined as: λ U = |p • z| = | cos φ x cos φ y | (30) where the vectors p and z are defined in Eq. ( 5). Local Transmission Index On the basis of the ITI, OTI and UTI, the local transmission index (LTI) of the manipulator under study, namely, the transmission index at a prescribed orientation, is defined as: λ = min{λ I , λ O , λ U } (31) The higher λ, the higher the quality of the input and output transmission. The distribution of the LTI can indicate the workspace (WS) region with a good transmissibility. Thus, this index can be used for either the evaluation of the transmission quality or the design optimization. Optimal Design and Analysis The optimum design of SPMs can be based on many aspects, such as workspace, dexterity, singularity, and so on. However, these criteria are usually antagonistic. 
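Because the optimization formulated next maximizes λ over a prescribed workspace, it is convenient to have the index definitions of Eqs. (19)-(31) collected in one routine. The sketch below is not the authors' implementation: it evaluates λI, λO, λU and the resulting LTI from the unit joint-axis vectors of a given configuration. The vectors in the usage stub are arbitrary placeholders, so the bound λ ≤ 1 is only guaranteed when the inputs come from a kinematically consistent pose, for instance from the inverse-geometric computation sketched earlier.

# Local transmission index of the AsySPM from the joint-axis unit vectors (sketch).
import numpy as np

def unit(x):
    return x / np.linalg.norm(x)

def transmission_indices(u, v, w, e, phi_x, phi_y, alpha1):
    """u, v, w: length-2 lists with the unit vectors u_i, v_i, w_i of the two actuated legs;
    e: unit vector of the passive-leg axis.  Returns (lambda_I, lambda_O, lambda_U, LTI)."""
    tau = [unit(np.cross(v[i], w[i])) for i in range(2)]                      # wrench axes, Eq. (20)
    lam_I = min(abs(u[i] @ tau[i]) / abs(np.sin(alpha1)) for i in range(2))   # Eqs. (19)-(24)
    tau12 = np.cross(tau[0], tau[1])
    lam_O = min(abs(tau12 @ e) / np.linalg.norm(np.cross(tau[1], e)),         # Eq. (27)
                abs(tau12 @ e) / np.linalg.norm(np.cross(tau[0], e)))         # Eqs. (28)-(29)
    lam_U = abs(np.cos(phi_x) * np.cos(phi_y))                                # Eq. (30)
    return lam_I, lam_O, lam_U, min(lam_I, lam_O, lam_U)                      # Eq. (31)

# Usage stub with placeholder unit vectors (replace with the output of the inverse kinematics).
u = [unit(np.array([0.68, 0.68, -0.26])), unit(np.array([-0.68, 0.68, -0.26]))]
v = [unit(np.array([0.50, 0.50, 0.71])),  unit(np.array([-0.50, 0.50, 0.71]))]
w = [unit(np.array([0.26, 0.95, 0.19])),  unit(np.array([-0.26, 0.96, 0.15]))]
e = unit(np.array([0.0, 1.0, 0.1]))
print(transmission_indices(u, v, w, e, np.radians(10), np.radians(-5), np.radians(60)))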
In order for the proposed AsySPM to achieve a regular workspace (RWS) with a good transmission quality, the following optimization problem is formulated: maximize f (x) = λ for θ ∈ [0, θ 0 ] ( 32 ) over x = [α 1 ; α 2 ; β; γ; η] subject to g 1 : 45 o ≤ {α 1 , α 2 } ≤ 120 o g 2 : 15 o ≤ {β, η} ≤ 60 o g 3 : 30 o ≤ γ ≤ 75 o where θ is the tilt angle and θ 0 defines the workspace region, as shown in Fig. 7. Moreover, the lower and upper bounds of the design variables are assigned in order to avoid mechanical collisions. This problem can be solved with the optimization toolbox of the mathematical software at hand. Hereby, it is solved with the genetic algorithm (GA) toolbox in Matlab. When θ 0 = 60 o , it is found that quality transmission [START_REF] Tao | Applied Linkage Synthesis[END_REF], whence the TI is λ = sin 45 o ≈ 0.7, i.e., the manipulator at a configuration with LTI λ ≥ 0.7 has good motion/force transmission. Henceforth, a set of poses in which LTI is greater than 0.7 is identified as high-transmissibility workspace (HTW), such as the blue dashed line enveloped region displayed in Fig. 8(a). The area of HTW can be used to evaluate the manipulator performance. The larger the HTW, the better the transmission quality of the manipulator. When the objective function in the optimization problem [START_REF] Tao | Applied Linkage Synthesis[END_REF] is replaced by f (x) = A HTW , where A HTW is the area of HTW, the optimal parameters for θ 0 = 60 o are found as: x = [53. geometrical parameters given in Eq. ( 34) is much larger than the HTW of the manipulator with the geometrical parameters given in Eq. (33). To evaluate the transmissibility of the manipulator within a designated workspace, a transmission index (WTI) similar to GCI [START_REF] Gosselin | A global performance index for the kinematic optimization of robotic manipulators[END_REF] is defined over the workspace W , which is calculated through a discrete approach in practice, namely, WTI = λdW dW or WTI = 1 W n i=1 λ i ∆W = 1 n n i=1 λ i ( 35 ) where n is the discrete number. The index obtained through the above equation is an arithmetic mean, which can be replaced with a quadratic mean for a better indication of the transmission, subsequently, WTI is redefined as WTI = 1 n n i=1 λ 2 i ( 36 ) As a consequence, with θ ∈ [0, 60 o ], WTI is equal to 0.66 for the first design and is equal to 0.72 for the second one. Comparison with Symmetrical SPMs In this section, a comparative study is conducted between the asymmetrical and symmetrical SPMs. A general SPM is shown in Fig. 9(a), which consists of three identical RRR legs connected to the base and the mobile platform. Moreover, β and γ define the geometry of two triangular pyramids on the mobile and the base platforms, respectively. A base coordinate system (x, y, z) is located at point O and the z axis is normal to the bottom surface of the base pyramid and points upwards, while the y axis is located in the plane spanned by the z-axis and vector u 1 . Figure 9(b) illustrates the Agile Wrist [START_REF] Bidault | Structural optimization of a spherical parallel manipulator using a two-level approach[END_REF] while Fig. 9(c) shows a co-axial input SPM (CoSPM) of a special case with γ = 0. Their geometrical parameters are given by Table 1. LTI distributions Referring to Eqs. ( 19) and ( 26), the ITI and OTI of each leg for the symmetrical SPMs can be obtained. The difference lies in the output twist screw of Eq. 
(25) in the calculation of OTI, namely,

s_i = [(v_j × w_j) × (v_k × w_k)] / ||(v_j × w_j) × (v_k × w_k)||,   i, j, k ∈ {1, 2, 3}, i ≠ j ≠ k   (37)

Using the index defined in Eq. (31), the HTW is extremely small. When α1 reduces to 47°, which yields better kinematic and dynamic dexterities [START_REF] Wu | Dynamic modeling and design optimization of a 3-DOF spherical parallel manipulator[END_REF], the HTW of full torsion is a spherical cap with θ ∈ [0°, 30°].

Comparison of Overall Performances

The performance comparison of the asymmetrical SPM with the Agile Wrist and the Co-axial input spherical parallel manipulator is summarized in Table 3, which shows the advantages and drawbacks of the proposed AsySPM with respect to its symmetrical counterparts. The AsySPM can have the advantages of the general and co-axial input SPMs simultaneously, except the drawback of evenly distributed power consumption.

Conclusion

This paper introduced an asymmetrical spherical parallel manipulator, whose mobile platform is composed of an inner ring and an outer ring. The orientation of the outer ring is determined by two RRR legs as well as a fully passive leg, and the inner ring can generate a decoupled unlimited-torsion motion thanks to a center input shaft and a universal joint. Moreover, the center shaft can improve the positioning accuracy of the center of rotation for the manipulator. This manipulator can be used as a tool head for complicated surface machining, such as milling or drilling, and can also work as
8(b), from which it is seen that the manipulator can still reach a large RWS with θ = 75 o , whereas, the minimum LTI within θ ∈ [0, 60 o ] reduces to 0.3 compared to Fig. 8(a). In contrast, the HTW of the manipulator with the Figure 8 : 8 Figure 8: The isocontours of the transmission index of the asymmetrical SPM for workspace θ 0 = 60 o : (a) max(λ); (b) max(HTW). Figure 9 : 9 Figure 9: The symmetrical SPMs: (a) general SPM; (b) Agile Wrist; (c) co-axial input SPM. Figure 10 : 10 Figure 10: The LTI isocontours of the Agile Wrist: (a) φ z = 0; (b) φ z = 30 o . Figure 11 : 11 Figure 11: The LTI isocontours of the CoSPM with torsion φ z = 0: (a) α 1 = 60 o ; (b) α 1 = 47 o . Table 1 : 1 Geometrical parameters of the Agile Wrist and CoSPM. Agile Wrist CoSPM α 1 , α deg] 90 sin -1 ( √ 6/3) 60(47) 90 90 2 [deg] β, γ [rad] α 1 [deg] α 2 [deg] β [ Table 2 : 2 WTI for the three SPMs. AsySPM Agile Wirst CoSPM parameters (33) parameters (34) α 1 = 60 o α 1 = 47 o θ ∈ [0, 45 o ] 0.68 0.77 0.75 0.58 0.79 θ ∈ [0, 60 o ] 0.66 0.72 0.69 0.54 Table 3 : 3 Performance comparison of the asymmetrical SPM with the Agile Wrist and the Co-axial input SPM. AsySPM Agile Wrist CoSPM Throughout this paper, R, U, S and P stand for revolute, universal, spherical and prismatic joints, respectively, and an underlined letter indicates an actuated joint.
28,359
[ "10659" ]
[ "224365", "481388", "473973", "302599" ]
01757303
en
[ "shs", "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757303/file/pan_19793.pdf
Lei Pan email: panlei@ivpp.ac.cn John Francis Thackeray Jean Dumoncel | Cl Ement Zanolli Anna Oettl Frikkie De Beer Jakobus Hoffman Benjamin Duployer Christophe Tenailleau | Jos E Braga Intra-individual metameric variation expressed at the enameldentine junction of lower post-canine dentition of South African fossil hominins and modern humans Objectives: The aim of this study is to compare the degree and patterning of inter-and intraindividual metameric variation in South African australopiths, early Homo and modern humans. Metameric variation likely reflects developmental and taxonomical issues, and could also be used to infer ecological and functional adaptations. However, its patterning along the early hominin postcanine dentition, particularly among South African fossil hominins, remains unexplored. Materials and Methods: Using microfocus X-ray computed tomography (mXCT) and geometric morphometric tools, we studied the enamel-dentine junction (EDJ) morphology and we investigated the intra-and inter-individual EDJ metameric variation among eight australopiths and two early Homo specimens from South Africa, as well as 32 modern humans. Results: Along post-canine dentition, shape changes between metameres represented by relative positions and height of dentine horns, outlines of the EDJ occlusal table are reported in modern and fossil taxa. Comparisons of EDJ mean shapes and multivariate analyses reveal substantial variation in the direction and magnitude of metameric shape changes among taxa, but some common trends can be found. In modern humans, both the direction and magnitude of metameric shape change show increased variability in M 2 -M 3 compared to M 1 -M 2 . Fossil specimens are clustered together showing similar magnitudes of shape change. Along M 2 -M 3 , the lengths of their metameric vectors are not as variable as those of modern humans, but they display considerable variability in the direction of shape change. Conclusion: The distalward increase of metameric variation along the modern human molar row is consistent with the odontogenetic models of molar row structure (inhibitory cascade model). Though much remains to be tested, the variable trends and magnitudes in metamerism in fossil hominins reported here, together with differences in the scale of shape change between modern humans and fossil hominins may provide valuable information regarding functional morphology and developmental processes in fossil species. In mammalian teeth, metameric variations observed among species may reflect differences in developmental processes [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF][START_REF] Evans | A simple rule governs the evolution and development of hominin tooth size[END_REF][START_REF] Morita | Exploring metameric variation in human molars: A morphological study using morphometric mapping[END_REF][START_REF] Weiss | Duplication with variation: Metameric logic in evolution from genes to morphology[END_REF]. For instance, size-related shape variations in human upper molars are consistent with odontogenetic models of molar row structure and molar crown morphology (inhibitory cascade model) (Kavanagh, [START_REF] Kavanagh | Predicting evolutionary patterns of mammalian teeth from development[END_REF]. 
Metameric variation could also be used to infer ecological conditions and/or functional adaptations [START_REF] Kavanagh | Predicting evolutionary patterns of mammalian teeth from development[END_REF][START_REF] Polly | Development with a bite[END_REF], as well as to distinguish symplesiomorphic traits from autamorphic traits [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF]. Dental metameric studies also yield some insights into primate taxonomy (e.g., Bailey, Benazzi, andHublin, 2014, 2016;[START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF][START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF][START_REF] Morita | Exploring metameric variation in human molars: A morphological study using morphometric mapping[END_REF][START_REF] Singleton | Allometric and metameric shape variation in Pan mandibular molars: A digital morphometric analysis[END_REF][START_REF] Weiss | Duplication with variation: Metameric logic in evolution from genes to morphology[END_REF]. However, few analyzes of metameric variation have yet been conducted at the intraindividual level (i.e., strictly based on teeth from the same individuals) [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF]: most studies were based on teeth representing different sets of individuals [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF][START_REF] Olejniczak | Morphology of the enamel-dentine junction in sections of anthropoid primate maxillary molars[END_REF]Skinner, Gunz, & Wood, 2008a;[START_REF] Smith | Modern human molar enamel thickness and enamel-dentine junction shape[END_REF]. Besides the potential usefulness of metameric variation for developmental, functional/ecological and taxonomic inferences, it may also help to identify position of isolated teeth among fossil hominin assemblages. Previous studies of metameric variation mainly focused on the outer enamel surface (OES) [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF][START_REF] Singleton | Allometric and metameric shape variation in Pan mandibular molars: A digital morphometric analysis[END_REF]. Since the assessment of OES features may be obscured by occlusal wear, more recent analyses have investigated the enamel-dentine junction (EDJ). However, only a few studies have yet dealt with the relevance of the EDJ morphology for assessing metameric variation in hominin teeth [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF]Skinner et al., 2008a). As noted by [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF], modern humans exhibit stronger metameric variation between first and second molars, as compared with Au. africanus. Another study based on EDJ shape showed that Au. africanus and P. robustus display similar 3D-EDJ intra-taxon metameric trend along the molar dentition, but P. 
robustus preserves a marked reduction in the buccolingual breadth of the distal crown between M 2 and M 3 , and a marked interradicular extension of the enamel cap in M 1 and M 2 (Skinner et al., 2008a). Here we assess the metameric variation within and between individuals and groups at the EDJ along the post-canine dentition in three Plio-Pleistocene hominin taxa (Australopithecus, Paranthropus and early Homo) and in modern humans. We mainly aim to test whether the intra-and inter-individual metameric patterns and scales differ between australopiths and genus Homo. In other words, is the metameric variation observed in modern humans seen in early Homo and/or australopiths? We also test if 3D-EDJ metameric variation is a useful indicator of dental position in lower postcanine teeth. | MATERIALS A ND METHODS | Study sample We selected only the postcanine dentition from mandibular specimens -isolated teeth were excluded-to study the intra-individual metameric variation. Whenever the antimeres were preserved, only the teeth on the better preserved side were used in our analyses. The fossil hominin materials came from collections housed at the Ditsong Museum of Natural History (Pretoria, South Africa). Our fossil sample includes permanent lower post-canine teeth representing P. robustus (N 5 17, six individuals), Au. africanus (N 5 4, one individual) and early Pleistocene Homo specimens, attributed to Homo erectus s.l. [START_REF] Wood | Reconstructing human evolution: Achievements, challenges, and opportunities[END_REF]; but see [START_REF] Clarke | Australopithecus and early Homo in Southern Africa[END_REF][START_REF] Curnoe | Odontometric systematic assessment of the Swartkrans SK 15 mandible[END_REF]. The modern human reference material includes 88 teeth, representing 32 individuals of European, Asian and African origin (Table 1). Because of unconformable wear stages in many modern specimens that affected the EDJ morphology in molar dentition, we used different individuals for comparisons M 1 -M 2 and M 2 -M 3 (detailed in Table 1). The degree of wear for each fossil specimen is listed in Table 1, according to tooth wear categories proposed by [START_REF] Molnar | Human tooth wear, tooth function and cultural variability[END_REF]. In the majority of cases, our specimens show no dentine horn wear on EDJs, but when necessary the dentine horn tip was reconstructed based on the morphology of the intact cusps. The specimens were scanned using four comparable X-ray microtomographic instruments: an X-Tek (Metris) XT H225L industrial mXCT Micro-XCT image stacks were imported for semi-automated (modules "magic wand" and "threshold") and automated (module "watershed") segmentations. After the segmentation, the EDJ surface was generated using the "unconstrained smoothing" parameter. | Analyses In some cases, especially in fossil specimens, only a tooth from one side is preserved (details in Table 1). Any image data from the left side were flipped to obtain a homogeneous right-sided sample. For each tooth, we defined a set of main landmarks as well as a set of semilandmarks along the marginal ridge between the main dentine horns (DHs), as an approximation of the marginal ridge (Figure 1). Along each semi-landmark section, a smooth curve was interpolated using a Bspline function using the "Create Splines" module. 
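The conversion of each interpolated ridge curve into a fixed number of equally spaced semi-landmarks (described in the next paragraph) can be illustrated with a minimal sketch. It is written in Python/NumPy rather than in the Avizo/R pipeline used in the study, and the polyline coordinates, point count and function name are illustrative only, not the study's data.

import numpy as np

def resample_equally_spaced(polyline, n_points):
    # resample a 3D polyline to n_points spaced equally along its arc length
    polyline = np.asarray(polyline, dtype=float)
    seg = np.linalg.norm(np.diff(polyline, axis=0), axis=1)   # segment lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])             # cumulative arc length
    stations = np.linspace(0.0, arc[-1], n_points)            # equally spaced stations
    return np.column_stack([np.interp(stations, arc, polyline[:, k]) for k in range(3)])

# toy marginal-ridge polyline between two dentine-horn tips (made-up coordinates)
ridge = np.array([[0.0, 0.0, 0.0], [1.0, 0.6, 0.3], [2.1, 0.5, 0.2], [3.0, 0.0, 0.0]])
semilandmarks = resample_equally_spaced(ridge, 15)            # e.g. 15 points on a mesial ridge
print(semilandmarks.shape)                                    # (15, 3)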
Interpolated curves were then imported into R (R Development Core Team, 2012), and were resampled to collect semi-landmarks that were equally spaced along each section/curve, delimited by traditional landmarks. For premolars, the main landmarks were placed on the tips of the DHs (i.e., 1. protoconid, 2. metaconid), with 55 semi-landmarks in between (15 on the mesial marginal ridge, 30 on the distal marginal ridge and 10 on the essential crest that connects the two DHs). The molar landmark set included four main landmarks on the tips of the DHs (i.e., 1. protoconid, 2. metaconid, 3. entoconid, 4. hypoconid), with 60 semi-landmarks, forming a continuous line, beginning at the tip of the protoconid and moving in a counter-clockwise direction. In order to investigate intra-taxon metameric variation, the samples were grouped into three pairs, according to tooth position (P 3 -P 4 , M 1 -M 2 , M 2 -M 3 ), and comparisons were performed within each pair. The landmark sets were imported in R software (R Development Core Team, 2012), and statistical analyses were conducted subsequently. The study of intra-individual metameric variation in shape was completed using R packages ade4 and Morpho; each sample of landmark configurations was registered using generalized Procrustes analysis (GPA; [START_REF] Gower | Generalized procrustes analysis[END_REF], treating semi-landmarks as equally spaced points. The resulting matrix of shape coordinates was analyzed in three ways. First, to visualize the average metameric shape differences for each taxon, mean configurations of each of the tooth classes (except for early Homo and Au. africanus represented by an isolated individual) were created, and superimposed using smooth curves with regard to tooth positons. A between-group PCA (bgPCA) was performed based on the Procrustes shape coordinates to explore the distribution of each group in shape space [START_REF] Braga | A new partial temporal bone of a juvenile hominin from the site of Kromdraai B (South Africa)[END_REF][START_REF] Gunz | The mammalian bony labyrinth reconsidered, introducing a comprehensive geometric morphometric approach[END_REF][START_REF] Mitteroecker | Linear discrimination, ordination, and the visualization of selection gradients in modern morphometrics[END_REF][START_REF] Pan | Further morphological evidence on South African earliest Homo lower postcanine dentition: Enamel thickness and enamel dentine junction[END_REF][START_REF] Ritzman | Mandibular ramus shape of Australopithecus sediba suggests a single variable species[END_REF]. The bgPCA computes a covariance matrix of the predefined group means and then projects all specimens into the space spanned by the eigenvectors of this covariance matrix. Between-group PCAs were conducted separately for each of the metameric pairs (P 3 -P 4 , M 1 -M 2 , M 2 -M 3 ). Because early Homo and Au. africanus have only one specimen, respectively, representing their groups, they were projected subsequently onto the shape space, ordinated by the modern human and P. robustus tooth means, without being assigned to groups a priori. The first two axes were plotted in order to visualize the trends and vectors of EDJ shape change between metameres of the same individual. As the metameric vectors generated by bgPCA are actually placed in a multidimensional shape space, the bgPC1-bgPC3 axes were also plotted, shown in Supporting Information Figure S1. 
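A small numerical sketch of the between-group PCA step, with Python/NumPy standing in for the R packages ade4 and Morpho used here (array sizes, group labels and random data are hypothetical): the configurations are assumed to be already superimposed by GPA and flattened to row vectors, the axes are obtained from the covariance structure of the group means, and every specimen is then projected onto those axes.

import numpy as np

def between_group_pca(X, groups, n_axes=2):
    # X: (n_specimens, n_coordinates) Procrustes shape coordinates; groups: one label per specimen
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    means = np.vstack([Xc[groups == g].mean(axis=0) for g in np.unique(groups)])
    _, _, Vt = np.linalg.svd(means - means.mean(axis=0), full_matrices=False)
    axes = Vt[:n_axes]                     # bgPC axes spanned by the group means
    return Xc @ axes.T                     # scores of every specimen on the bgPC axes

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 3 * 62))          # 24 specimens, 62 (semi-)landmarks in 3D
groups = np.repeat(np.array(["Hsap_M1", "Hsap_M2", "Prob_M1", "Prob_M2"]), 6)
scores = between_group_pca(X, groups, n_axes=2)
print(scores.shape)                        # (24, 2) -> coordinates used for the bivariate plots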
Our interpretation of the spatial positions and metameric vectors between specimens are mainly based on the first two bgPCs, but we refer to the third bgPC as well. As a complementary method of comparing the magnitude of shape variation between metameres, hierarchical clustering (HC) as well as subsequent dendrograms were We visualized the magnitude of shape variation using dendrograms and a 0 to 25 scale. Ward's minimum variance method was applied in HC, as it aims at finding compact clusters in which individuals are grouped. We chose to use this method because it minimizes the increase of intra-group inertia, and maximizes the inter-group inertia, at each step of the algorithm. | R E S U LTS The within-taxon metameric variation of P 3 -P 4 , M 1 -M 2 and M 2 -M 3 mean shapes are illustrated in Figures 2 and3; bgPCA plots and dendrograms are illustrated in Figure 4, and the bgPC1-bgPC3 axes are plotted in Supporting Information Figure S1, with the max-min values along the first two bgPC axes shown as landmark series in Supporting Information Figure S2. Specimens are marked according to species and dental position. Our analyses observed appreciable variation in the metameric relationships within and between taxa, but some common trends in shape change can be found. From P 3 to P 4 , a distalward displacement of the mesial marginal ridge is shared among taxa (Figure 2); a decrease in the height of metaconid dentine horn is presented in modern humans, P. africanus specimen Sts 52 (Figure 4A, Supporting Information Figure S1A). In the axes represented here, the shapes of their P 3 s resemble those of modern humans (note that Sts 52 P 3 is placed just between the range of modern humans and P. robustus), but their P 4 s are similar to those of P. robustus (Figure 4A, Supporting Information Figure S1A). But as our analyses mixed interspecific and metameric variation, inferences of EDJ shape differences/similarities between isolated cases should be carefully addressed. Almost all modern human M 2 s and many M 3 s lack a hypoconulid, and hence hypoconulid dentine horn tip was not included in the homologous landmarks (Figure 1D-F). Therefore, differences in the | DISCUSSION Tooth morphology is controlled by the combined effects of biochemical signaling degraded from mesial to distal direction at the tooth row level and at the individual crown level [START_REF] Jernvall | Linking development with generation of novelty in mammalian teeth[END_REF][START_REF] Weiss | Duplication with variation: Metameric logic in evolution from genes to morphology[END_REF]. As suggested by the inhibitory cascade model [START_REF] Evans | A simple rule governs the evolution and development of hominin tooth size[END_REF][START_REF] Kavanagh | Predicting evolutionary patterns of mammalian teeth from development[END_REF], the development of each deciduous and permanent molar is controlled by the balance between inhibitor molecules from mesially located tooth germs and activator molecules from the mesenchyme. The ratio of genetic activation and inhibition during development determines the relative size of the dental elements in the dental row. However, permanent premolars are derived independently from deciduous molars, so the tooth germs of P 3 and P 4 are not directly connected, therefore caution should be addressed when linking inhibitory cascade model to the metameric variation in permanent premolars. 
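A minimal sketch of the clustering step, with SciPy standing in for the R routines used in the study. One possible encoding (an assumption made here for illustration) is to summarise each individual by its metameric shape-change vector, take Euclidean distances between individuals, and apply Ward's minimum-variance linkage to obtain the dendrogram; all names, sizes and data below are illustrative.

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(1)
# hypothetical metameric shape-change vectors, one row per individual (e.g. M2 minus M1 coordinates)
metameric_vectors = rng.normal(size=(12, 3 * 64))

Z = linkage(metameric_vectors, method="ward", metric="euclidean")   # Ward's minimum variance
dendrogram(Z, labels=[f"ind{i}" for i in range(12)])
plt.ylabel("linkage distance")
plt.show()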
Metameric differences are often subtle, and the risk of conflating metameric and taxonomic variation is a general concern [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF][START_REF] Singleton | Allometric and metameric shape variation in Pan mandibular molars: A digital morphometric analysis[END_REF]. However, dental metameric variation in Plio-Pleistocene hominins remains relatively unexplored owing to difficulty in quantifying the complex and subtle shape variation in premolar and But due to the small sample size, further investigation will be needed. In all three pairs of metameres, hierarchical clustering placed fossil specimens together, showing similar degree of shape change, but the magnitude of metameric variation is quite diversified in modern humans. Moreover, as revealed by our bivariate plots, modern human M 3 s show a large scale of shape variability. This is consistent with previous observations using conventional tools [START_REF] Garn | Third molar agenesis and size reduction of the remaining teeth[END_REF][START_REF] Townsend | Molar intercuspal dimensions: Genetic input to phenotypic variation[END_REF] or geometric morphometrics [START_REF] Morita | Exploring metameric variation in human molars: A morphological study using morphometric mapping[END_REF][START_REF] Pan | Further morphological evidence on South African earliest Homo lower postcanine dentition: Enamel thickness and enamel dentine junction[END_REF]. It has been suggested that in modern humans, the inter-individual differences are larger for the M 2 s than for the M 1 s [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF]. This observation is in line with the distalward molar size reduction seen in Pleistocene humans (Berm udez [START_REF] Berm Udez De Castro | Posterior dental size reduction in hominids: The Atapuerca evidence[END_REF][START_REF] Brace | Post-Pleistocene changes in the human dentition[END_REF][START_REF] Brace | Gradual change in human tooth size in the late Pleistocene and post-Pleistocene[END_REF][START_REF] Evans | A simple rule governs the evolution and development of hominin tooth size[END_REF]. Since the shape stability was considered to increase from third molar to first molar [START_REF] Dahlberg | The changing dentition of man[END_REF], it is possible that small molar size results in a more unstable shape rather than larger molar size. Just as Skinner et al. (2008a) observed a small-scale, distal expansion of the marginal ridge in M 2 -FIG URE 4 A-C: Results of bgPCA of the semi-landmark configurations. Axes represent functions of the shape variation, specimens from the same individual are marked using the same symbol (for easier visual inspection, all of the modern individuals are marked using the same symbol), and each color representing the tooth position for each taxon. Metameres are connected using lines, showing metameric vectors. D-F: Dendrograms of intra-individual metameric variation in EDJ shape yielded by Hierachical Clustering (HC), showing the magnitude of metameric variation within and between groups. HC was done between metameric pairs P 3 -P 4 (D), M 1 -M 2 (E), and M 2 -M 3 (F). Scales from 0 to 25 indicate similitude of metameric shape variation between individuals. For easier visual inspection, only fossil specimens are marked M 3 s of P. robustus, we confirm such trend in all of our P. 
robustus specimens and in a number of modern humans. This trend is weakly expressed in our Au. africanus and is not presented in the early Homo specimen. A previous study observed that, in australopiths there is an increase in the height of mesial dentine horns from M 2 to M 3 (Skinner et al., 2008a), in contrast, we found a reduction in relative dentine horn height from M 1 to M 2 , and from M 2 to M 3 among all the taxa examined here, similar to metameric patterns expressed in Pan [START_REF] Skinner | Discrimination of extant Pan species and subspecies using the enamel-dentine junction morphology of lower molars[END_REF]. We suggest that this is probably because of our small sample size, and the fact that our study is focused on intra-individual variation therefore only metameres from strictly the same individual were investigated. The early Homo dentition SK 15 displays similar degree of metameric variation to other fossil samples, closer to the three P. robustus specimens than to the Au. africanus specimen. In all, the three fossil groups display variable directions in the shape change. In modern humans, the direction and magnitude of metameric vectors show increased variability in metameres M 2 -M 3 , than in M 1 -M 2 . However, in P. robustus, the length and direction of metameric vectors seem more variable in M 1 -M 2 pairs, than in M 2 -M 3 pairs, which show consistent metameric shape change. Further studies including more fossil specimens will be necessary to ascertain whether the metameric patterns observed here are characteristic of these groups. | C O NC LU S I O N S While 2D studies based on the OES suggest the existence of a distinctive metameric pattern in modern humans compared with that found in chimpanzees and Au. africanus [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF], in each comparative pair of EDJ (P 3 -P 4 , M 1 -M 2 and M 2 -M 3 ) that we investigated, we do not observe a specific metameric pattern that belongs only to extant humans, but rather a few common trends shared by groups despite a degree of inter-and intra-group variaiton. As a whole, the EDJ proves to be a reliable proxy to identify the taxonomic identity [START_REF] Pan | Further morphological evidence on South African earliest Homo lower postcanine dentition: Enamel thickness and enamel dentine junction[END_REF]Skinner et al., 2008aSkinner et al., , 2008bSkinner et al., , 2009b[START_REF] Skinner | A dental perspective on the taxonomic affinity of the Balanica mandible (BH-1)[END_REF][START_REF] Zanolli | Brief communication: Molar crown inner structural organization in Javanese Homo erectus[END_REF][START_REF] Zanolli | Brief communication: Two human fossil deciduous molars from the Sangiran Dome (Java, Indonesia): Outer and inner morphology[END_REF], but further research is needed to determine whether the metameric trends in 3D-EDJ observed here could act as one piece of evidence to identify tooth position from isolated specimens. Moreover, the underlying mechanisms remain to be answered. Along the molar dentition, based on the axes examined in this study, our results with regard to modern humans are generally in accordance with morphogenetic models of molar rows and molar crowns (inhibitory cascade model). In P. 
robustus specimens examined here, trends of meanshape changes from M 1 to M 2 and from M 2 to M 3 differed from each other, instead of a simple gradation, such differential expression of metamerism has been previously reported in modern human upper molars [START_REF] Morita | Exploring metameric variation in human molars: A morphological study using morphometric mapping[END_REF]. It should be noted that our study focuses only on the EDJ marginal ridges, but additional studies of the accessory ridges (e.g. protostylid), and a more global analysis of shape variation among early hominins based on the whole EDJ sur-face and diffeomorphisms [START_REF] Braga | In press. The Kromdraai hominins revisited with an updated portray of differences between Australopithecus africanus and Paranthropus robustus[END_REF] will supplement our understanding on the metameric variation in hominin dentition. Australopithecus africanus, early Homo, Homo sapiens, metamerism, Paranthropus robustus, tooth internal structure1 | I N T R O D U C T I O N system at the South African Nuclear Energy Corporation (Necsa) (Hoffman & de Beer, 2012), a Scanco Medical X-Treme micro-XCT scanner at the Institute for Space Medicine and Physiology (MEDES) of Toulouse, a Phoenix Nanotom 180 scanner from the FERMAT Federation from the Inter-university Material Research and Engineering Centre (CIRIMAT, UMR 5085 CNRS), and a 225 kV-mXCT scanner housed at the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP, Chinese Academy of Sciences). Isometric voxel size ranged from 10 to 70 lm. Each specimen was segmented in Avizo 8.0 (Visualization Sciences Group, www.vsg3d.com). (1949); (3)[START_REF] Broom | Swartkrans ape-man, Paranthropus crassidens[END_REF]; (4)Grine and Daeglin (1993); (5)[START_REF] Grine | New hominid fossils from the Swartkrans formation (1979-1986 excavations): craniodental specimens[END_REF]; (6)Broom and Robinson (1949); (7)Rampont (1994). b Owing to different wear stages, 32 M 2 s were taken into account. 14 of them belong to the same individuals as M 1 s, they were used in the analyses M 1 -M 2 . The other 18 teeth belong to the same individuals as M 3 s, they were used in the analyses M 2 -M 3 . computed. Dendrograms were obtained from the decomposition of the total variance (inertia) in between-and within-group variance. Contrary to the bgPCA, the HC does not require any a priori classification and specimens are aggregated into clusters according to the distances recorded between them. Aimed at finding how the metameric variation between fossils compares to modern human sample in the shape space, we used Euclidean distances between metameric pairs for clustering. FIG URE1EDJ surface model of a lower premolar (A-C, SKX 21204 from Swartkrans) and a lower molar (D-F, SKW 5 from Swartkrans), illustrating the landmarks collected on the tips of the dentine horns (red spheres) and semi-landmarks that run between the dentine horns (orange spheres) used to capture EDJ shape. Abbreviations: buccal (B), distal (D), lingual (L) and mesial (M). Numbers on the red spheres stand for landmarks collected on the dentine horn tips, numbers next to the ridge curves stand for number of semi-landmarks. Note that relative sizes of premolars to molars are not to scale hypoconulid dentine horn height and relative position are not recorded in the present study. In metameres M 1 -M 2 , there is a marked reduction in the height of dentine horns of both P. 
robustus and modern humans, particularly in the talonid, resulting in a flattened topology in M 2 (Figure3A,B); entoconid dentine horn is more centrally placed in M 2 (a pattern that is more marked in modern humans). Unfortunaltely, materials from early Homo and Au. africanus are not avaliable for comparison. With regard to multivariate analyses, similar magnitude of shape change is seen in both species (Figure4B,E; Supporting Information FigureS1B). The bgPC1 axis is driven by the presence of the hypoconulid, and the position and height of the hypoconid dentine horn; while the bgPC2 axis is driven by the height of the lingual dentine horns (Figure4B; Supporting Information FigureS2E-H). It is worth noticing that SK 6 M 1 -M 2 show a unique trend of shape change along the bgPC1, and SK 63 has a much shorter metameric vector in bgPC1-bgPC2 plot, increasing the within-group metameric variation in P. robustus. In shape space bgPC1-bgPC3, P. robustus shows vertically-oriented metameric vectors with similar lengths contrasted with modern humans; along bgPC3, the direction of shape change in modern humans is more variable compared to bgPC1 (Supporting Information FigureS1B). Our results of hierachichal clustering reveal that, although placed separately from modern groups, the P. robustus sample displays closer affinity to a few modern individuals, indicating a similarity in intra-individual metameric distances (Figure4E).For metameres M 2 -M 3 , a slight distal expansion of the marginal ridges is observed except for the early Homo specimen, SK 15 (Figure3C-F). A reduction in the height of dentine horns on the talonid is seen in modern humans and australopiths (a pattern that is more marked in modern humans; Figure3C,D). Changes in the relative positions of the dentine horns are widely observed: for modern humans, talonid dentine horns are more centrally placed in M 2 than in M 3 buccal view); for australopiths, the hypoconid dentine horn is more centrally placed in M 3 , more markedly in P. robustus, in addition, Au. africanus individual displays a more centrally placed entoconid in M 2 (Figure3D,F; occlusal view); for the early Homo specimen SK 15, a more centrally placed protoconid and entoconid dentine horns are seen in M 3(Figure 3E; occlusal view). With regard to multivariate analyzes, the bgPC1 exhibits shape changes in the EDJ outline (oviod to elongated, Supporting Information FigureS2I,J) and changes in the FIG URE 2 2 FIG URE 2 Comparison of metameric variation between P 3 -P 4 based on mean shapes of the EDJ, after Procrustes superimposition. EDJ ridge curves are shown in occlusal, buccal and lingual views. Colors indicate different dental positions FIG URE 3 3 FIG URE 3 Comparison of metameric variation between M 1 -M 2 , and between M 2 -M 3 , based on mean shapes of the EDJ, after Procrustes superimposition. EDJ ridge curves are shown in occlusal, buccal and lingual views. Colors indicate different dental positions. A-B: M 1 (red) compared to M 2 (blue); C-F: M 2 (red) compared to M 3 (blue) TABLE 1 1 Composition of the study sample Occlusal Specimen P 3 P 4 M 1 M 2 M 3 Provenance Age wear Citations a P. robustus SK 6 1 1 1 1 1 Mb. 1, HR Swartkrans 1.80 6 0.09 Ma-2.19 6 0.08 1-3 1 -2 Ma (Gibbon et al., 2014), 2.31-1.64 Ma (Pickering et al., 2011) SK 843 1 1 Mb. 1, HR 2-3 1 SK 61 1 1 Mb. 1, HR 1 3 SK 63 1 1 1 1 Mb. 1, HR 1 1 SKW 5 1 1 Mb. 1, HR 1-late 2 4 SKX 4446 1 1 Mb. 2 1.36 6 0.69 Ma 1-3 5 (Balter et al., 2008) Au. africanus Sts 52 1 1 1 1 Mb. 
4 Sterkfontein 3.0-2.5 Ma (White, Harris, 2-3 1 1977; Tobias, 1978; Clarke, 1994), 2.8-2.4 Ma (Vrba, 1985; but see Berger et al., 2002), 2.1 6 0.5 Ma (Schwarcz et al., 1994) Early Homo SKX 21204 1 1 Mb. 1, LB Swartkrans 1.80 6 0.09 Ma-2.19 6 0.08 1 4 Ma (Gibbon et al., 2014), 2.31-1.64 Ma (Pickering et al., 2011) SK 15 1 1 Mb. 2 1.36 6 0.69 Ma 2-3 6 (Balter et al., 2008) Extant H. sapiens 12 12 14 14/18 b 18 South Africa/Europe/East Asia 1-late 2 7 a Citations: (1) [START_REF] Robinson | The dentition of the Australopithecinae[END_REF] ; (2) Broom ACKNOWLEDGMENTS This work was supported by the Centre National de la Recherche Scientifique (CNRS), the French Ministry of Foreign Affairs, the French Embassy in South Africa through the Cultural and Cooperation Services, National Natural Science Foundation of China and the China Scholarship Council. For access to specimens we thank the following individuals and institutions: Stephany Potze (Ditsong National Museum of Natural History, Pretoria), Jean-Luc Kahn (Strasbourg University), Dr. S. Xing (Institute of Vertebrate Paleontology and Paleoanthropology, Beijing), Dr. M. Zhou (Institute of Archeology and Cultural Relics of Hubei Province, Wuhan), and Dr. C. Thèves (UMR 5288 CNRS). We thank Dr. A. Beaudet for her technical support during the imaging processing of data and the statistical analysis. We are also grateful to the Associated Editor, and two anonymous reviewers of this manuscript, for their insightful comments and suggestions. SUPPORTING INFORMATION Additional Supporting Information may be found in the online version of this article at the publisher's website.
32,972
[ "781530", "764254", "177105", "790942" ]
[ "303369", "445683", "12196", "445683", "445683", "332738", "479787", "479787", "580", "580", "12196", "445683" ]
01757334
en
[ "spi" ]
2024/03/05 22:32:10
2015
https://hal.science/hal-01757334/file/ASME_JMR_2015_Nurahmi_Schadlbauer_Caro_Husty_Wenger_HAL.pdf
Latifah Nurahmi email: latifah.nurahmi@irccyn.ec-nantes.fr Josef Schadlbauer email: josef.schadlbauer@uibk.ac.at Stéphane Caro email: stephane.caro@irccyn.ec-nantes.fr Manfred Husty email: manfred.husty@uibk.ac.at Philippe Wenger email: philippe.wenger@irccyn.ec-nantes.fr Caro Kinematic Analysis of the 3-RPS Cube Parallel Manipulator Keywords: 3-RPS-Cube, parallel manipulators, singularities, operation mode, motion type, Darboux motion teaching and research institutions in France or abroad, or from public or private research centers. Introduction Since the development of robot technology, the lower-mobility parallel manipulators have been extensively studied. One parallel manipulator of the 3-dof family is the 3-RPS Cube and was proposed by [START_REF] Huang | Motion Characteristics and Rotational Axis Analysis of Three DOF Parallel Robot Mechanisms[END_REF] [START_REF] Huang | Motion Characteristics and Rotational Axis Analysis of Three DOF Parallel Robot Mechanisms[END_REF]. The 3-RPS Cube parallel manipulator, shown in Fig. 1, is composed of a cube-shaped base, an equilateral triangular-shaped platform, and three identical legs. Each leg is composed of a revolute joint, an actuated prismatic joint and a spherical joint mounted in series. By referring to the design of the 3-RPS Cube manipulator, the type synthesis of 3dof rotational manipulators with no intersecting axes was discussed in [START_REF] Chen | Type Synthesis of 3-DOF Rotational Parallel Mechanisms With No Intersecting Axes[END_REF]. The kinematic characteristics of this mechanism were studied in [3][START_REF] Huang | Identification of Principal Screws of 3-DOF Parallel Manipulators by Quadric Degeneration[END_REF][START_REF] Huang | Analysis of Instantaneous Motions of Deficient Rank 3-RPS Parallel Manipulators[END_REF], by identifying the principal screws, and the authors showed that the manipulator belongs to the general third-order screw system, which can rotate in three dimensions and the axes do not intersect. In [6], Huang et al. showed that the mechanism is able to perform a motion along its diagonal, which is known as the Vertical Darboux Motion (VDM). Several mechanical generators of the VDM were later revealed by Lee and Hervé [START_REF] Lee | On the Vertical Darboux Motion[END_REF], in which one point in the moving platform is compelled to move in a plane. Later in [START_REF] Huang | A 3DOF Rotational Parallel Manipulator Without Intersecting Axes[END_REF], the authors showed that the manufacturing errors have little impact on the motion properties of the 3-RPS Cube parallel manipulator. By analysing the Instantaneous Screw Axes (ISA), Chen et al. showed in [START_REF] Chen | Axodes Analysis of the Multi DOF Parallel Mechanisms and Parasitic Motion[END_REF] that this mechanism performs parasitic motions, in which the translations and the rotations are coupled. By using an algebraic description of the manipulator and the Study kinematic mapping, a characterisation of the operation mode, the direct kinematics, the general motion, and the singular poses of the 3-RPS Cube parallel manipulator are discussed in more detail in this paper, which is based on [10][11][START_REF] Schadlbauer | A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF][START_REF] Schadlbauer | Operation Modes in Lower Mobility Parallel Manipulators[END_REF][START_REF] Schadlbauer | The 3-RPS parallel manipulator from an algebraic viewpoint[END_REF]. 
The derivation of the constraint equations is the first essential step to reveal the existence of only one operation mode and to solve the direct kinematics problem. In 1897, Darboux [START_REF] Bottema | Theoretical Kinematics[END_REF] studied the 3-dof motion where the vertices of a triangle are compelled to remain in the planes of a trihedron respectively. The three planes are mutually orthogonal and this is the case of the 3-RPS Cube parallel manipulator. Darboux showed that in this 3-dof motion, the workspace of each point of the moving platform is bounded by a Steiner surface, while the vertices of the moving platform remain in the planes. Under the condition that the prismatic lengths remain equal, the moving platform of the manipulator is able to perform the VDM. It follows from Bottema and Roth [START_REF] Bottema | Theoretical Kinematics[END_REF] that this motion is the result of a rotation about an axis and a harmonic translation along the same axis. In this motion, all points in the moving platform (except the geometric center of the moving platform) move in ellipses and the path of a line in the moving platform is a right-conoid surface. The singularities are examined in this paper by deriving the determinant of the Jacobian matrix of the constraint equations with respect to the Study parameters. Based on the reciprocity conditions, Joshi and Tsai in [START_REF] Joshi | Jacobian Analysis of Limited-DOF Parallel Manipulators[END_REF] developed a procedure to express the Jacobian matrix J of lower-mobility parallel manipulators that comprises both actuation and constraint wrenches. In this paper, this matrix is named the extended Jacobian matrix (J E ) of the lower-mobility parallel manipulators, as explained in [START_REF] Amine | Singularity Analysis of the H4 Robot Using Grassmann-Cayley Algebra[END_REF][START_REF] Amine | Lower-Mobility Parallel Manipulators: Geometrical Analysis, Singularities and Conceptual Design[END_REF][START_REF] Amine | Singularity analysis of 3T2R parallel mechanisms using Grassmann Cayley algebra and Grassmann geometry[END_REF][START_REF] Amine | Conceptual Design of Schonflies Motion Generators Based on the Wrench Graph[END_REF][START_REF] Amine | Singularity Conditions of 3T1R Parallel Manipulators With Identical Limb Structures[END_REF]. The rows of J E are composed of n linearly independent actuation wrenches plus (6 -n) linearly independent constraint wrenches. In a general configuration, the constraint wrench system, W c , must be reciprocal to the twist system of the moving platform of the parallel manipulator. A constraint singularity occurs when the (6-n) constraint wrench system W c degenerates. In such a configuration, at least one of the initial constrained motions will no longer be constrained. As a result, the mechanism gains one or several dof . This can lead to a change in the motion pattern of the mechanism, which then can switch to another operation mode. By locking the actuated joints of the parallel manipulator, the moving platform must be fully constrained, i.e., the system spanned by the actuation wrench system, W a , and constraint wrench system, W c , must span a 6-system. An actuation singularity hence occurs when this overall wrench system of the manipulator degenerates, i.e., is not a 6-system any more, while the manipulator does not reach a constraint singularity. This concept will be applied in this paper to illustrate the singularities of the extended Jacobian matrix (J E ) of the 3-RPS Cube parallel manipulator. 
It allows us to investigate the actuation and constraint singularities that occur during the manipulator motion. This paper is organized as follows: A detailed description of the manipulator architecture is given in Section 2. The constraint equations of the manipulator are expressed in Section 3. These equations are used to identify the operation mode(s) and the solutions of the direct kinematics of the manipulator in Section 4. In Section 5, the conditions on the leg lengths for the manipulator to reach a singularity configuration are presented. Eventually, the general motion and the Vertical Darboux Motion (VDM) of the manipulator are reviewed in Sections 6 and 7. Manipulator Architecture The 3-RPS Cube parallel manipulator shown in Fig. 1, is composed of a cube-shaped base, an equilateral triangular-shaped platform and three identical legs. Each leg is composed of a revolute joint, an actuated prismatic joint and a spherical joint mounted in series. The origin O of the fixed frame Σ 0 is shifted along σ 0 = [h 0 , h 0 , h 0 ] from the center of the base in order to fulfill the identity condition (when the fixed frame and the moving frame are coincident), as shown by the large and red dashed box in Fig. 1. Likewise, the origin P of the moving frame Σ 1 is shifted along σ 1 = [h 1 , h 1 , h 1 ] as described by the small and blue dashed box in Fig. 1. The revolute joint in the i-th (i = 1 . . . 3) leg is located at point A i , its axis being along vector s i , while the spherical joint is located at point B i , the i-th corner of the moving platform. The distance between the origin O of the fixed frame Σ 0 and point A i is equal to h 0 √ 2. The axes s 1 , s 2 and s 3 are orthogonal to each other. The moving platform has an equilateral triangle shape and its circumradius is equal to d 1 = h 1 √ 6/3. Each pair of vertices A i and B i (i = 1, 2, 3 ) is connected by a prismatic joint. The prismatic length is denoted by r i . Since the i-th prismatic length is orthogonal to the revolute axis s i , the leg A i B i moves in a plane normal to s i . As a consequence, there are five parameters, namely r 1 , r 2 , r 3 , h 0 , and h 1 . h 0 and h 1 are design parameters, while r 1 , r 2 , and r 3 are joint variables that determine the manipulator motion. Constraint Equations In this section, the constraint equations are expressed whose solutions illustrate the possible poses of the moving platform (coordinate frame Σ 1 ) with respect to Σ 0 . In the following, we use projective coordinates to define the position vectors of points A i and B i . 
The coordinates of points A i and points B i expressed in Σ 0 and Σ 1 are respectively: r 0 A 1 = [1, 0, -h 0 , -h 0 ] T , r 1 B 1 = [1, 0, -h 1 , -h 1 ] T , r 0 A 2 = [1, -h 0 , 0, -h 0 ] T , r 1 B 2 = [1, -h 1 , 0, -h 1 ] T , r 0 A 3 = [1, -h 0 , -h 0 , 0] T , r 1 B 3 = [1, -h 1 , -h 1 , 0] T (1) To obtain the coordinates of points B 1 , B 2 and B 3 expressed in Σ 0 , the Study parametrization of a spatial Euclidean transformation matrix M ∈ SE(3) is used as follows: M =    x 2 0 + x 2 1 + x 2 2 + x 2 3 0 T 3×1 M T M R    (2) where M T and M R represent the translational and rotational parts of transformation matrix M, respectively, and are expressed as follows: M T =       2(-x 0 y 1 + x 1 y 0 -x 2 y 3 + x 3 y 2 ) 2(-x 0 y 2 + x 1 y 3 + x 2 y 0 -x 3 y 1 ) 2(-x 0 y 3 -x 1 y 2 + x 2 y 1 + x 3 y 0 )       , M R =       x 2 0 + x 2 1 -x 2 2 -x 2 3 2(x 1 x 2 -x 0 x 3 ) 2(x 1 x 3 + x 0 x 2 ) 2(x 1 x 2 + x 0 x 3 ) x 2 0 -x 2 1 + x 2 2 -x 2 3 2(x 2 x 3 -x 0 x 1 ) 2(x 1 x 3 -x 0 x 2 ) 2(x 2 x 3 + x 0 x 1 ) x 2 0 -x 2 1 -x 2 2 + x 2 3       (3) The parameters x 0 , x 1 , x 2 , x 3 , y 0 , y (x 0 : x 1 : x 2 : x 3 : y 0 : y 1 : y 2 : y 3 ) T = (0 : 0 : 0 : 0 : 0 : 0 : 0 : 0) T Every projective point X will represent a spatial Euclidean displacement, if it fulfills the following equation and inequality: x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0, x 2 0 + x 2 1 + x 2 2 + x 2 3 = 0 (5) Those two conditions will be used in the following computations to simplify the algebraic expressions. The coordinates of points B i expressed in Σ 0 are obtained by: r 0 B i = M r 1 B i i = 0, . . . , 3 (6) As the coordinates of all points are given in terms of Study parameters, design parameters and joint variables, the constraint equations can be obtained by examining the manipulator architecture. The leg connecting points A i and B i is orthogonal to the axis s i of the i-th revolute joint, expressed as follows: s 1 = [0, 1, 0, 0] T s 2 = [0, 0, 1, 0] T s 3 = [0, 0, 0, 1] T (7) Accordingly, the scalar product of vector (r 0 B ir 0 A i ) and vector s i vanishes, namely: (r 0 B i -r 0 A i ) T s i = 0 (8) After computing the corresponding scalar products and removing the common denom- inators (x 2 0 + x 2 1 + x 2 2 + x 2 3 ), the following three equations come out: g 1 : -h 1 x 0 x 2 + h 1 x 0 x 3 -h 1 x 1 x 2 -h 1 x 1 x 3 -x 0 y 1 + x 1 y 0 -x 2 y 3 + x 3 y 2 = 0 g 2 : h 1 x 0 x 1 -h 1 x 0 x 3 -h 1 x 1 x 2 -h 1 x 2 x 3 -x 0 y 2 + x 1 y 3 + x 2 y 0 -x 3 y 1 = 0 g 3 : -h 1 x 0 x 1 + h 1 x 0 x 2 -h 1 x 1 x 3 -h 1 x 2 x 3 -x 0 y 3 -x 1 y 2 + x 2 y 1 + x 3 y 0 = 0 (9) To derive the constraint equations corresponding to the leg lengths, the joint variables r i are given and we assume that the distance between points A i and B i is constant, i.e. r i = const. It follows that point B i has the freedom to move along a circle of center A i and the distance equation can be formulated as (r 0 B i -r 0 A i ) 2 = r 2 i . 
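The mapping of Eqs. (2)-(3) is easy to check numerically. The following sketch (Python/NumPy, as an illustration only) assembles the homogeneous transformation from a Study point, normalises it by x0^2 + x1^2 + x2^2 + x3^2 so that the rotational block is a proper rotation matrix, and asserts beforehand that the point satisfies the Study quadric of Eq. (5).

import numpy as np

def study_to_matrix(x, y):
    # x = (x0, x1, x2, x3), y = (y0, y1, y2, y3): a point on the Study quadric
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    assert abs(np.dot(x, y)) < 1e-12, "x0*y0 + x1*y1 + x2*y2 + x3*y3 = 0 is violated"
    d = x0**2 + x1**2 + x2**2 + x3**2
    R = np.array([[x0**2 + x1**2 - x2**2 - x3**2, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
                  [2*(x1*x2 + x0*x3), x0**2 - x1**2 + x2**2 - x3**2, 2*(x2*x3 - x0*x1)],
                  [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0**2 - x1**2 - x2**2 + x3**2]])
    t = 2*np.array([-x0*y1 + x1*y0 - x2*y3 + x3*y2,
                    -x0*y2 + x1*y3 + x2*y0 - x3*y1,
                    -x0*y3 - x1*y2 + x2*y1 + x3*y0])
    M = np.eye(4)
    M[1:, 1:] = R / d        # rotational part M_R, normalised
    M[1:, 0]  = t / d        # translational part M_T, normalised
    return M

# identity displacement: x = (1, 0, 0, 0), y = (0, 0, 0, 0)
print(study_to_matrix(np.array([1.0, 0, 0, 0]), np.zeros(4)))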
As a consequence, the following three equations are obtained: g 4 : 2h 2 0 x 2 0 + 2h 2 0 x 2 1 + 2h 2 0 x 2 2 + 2h 2 0 x 2 3 -4h 0 h 1 x 2 0 + 4h 0 h 1 x 2 1 + 2h 2 1 x 2 0 + 2h 2 1 x 2 1 + 2h 2 1 x 2 2 + 2h 2 1 x 2 3 -8h 0 h 1 x 2 x 3 -r 2 1 x 2 0 -r 2 1 x 2 1 -r 2 1 x 2 2 -r 2 1 x 2 3 -4h 0 x 0 y 2 -4h 0 x 0 y 3 -4h 0 x 1 y 2 + 4h 0 x 1 y 3 + 4h 0 x 2 y 0 + 4h 0 x 2 y 1 + 4h 0 x 3 y 0 -4h 0 x 3 y 1 + 4h 1 x 0 y 2 + 4h 1 x 0 y 3 + 4y 2 0 -4h 1 x 1 y 2 + 4h 1 x 1 y 3 -4h 1 x 2 y 0 + 4h 1 x 2 y 1 -4h 1 x 3 y 0 + 4y 2 1 -4h 1 x 3 y 1 + 4y 2 2 + 4y 2 3 = 0 g 5 : 2h 2 0 x 2 0 + 2h 2 0 x 2 1 + 2h 2 0 x 2 2 + 2h 2 0 x 2 3 -4h 0 h 1 x 2 0 + 4h 0 h 1 x 2 2 + 2h 2 1 x 2 0 + 2h 2 1 x 2 1 + 2h 2 1 x 2 2 + 2h 2 1 x 2 3 -8h 0 h 1 x 1 x 3 -r 2 2 x 2 0 -r 2 2 x 2 1 -r 2 2 x 2 2 -r 2 2 x 2 3 -4h 0 x 0 y 1 -4h 0 x 0 y 3 + 4h 0 x 1 y 0 -4h 0 x 1 y 2 + 4h 0 x 2 y 1 -4h 0 x 2 y 3 + 4h 0 x 3 y 0 + 4h 0 x 3 y 2 + 4h 1 x 0 y 1 + 4h 1 x 0 y 3 + 4y 2 0 -4h 1 x 1 y 0 -4h 1 x 1 y 2 + 4h 1 x 2 y 1 -4h 1 x 2 y 3 -4h 1 x 3 y 0 + 4y 2 1 + 4h 1 x 3 y 2 + 4y 2 2 + 4y 2 3 = 0 g 6 : 2h 2 0 x 2 0 + 2h 2 0 x 2 1 + 2h 2 0 x 2 2 + 2h 2 0 x 2 3 -4h 0 h 1 x 2 0 + 4h 0 h 1 x 2 3 + 2h 2 1 x 2 0 + 2h 2 1 x 2 1 + 2h 2 1 x 2 2 + 2h 2 1 x 2 3 -8h 0 h 1 x 1 x 2 -r 2 3 x 2 0 -r 2 3 x 2 1 -r 2 3 x 2 2 -r 2 3 x 2 3 -4h 0 x 0 y 1 -4h 0 x 0 y 2 + 4h 0 x 1 y 0 + 4h 0 x 1 y 3 + 4h 0 x 2 y 0 -4h 0 x 2 y 3 -4h 0 x 3 y 1 + 4h 0 x 3 y 2 + 4h 1 x 0 y 1 + 4h 1 x 0 y 2 + 4y 2 0 -4h 1 x 1 y 0 + 4h 1 x 1 y 3 -4h 1 x 2 y 0 -4h 1 x 2 y 3 -4h 1 x 3 y 1 + 4y 2 1 + 4h 1 x 3 y 2 + 4y 2 2 + 4y 2 3 = 0 (10) The Study equation in Eq. ( 5) is added since all solutions have to be within the Study quadric, i.e.: g 7 : x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 (11) Under the condition (x 2 0 + x 2 1 + x 2 2 + x 2 3 = 0), we can find all possible points in P 7 that satisfy those seven equations. To exclude the exceptional generator (x 0 = x 1 = x 2 = x 3 = 0), we add the following normalization equation: g 8 : x 2 0 + x 2 1 + x 2 2 + x 2 3 -1 = 0 (12) It assures that there is no point of the exceptional generator that appears as a solution. However, for each projective solution point, we obtain two affine representatives. This has to be taken into account for the enumeration of the number of solutions. Solving the System Solving the direct kinematics means finding all possible points in P 7 that fulfill the set of equations {g 1 , ..., g 8 }. Those points are the solutions of the eight constraint equations that represent all feasible poses of the 3-RPS Cube parallel manipulator. They also depend on the design parameters (h 0 , h 1 ) and the joint variables (r 1 , r 2 , r 3 ). The set of eight constraint equations are always written as a polynomial ideal with variables {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 } over the coefficient ring C[h 0 , h 1 , r 1 , r 2 , r 3 ]. Although the solutions of the direct kinematics can be complex, they are still considered as solutions. To apply the method of algebraic geometry, the ideal is now defined as: I =< g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 > (13) The vanishing set V(I) of the ideal I comprises all points in P 7 for which all equations vanish, namely all solutions of the direct kinematic problem. At this point, the following ideal is examined, which is independent of the joints variables r 1 , r 2 and r 3 : J =< g 1 , g 2 , g 3 , g 7 > (14) The primary decomposition is computed to verify if the ideal J is the intersection of several smaller ideals. 
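Before turning to the decomposition of J, the orthogonality constraints can be reproduced symbolically as a check. The sketch below uses SymPy, only as an illustration (the original derivation was presumably carried out in a dedicated computer-algebra system): it rebuilds the Study transformation of Eqs. (2)-(3), expresses each vertex B_i in the fixed frame and expands the condition of Eq. (8), which returns g1, g2 and g3 of Eq. (9) up to a constant factor; the distance conditions g4-g6 can be generated in the same way.

import sympy as sp

x0, x1, x2, x3, y0, y1, y2, y3, h0, h1 = sp.symbols('x0 x1 x2 x3 y0 y1 y2 y3 h0 h1')
d = x0**2 + x1**2 + x2**2 + x3**2
R = sp.Matrix([[x0**2 + x1**2 - x2**2 - x3**2, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
               [2*(x1*x2 + x0*x3), x0**2 - x1**2 + x2**2 - x3**2, 2*(x2*x3 - x0*x1)],
               [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0**2 - x1**2 - x2**2 + x3**2]])
t = 2*sp.Matrix([-x0*y1 + x1*y0 - x2*y3 + x3*y2,
                 -x0*y2 + x1*y3 + x2*y0 - x3*y1,
                 -x0*y3 - x1*y2 + x2*y1 + x3*y0])

A  = [sp.Matrix(v) for v in ([0, -h0, -h0], [-h0, 0, -h0], [-h0, -h0, 0])]   # Eq. (1)
Bp = [sp.Matrix(v) for v in ([0, -h1, -h1], [-h1, 0, -h1], [-h1, -h1, 0])]   # Eq. (1)
s  = [sp.Matrix(v) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]               # Eq. (7)

for i in range(3):
    B0 = (t + R * Bp[i]) / d                         # vertex B_i expressed in Sigma_0
    gi = sp.expand(((B0 - A[i]).T * s[i])[0] * d)    # Eq. (8), denominator cleared
    print(sp.factor(sp.simplify(gi)))                # 2*g_i of Eq. (9)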
The primary decomposition returns several J i in which J = i J i . In other words, the vanishing set is given by V(J ) = i V(J i ). It expresses that the variety V(J ) is the union of some other or simpler varieties V(J i ). The primary decomposition geometrically tells us that the intersection of those equa- Paper JMR-14-1262, corresponding author's last name: CARO tions will split into smaller parts. Indeed, it turns out that the ideal J is decomposed into two components J i as: J = 2 i=1 J i (15) with the results of primary decompositions 1 as follows: J 1 = < x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 , ... > J 2 = < x 0 , x 1 , x 2 , x 3 > (16) An inspection of the vanishing set V(J 2 ∪ g 8 ) yields an empty result, since the set of polynomials {x 0 , x 1 , x 2 , x 3 , x 2 0 + x 2 1 + x 2 2 + x 2 3 -1 = 0} can never vanish simultaneously over R or C. Therefore, only one component is left and as a consequence, the manipulator has only one operation mode, which is defined by J 1 . To complete the analysis, the remaining equations have to be added by writing: K i = J i ∪ < g 4 , g 5 , g 6 , g 8 > ( 17 ) Since there is only one component, the vanishing set of I is now defined by: V(I) = V(K 1 ) (18) From the primary decomposition, it is shown that the ideal I cannot be split and K 1 is named I hereafter. Solutions for arbitrary design parameters The 3-RPS Cube manipulator generally has only one operation mode, which is described by the ideal I. The solutions of the direct kinematic problem in this operation mode will be given for arbitrary values of design parameters (h 0 , h 1 ). To find out the Hilbert dimension of the ideal I, the certain values of design parameters are chosen as h 0 = 2 m dim(I) = 0 ( 19 ) The dim denotes the dimension over C[h 0 , h 1 , r 1 , r 2 , r 3 ] and shows that the number of solutions to the direct kinematic problems is finite in this general mode. The number of solutions and the solutions themselves were computed via an ordered Gröbner basis, which led to a univariate polynomial of degree 32. As two solutions of a system describe the same pose (position and orientation) of the moving platform, the number of solutions has to be halved into 16 solutions. | V(I) |= 16 (20) Therefore, there are at most 16 different solutions for given general design parameters and joint variables, i.e., there are theoretically 16 feasible poses of the moving platform for given joint variables. Notably, for arbitrarily values of design parameters and joint variables, some solutions might be complex. Solutions for equal leg lengths In the following subsection, it is assumed that all legs have the same length. The corresponding prismatic lengths are r 1 = r 2 = r 3 = r. Similar computations can be performed which were done in the previous subsection to enumerate the Hilbert dimension of the ideal. The Hilbert dimension is calculated and it follows that: dim(I) = 0 ( 21 ) This shows that the solutions of the direct kinematics problem with equal leg lengths are finite. When the number of solutions is computed for the system, it has to be halved and the following result is obtained: | V(I) |= 16 (22) The number of solutions for equal leg lengths is the same number as the solutions for arbitrary design parameters. Due to the fact that there are fewer parameters, the Gröebner basis can be computed without specifying any value. The solutions of Study parameters in the case of equal leg lengths are x 1 = x 2 = x 3 and y 1 = y 2 = y 3 . 
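A certified enumeration of all 16 poses requires the Gröbner-basis computation described above, but individual real solutions of the direct kinematics can also be obtained numerically. The sketch below (Python/SciPy, with the design values h0 = 2 m, h1 = 1 m and arbitrarily chosen leg lengths; the starting guess is hypothetical and different guesses lead to different poses, so this finds at most one pose, not the complete solution set) encodes g1-g8 directly from the geometry and drives their residuals towards zero.

import numpy as np
from scipy.optimize import least_squares

h0, h1 = 2.0, 1.0
r = np.array([1.2, 1.5, 1.6])                                   # joint variables (example)
A  = np.array([[0, -h0, -h0], [-h0, 0, -h0], [-h0, -h0, 0]], float)
Bp = np.array([[0, -h1, -h1], [-h1, 0, -h1], [-h1, -h1, 0]], float)
s  = np.eye(3)

def residuals(q):
    x, y = q[:4], q[4:]
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    d = x @ x
    R = np.array([[x0*x0 + x1*x1 - x2*x2 - x3*x3, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
                  [2*(x1*x2 + x0*x3), x0*x0 - x1*x1 + x2*x2 - x3*x3, 2*(x2*x3 - x0*x1)],
                  [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0*x0 - x1*x1 - x2*x2 + x3*x3]]) / d
    t = 2*np.array([-x0*y1 + x1*y0 - x2*y3 + x3*y2,
                    -x0*y2 + x1*y3 + x2*y0 - x3*y1,
                    -x0*y3 - x1*y2 + x2*y1 + x3*y0]) / d
    B = (R @ Bp.T).T + t                                         # platform vertices in Sigma_0
    res  = [np.dot(B[i] - A[i], s[i]) for i in range(3)]         # g1..g3
    res += [np.dot(B[i] - A[i], B[i] - A[i]) - r[i]**2 for i in range(3)]  # g4..g6
    res += [np.dot(x, y), x @ x - 1.0]                           # g7 (Study quadric) and g8
    return np.array(res)

q0  = np.array([0.95, 0.15, -0.10, 0.20, 0.0, 0.10, -0.05, 0.10])   # hypothetical starting guess
sol = least_squares(residuals, q0)
print(np.round(sol.x, 4), float(np.max(np.abs(sol.fun))))            # residuals ~0 if a pose was found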
One manipulator pose with equal prismatic lengths leads to the following solutions of Study parameters: x 0 = 1 2 -h 1 3h 2 0 -2h 2 1 + 3r 2 -3h 0 -2h 1 h1 x 1 = 1 6 √ 3 h 1 3h 2 0 -2h 2 1 + 3r 2 -3h 0 + 2h 1 h1 y 0 = 1 12 √ 3 h 1 3h 2 0 -2h 2 1 + 3r 2 -3h 0 + 2h 1 3/2 h1 2 y 1 = -1 12h 1 -h 1 3h 2 0 -2h 2 1 + 3r 2 -3h 0 -2h 1 2h 1 -3h 0 + 3h 2 0 -2h 2 1 + 3r 2 x 1 = x 2 = x 3 y 1 = y 2 = y 3 (23) Operation mode analysis In the previous section, the joint variables (r 1 , r 2 , r 3 ) were fixed. In this section, they can change, i.e., the behaviour of the mechanism is studied when the prismatic joints are actuated. The joint variables (r 1 , r 2 and r 3 ) are used as unknowns and the computation of the Hilbert dimension shows that: dim(I) = 3 ( 24 ) where dim denotes the dimension over C[h 0 , h 1 ] and shows that the manipulator has 3 dof in general motion. The matrix M ∈ SE(3) in Eq. ( 2) represents a discrete screw motion from the pose corresponding to the identity condition, where Σ 0 and Σ 1 are coincident, to the transformed pose of Σ 1 with respect to Σ 0 . A discrete screw motion is the concatenation of a rotation about an axis and a translation along the same axis. The axis A, the translational distance s, and the rotational angle ϕ of the discrete screw motion can be computed from the matrix M. This information can also be obtained directly from the Study parameters, as they contain the information on the transformation. The Plücker coordinates L = (p 0 : p 1 : p 2 : p 3 : p 4 : p 5 ) of the corresponding discrete screw motion are expressed as: p 0 = (-x 2 1 -x 2 2 -x 2 3 )x 1 , p 1 = (-x 2 1 -x 2 2 -x 2 3 )x 2 , p 2 = (-x 2 1 -x 2 2 -x 2 3 )x 3 , p 3 = x 0 y 0 x 1 -(-x 2 1 -x 2 2 -x 2 3 )y 1 , p 4 = x 0 y 0 x 2 -(-x 2 1 -x 2 2 -x 2 3 )y 2 , p 5 = x 0 y 0 x 3 -(-x 2 1 -x 2 2 -x 2 3 )y 3 . (25) The unit vector of an axis A of the corresponding discrete screw motion is given by [p 0 , p 1 , p 2 ] T . The Plücker coordinates of a line should satisfy the following condition [START_REF] Pottmann | Computational Line Geometry[END_REF]: p 0 p 3 + p 1 p 4 + p 2 p 5 = 0 (26) The rotational angle ϕ can be enumerated directly from cos ϕ 2 = x 0 , whereas the translational distance s of the transformation can be computed from the Study parameters, as follows: s = 2y 0 x 2 1 + x 2 2 + x 2 3 ( 27 ) The following example shows the manipulator poses by solving the direct kinematic problem. Arbitrary values are assigned to the design parameters and joint variables as The first solution of the direct kinematics is depicted in Fig. 2(a), with (x 0 : x 1 : x 2 : follows: h 0 = 2 m, h 1 = 1 m, r 1 = 1.2 m, x 3 : y 0 : y 1 : y 2 : y 3 ) = (-0.961 : -0.189, -0.153 : 0.128 : -0.007 : 0.304 : -0.250 : 0.089). The discrete screw motion of the moving platform from identity into the actual pose in Singularity Conditions of the Manipulator The manipulator reaches a singular configuration when the determinant of the Jacobian matrix vanishes. The Jacobian matrix is the matrix of all first order partial derivatives of eight constraint equations {g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 } with respect to {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 }. Since the manipulator has one operation mode, the singular configurations occur within this operation mode only. In the kinematic image space, the singular poses are computed by taking the Jacobian matrix from I: The vanishing condition det(J) = 0 of the determinant J is denoted by S. 
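Given a solution of the direct kinematics, the discrete screw of Eqs. (25)-(27) follows directly from the Study parameters. The sketch below evaluates it for the first solution quoted above; since those values are rounded to three decimals in the text, the Plücker condition of Eq. (26) is only satisfied approximately here.

import numpy as np

# Study parameters of the first direct-kinematics solution quoted in the text
x = np.array([-0.961, -0.189, -0.153, 0.128])
y = np.array([-0.007,  0.304, -0.250, 0.089])
x0, x1, x2, x3 = x
y0 = y[0]
n2 = x1**2 + x2**2 + x3**2

# Pluecker coordinates of the screw axis, Eq. (25)
p = np.array([-n2*x1, -n2*x2, -n2*x3,
              x0*y0*x1 + n2*y[1], x0*y0*x2 + n2*y[2], x0*y0*x3 + n2*y[3]])
print("Pluecker condition p0*p3 + p1*p4 + p2*p5 =", p[:3] @ p[3:])   # ~0, Eq. (26)

axis = p[:3] / np.linalg.norm(p[:3])          # unit vector of the axis A
phi  = 2*np.arccos(np.clip(x0, -1.0, 1.0))    # rotation angle, from cos(phi/2) = x0
s_tr = 2*y0 / n2                              # translation along the axis, Eq. (27)
print("axis:", np.round(axis, 3), " angle [rad]:", round(phi, 3), " translation [m]:", round(s_tr, 3))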
The factorization of the equation of the Jacobian determinant splits it into two components, namely S 1 : det 1 (J) = 0 and S 2 : det 2 (J) = 0. det(J) = 0 det 1 (J) det 2 (J) = 0 (29) It shows that the overall determinant will vanish if either S 1 or S 2 vanishes or both S 1 and S 2 vanish simultaneously. By adding the expression of the Jacobian determinant into the system I, the new ideal associated with the singular poses can be defined as: L i = I ∪ S i i = 1, 2 (30) The ideals now consist of a set of nine equations L i =< g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 , g 9 >. The ninth equation is the determinant of the Jacobian matrix. In mechanics, the singularity surface is desirable also in the joint space R, where R is the polynomial ring over variables r 1 , r 2 and r 3 . To obtain the singularity surface in R, the following projections are determined from ideals L i : L i → R i = 1, 2 (31) Algebraically, each projection is an elimination of Study parameters from the ideal L i and is mapped onto one equation generated by r 1 , r 2 and r 3 . It was not possible to compute the elimination in general, thus we assigned some values to the design parameters, namely h 0 = 2 m and h 1 = 1 m. The eight Study parameters x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 were eliminated to obtain a single polynomial in r 1 , r 2 and r 3 . For the system L 1 , the elimination yields a polynomial of degree four in r 1 , r 2 and r 3 in Eq. ( 32) and its zero set of polynomial is plotted in Fig. 3. By taking a point on this surface, we are able to compute the direct kinematics of at least one singularity pose. r 4 1 -r 2 1 r 2 2 + r 4 2 -r 2 1 r 2 3 -r 2 2 r 2 3 + r 4 3 -2r 2 1 -2r 2 2 -2r 2 3 -20 = 0 (32) Due to the heavy elimination process of Study parameters from ideal L 2 , some arbitrary values have been assigned to the joint variables r 1 = 2 m and r 2 = 1.7 m. Then the elimination can be carried out and the result is a univariate polynomial of degree 64 in r 3 . Let us consider one singularity configuration of the manipulator when the moving frame Σ 1 coincides with the fixed frame Σ 0 and all joint variables have the same values. The system L 2 is now solved by assigning the joint variables as r 1 = r 2 = r 3 . The elimination process returns a univariate polynomial of degree 24 in r 3 . The real solutions of joint variables in this condition is r 1 = r 2 = r 3 = √ 2 m. The coordinates of points B 1 , B 2 and B 3 can be determined by solving the direct kinematics. Accordingly, we can form the extended Jacobian matrix (J E ) of the manipulator, which is based on the screw theory. The rows of the extended Jacobian matrix J E are composed of n actuation wrenches W a and (6 -n) constraint wrenches W c . 
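The joint-space surface of Eq. (32) can be explored numerically before the wrench system is assembled. Treating Eq. (32) as a quadratic in r3^2 gives, for prescribed r1 and r2, the leg lengths r3 at which the component S1 of the singularity locus is crossed; the sketch below uses the same design values h0 = 2 m and h1 = 1 m for which Eq. (32) was derived.

import numpy as np

def S1(r1, r2, r3):
    # left-hand side of Eq. (32), valid for h0 = 2 m and h1 = 1 m
    return (r1**4 - r1**2*r2**2 + r2**4 - r1**2*r3**2 - r2**2*r3**2 + r3**4
            - 2*r1**2 - 2*r2**2 - 2*r3**2 - 20)

# Eq. (32) as a quadratic in u = r3**2:  u**2 - (r1**2 + r2**2 + 2)*u + c = 0
r1, r2 = 2.0, 1.7
c = r1**4 - r1**2*r2**2 + r2**4 - 2*r1**2 - 2*r2**2 - 20
roots = np.roots([1.0, -(r1**2 + r2**2 + 2.0), c])
r3_singular = [float(np.sqrt(u.real)) for u in roots if abs(u.imag) < 1e-9 and u.real > 0]
print(r3_singular, [S1(r1, r2, v) for v in r3_singular])   # residuals of Eq. (32) are ~0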
Since the Paper JMR-14-1262, corresponding author's last name: CARO By considering that the prismatic joints are actuated, each leg applies one actuation force whose axis is along the direction of the corresponding actuated joint u i , as follows: F a1 = [ u 1 , r 0 B 1 × u 1 ] F a2 = [ u 2 , r 0 B 2 × u 2 ] F a3 = [ u 3 , r 0 B 3 × u 3 ] W a = span( F a1 , F a2 , F a3 ) (33) Due to the manipulator architecture, each leg applies one constraint force, which is perpendicular to the actuated prismatic joint and parallel to the axis s i of the revolute joint, written as: F c1 = [ s 1 , r 0 B 1 × s 1 ] F c2 = [ s 2 , r 0 B 2 × s 2 ] F c3 = [ s 3 , r 0 B 3 × s 3 ] W c = span( F c1 , F c2 , F c3 ) (34) By collecting all components of the extended Jacobian matrix, we obtained: J T E = F a1 F a2 F a3 F c1 F c2 F c3 (35) The degeneracy of matrix J E indicates that the manipulator reaches a singularity configuration. We can observe the pose of the manipulator when r 1 = r 2 = r 3 = √ 2 m, the matrix J E in this pose is rank deficient, while neither the constraint wrench system nor the actuation wrench system degenerates, i.e. rank(J E ) = 5, rank(W a ) = 3, and rank(W c ) = 3. This means that the manipulator reaches an actuation singularity. By examining the null space of the degenerate matrix J E , the uncontrolled motion (infinitesimal gain motion) of the moving platform can be obtained. This uncontrolled motion is characterized by a zero-pitch twist that is reciprocal to all constraint and actuation wrenches. It is denoted by s λ and is described in Eq. ( 36). This singularity posture is depicted in Fig. 4, the uncontrolled motion of the moving platform is along the purple line. s T λ = 1 1 1 0 0 0 (36) General Motion The set of eight constraint equations is written as a polynomial ideal I with variables x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 over the coefficient ring C[h 0 , h 1 , r 1 , r 2 , r 3 ]. Paper JMR-14-1262, corresponding author's last name: CARO I =< g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 > The general motion performed by the 3-RPS Cube parallel manipulator is characterized by solving the ideal I. The equations g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 from ideal I can be solved linearly for variables y 0 , y 1 , y 2 , y 3 , R 1 , R 2 , R 3 [START_REF] Schadlbauer | A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF], R i being the square of the prismatic lengths, i.e., R i = r 2 i , and δ = x 2 0 + x 2 1 + x 2 2 + x 2 3 . Hence, the Study parameters become: y 0 = h 1 (x 2 1 x 2 + x 2 1 x 3 + x 1 x 2 2 + x 1 x 2 3 + x 2 2 x 3 + x 2 x 2 3 ) δ y 1 = - h 1 (x 2 0 x 2 -x 2 0 x 3 + x 0 x 2 2 + x 0 x 2 3 -x 2 2 x 3 + x 2 x 2 3 ) δ y 2 = h 1 (x 2 0 x 1 -x 2 0 x 3 -x 0 x 2 1 -x 0 x 2 3 -x 2 1 x 3 + x 1 x 2 3 ) δ y 3 = - h 1 (x 2 0 x 1 -x 2 0 x 2 + x 0 x 2 1 + x 0 x 2 2 -x 2 1 x 2 + x 1 x 2 2 ) δ (38) The terms R i 2 are also expressed in terms of x 0 , x 1 , x 2 , x 3 . The remaining Study parameters are still linked in equation g 8 : x 2 0 + x 2 1 + x 2 2 + x 2 3 -1 = 0, which amounts to a hypersphere equation in space (x 0 , x 1 , x 2 , x 3 ). Accordingly, the transformation matrix is obtained. However, only the translational part of the transformation matrix depends on parameters (x 0 , x 1 , x 2 , x 3 ). 
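Before continuing with the general motion, the actuation-singularity test described above can be reproduced numerically. In the pose reached with r1 = r2 = r3 = sqrt(2) m the moving frame coincides with the fixed frame, so the platform vertices take the coordinates of Eq. (1) and the wrenches of Eqs. (33)-(34) can be assembled directly; the sketch below (NumPy, with h0 = 2 m and h1 = 1 m as above) confirms that J_E has rank 5 and recovers the uncontrolled zero-pitch twist of Eq. (36) from its null space, up to sign and scale.

import numpy as np

h0, h1 = 2.0, 1.0
A = np.array([[0, -h0, -h0], [-h0, 0, -h0], [-h0, -h0, 0]], float)
B = np.array([[0, -h1, -h1], [-h1, 0, -h1], [-h1, -h1, 0]], float)   # identity pose, r_i = sqrt(2)
s = np.eye(3)

def force_wrench(f, point):
    # Pluecker (ray) coordinates of a pure force f acting along a line through `point`
    return np.hstack([f, np.cross(point, f)])

rows = []
for i in range(3):
    u = (B[i] - A[i]) / np.linalg.norm(B[i] - A[i])
    rows.append(force_wrench(u, B[i]))       # actuation force F_ai, Eq. (33)
for i in range(3):
    rows.append(force_wrench(s[i], B[i]))    # constraint force F_ci, Eq. (34)
JE = np.array(rows)

print("rank(J_E) =", np.linalg.matrix_rank(JE))    # 5 -> actuation singularity
_, _, Vt = np.linalg.svd(JE)
n = Vt[-1]                                         # null vector of J_E
twist = np.hstack([n[3:], n[:3]])                  # swap blocks: reciprocal twist (omega, v)
print("uncontrolled twist ~", np.round(twist / np.abs(twist).max(), 3))   # [1 1 1 0 0 0] up to sign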
M T =       2h 1 (x 0 x 2 -x 0 x 3 + x 1 x 2 + x 1 x 3 ) -2h 1 (x 0 x 1 -x 0 x 3 -x 1 x 2 -x 2 x 3 ) 2h 1 (x 0 x 1 -x 0 x 2 + x 1 x 3 + x 2 x 3 )       (39) This parametrization provides us with an interpretation of the general motion performed by the manipulator. The moving platform of the manipulator is capable of all orientations determined by (x 0 , x 1 , x 2 , x 3 ). The translational motion is coupled to the orientations via Eq. (39). The position of any point in the moving platform ([1, x, y, z] T ) with respect to the fixed frame Σ 0 ([1, X, Y, Z] T ) during the motion is determined by: x 0 = 1, x 1 = x 2 = x 3 = 0 : r 0 C 0 = [1, -h 1 , -h 1 , -h 1 ] T x 1 = 1, x 0 = x 2 = x 3 = 0 : r 0 C 1 = [1, -h 1 , h 1 , h 1 ] T x 2 = 1, x 0 = x 1 = x 3 = 0 : r 0 C 2 = [1, h 1 , -h 1 , h 1 ] T x 3 = 1, x 0 = x 1 = x 2 = 0 : r 0 C 3 = [1, h 1 , h 1 , -h 1 ] T (42) C 0 , C 1 , C 2 and C 3 are the vertices of a tetrahedron C as shown in Fig. 5. Those points correspond to the poses of the moving platform subjected to the actuation singularities. The uncontrolled motions of the moving platform are characterized by zero-pitch twists that intersect the geometric center of the moving platform and the corresponding vertices. If two parameters are null, for instance x 2 = x 3 = 0, the motion of point Q will be determined by: X = -h 1 Y = -h 1 (x 2 0 -x 2 1 )/(x 2 0 + x 2 1 ) Z = -h 1 (x 2 0 -x 2 1 )/(x 2 0 + x 2 1 ) (43) This means that point Q moves along the edge C 0 C 1 , covering the closed interval between the two vertices. If only one parameter is zero, for instance if x 0 = 0, the point Q will occupy the closed triangle C 1 C 2 C 3 . Eventually, if none of the parameters is null, then point Q will move inside the tetrahedron C. Let us consider an arbitrary point R in the moving platform such that: (x + h 1 )(y + h 1 )(z + h 1 ) = 0 (44) For example, take a point at the geometric center of the triangular-shaped platform, of coordinates r 1 R = [1, -2 3 h 1 , -2 Figure 6: Pseudo-tetrahedron D. X = -2 3 h 1 Y = -2 3 h 1 (x 2 0 + x 0 x 1 -x 2 1 ) x 2 0 + x 2 1 Z = -2 3 h 1 (x 2 0 -x 0 x 1 -x 2 1 ) x 2 0 + x 2 1 ( 46 ) This represents an ellipse e 01 that passes through the vertices D 0 and D 1 and lies in the plane X = -2 3 h 1 . Accordingly, the four vertices of the pseudo-tetrahedron D are joined by six ellipses, as shown in Fig. 6. When only one parameter is equal to zero, for instance x 0 = 0, the trajectory of point R will follow a particular surface, called the Steiner surface F 03 . It passes through the Then the expressions of the trajectory of point R are given by: X = -2 3 h 1 (x 2 1 -x 1 x 2 -x 1 x 3 -x 2 2 -x 2 3 ) (x 2 1 + x 2 2 + x 2 3 ) Y = 2 3 h 1 (x 2 1 + x 1 x 2 -x 2 2 + x 2 x 3 + x 2 3 ) (x 2 1 + x 2 2 + x 2 3 ) Z = 2 3 h 1 (x 2 1 + x 1 x 3 + x 2 2 + x 2 x 3 -x 2 3 ) (x 2 1 + x 2 2 + x 2 3 ) (47) Therefore, the trajectory of an arbitrary point of the moving platform forms the shape of pseudo-tetrahedron D and contains four vertices D i (i = 0, 1, 2, 3). These vertices are joined by six ellipses and any three of the vertices are linked by a Steiner surface F j (j = 0, 1, 2, 3). Any two Steiner surfaces (F i and F j ) share one ellipse e ij in common. Let us analyse the motion of a special point S that does not fulfill Eq. (44). For instance, the point S is at one vertex of the triangular-shaped platform, B 3 (Fig. 1). 
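The coupling between orientation and position can be checked numerically. The sketch below is ours: it builds the rotation matrix of the moving platform from the parameters (x0, x1, x2, x3), uses the coupled translation of Eqs. (39)/(49) with h1 = 1 m, and transforms a platform point whose moving-frame coordinates are assumed to be (-h1, -h1, -h1), consistent with the vertices listed in Eq. (42).

```python
import numpy as np

h1 = 1.0  # design parameter h1 = 1 m, as assigned above

def rotation(x0, x1, x2, x3):
    """Rotation matrix of the moving platform for a unit quadruple (x0, x1, x2, x3)."""
    return np.array([
        [x0**2 + x1**2 - x2**2 - x3**2, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
        [2*(x1*x2 + x0*x3), x0**2 - x1**2 + x2**2 - x3**2, 2*(x2*x3 - x0*x1)],
        [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0**2 - x1**2 - x2**2 + x3**2]])

def platform_origin(x0, x1, x2, x3):
    """Coupled translation of Eqs. (39)/(49)."""
    d = x0**2 + x1**2 + x2**2 + x3**2
    return (2.0 * h1 / d) * np.array([
        x0*x2 - x0*x3 + x1*x2 + x1*x3,
        -x0*x1 + x0*x3 + x1*x2 + x2*x3,
        x0*x1 - x0*x2 + x1*x3 + x2*x3])

def point_in_base_frame(x, q_local):
    x = np.asarray(x, dtype=float)
    x = x / np.linalg.norm(x)
    return platform_origin(*x) + rotation(*x) @ np.asarray(q_local, dtype=float)

q_local = np.array([-h1, -h1, -h1])   # assumed moving-frame coordinates of point Q
for x in np.eye(4):                   # x0 = 1, then x1 = 1, x2 = 1, x3 = 1
    print(x, "->", point_in_base_frame(x, q_local))
# The four printed positions reproduce the vertices C0, C1, C2, C3 of Eq. (42).
```

Setting q_local to the origin of the moving frame reproduces instead the positions of point P given by Eq. (49).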
If three parameters (among four parameters x i , i = 0, 1, 2, 3) are equal to zero, the positions of the point S are determined by: x 0 = 1, x 1 = x 2 = x 3 = 0 : r 0 E 0 = [1, -h 1 , -h 1 , 0] T x 1 = 1, x 0 = x 2 = x 3 = 0 : r 0 E 1 = [1, -h 1 , h 1 , 0] T x 2 = 1, x 0 = x 1 = x 3 = 0 : r 0 E 2 = [1, h 1 , -h 1 , 0] T x 3 = 1, x 0 = x 1 = x 2 = 0 : r 0 E 3 = [1, h 1 , h 1 , 0] T (48) Those points are coplanar and are the vertices of a rectangle as shown in Fig. 8. If two parameters are zero, for example x 2 = x 3 = 0, the path of point S is along the edge E 0 E 1 . Accordingly, in a general configuration the point S always moves in the plane Z = 0. Another special point which does not fulfill Eq. ( 44) is the origin of the moving frame P . According to Eq. ( 40), the positions of point P are given by: δ = x 2 0 + x 2 1 + x 2 2 + x 2 3 X = 1 δ 2h 1 (x 0 x 2 -x 0 x 3 + x 1 x 2 + x 1 x 3 ) Y = 1 δ 2h 1 (-x 0 x 1 + x 0 x 3 + x 1 x 2 + x 2 x 3 ) Z = 1 δ 2h 1 (x 0 x 1 -x 0 x 2 + x 1 x 3 + x 2 x 3 ) (49) If three parameters (among four parameters x i , i = 0, 1, 2, 3) are equal to zero, the positions of the point P will be always coincident with the origin of the fixed frame O. Paper JMR-14-1262, corresponding author's last name: CARO linked to the eighth equation in x 2 0 + 3x 2 1 -1 = 0, which is simply an ellipse equation in the space x 0 and x 1 . This ellipse equation can be parametrized by x 0 = cos(u) and x 1 = 1 3 sin(u) √ 3. As a result, the workspace of the manipulator performing the VDM is parametrized by the parameter u. Hence, the Study parameters are expressed as: x 0 = c(u) x 1 = 1 3 s(u) √ 3 y 0 = 2 3 s(u) 3 √ 3 y 1 = - 2 3 c(u)s(u) 2 x 2 = 1 3 s(u) √ 3 x 3 = 1 3 s(u) √ 3 y 2 = - 2 3 c(u)s(u) 2 y 3 = - 2 3 c(u)s(u) 2 (52) where s(u) = sin(u), c(u) = cos(u). Therefore, the possible poses of the moving platform can be expressed by the following transformation matrix: T =                 1 0 0 0 a 4 3 c(u) 2 -1 3 -2 3 s(u)(c(u) √ 3 -s(u)) -2 3 s(u)(c(u) √ 3 -s(u)) a -2 3 s(u)(c(u) √ 3 -s(u)) 4 3 c(u) 2 -1 3 -2 3 s(u)(c(u) √ 3 -s(u)) a -2 3 s(u)(c(u) √ 3 -s(u)) -2 3 s(u)(c(u) √ 3 -s(u)) 4 3 c(u) 2 -1 3                 (53) where a = 4 3 sin(u) 2 . Trajectory of the moving platform performing the Vertical Darboux Motion Let us consider the point B 1 moving in the plane X = 0 and the geometric center R of the moving platform as shown in Fig. 1. The paths followed by those two points are obtained by setting u = -π 2 . . . π 2 by using the transformation matrix T defined in Eq. ( 53). It appears that those two paths are different as shown in Fig. 10 4 . Point R moves 4 The animation of the trajectories is shown in: that is parallel to the plane X = 0. Let us take all segments joining point B 1 to any point of segment B 2 B 3 and plot the paths of all points on those segments. All those paths are planar ellipses, except the path followed by point R. Accordingly, the set of all paths forms a ruled surface called Right-conoid surface, which is illustrated in yellow in Fig. 11                 (55) The instantaneous screw axis of the moving platform is obtained from the components of matrix A as explained in [START_REF] Schadlbauer | Operation Modes in Lower Mobility Parallel Manipulators[END_REF], after normalization: ISA = 1 √ 3 1 √ 3 1 √ 3 All twists of the manipulator are collinear. As a consequence, the fixed axode generated by the ISA is a straight line of unit vector [1/ √ 3, 1/ √ 3, 1/ √ 3] T . 
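The collinearity of the twists can also be verified with a short computer-algebra script. The sketch below is ours, written under the assumption h1 = 1 m: the VDM pose is rebuilt from the Study parameters of Eq. (52) (a rotation with quaternion (cos u, sin u/sqrt(3), sin u/sqrt(3), sin u/sqrt(3)) and a translation of (4/3) sin(u)^2 along each axis, cf. Eq. (53)), the velocity operator of Eq. (54) is computed and the angular part of the twist is extracted; it reduces to a constant multiple of (1, 1, 1).

```python
import sympy as sp

u = sp.symbols('u', real=True)
c, s = sp.cos(u), sp.sin(u)
x0 = c
x1 = x2 = x3 = s / sp.sqrt(3)         # Study rotation parameters of Eq. (52)

R = sp.Matrix([
    [x0**2 + x1**2 - x2**2 - x3**2, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
    [2*(x1*x2 + x0*x3), x0**2 - x1**2 + x2**2 - x3**2, 2*(x2*x3 - x0*x1)],
    [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0**2 - x1**2 - x2**2 + x3**2]])
p = sp.Rational(4, 3) * s**2 * sp.Matrix([1, 1, 1])   # translation a of Eq. (53), h1 = 1 m

# Homogeneous transform in the paper's convention: the first coordinate is the projective one.
T = sp.Matrix.vstack(sp.Matrix([[1, 0, 0, 0]]), sp.Matrix.hstack(p, R))

A = T.diff(u) * T.inv()                                        # velocity operator, Eq. (54)
omega = sp.simplify(sp.Matrix([A[3, 2], A[1, 3], A[2, 1]]))    # angular part of the twist
print(omega)                                  # constant vector proportional to (1, 1, 1)
print(sp.simplify(omega / omega.norm()))      # ISA direction [1, 1, 1]/sqrt(3)
```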
In the moving coordinate frame, the moving axode corresponding to this motion is congruent with the fixed axode as depicted in Fig. 12. However, the moving axode does not appear clearly as it is congruent with the fixed axode. Indeed, the moving axode internally slides and rolls onto the fixed axode. Conclusions In this paper, an algebraic geometry method was applied to analyse the kinematics and the operation mode of the 3-RPS Cube manipulator. Primary decomposition of an ideal of eight constraint equations revealed that the manipulator has only one general operation mode. In this operation mode, the direct kinematics was solved and the number of solutions was obtained for arbitrary values of design parameters and joint variables. The singularity conditions were computed and represented in the joint space. It turns out that the manipulator reaches the singularity when the moving frame coincides with the fixed frame and all joint variables are equal. The uncontrolled motion of the moving platform in this singularity configuration was investigated and geometrically interpreted. Figure 1 : 1 Figure 1: The 3-RPS Cube Parallel Manipulator. r 2 = 2 2 m, and r 3 = 1.5 m. By considering only the real solutions, the manipulator has two solutions for those design parameters and joint variables. Fig. 2 ( 2 Fig.2(a) is along the axis A 1 . In Plücker coordinates, it is given by (p 0 : p 1 : p 2 : p 3 : p 4 : p 5 ) = (0.014 : 0.011 : -0.009 : 0.021 : -0.020 : 0.007). The rotational angle and translational distance along the screw axis A 1 are ϕ 1 = 5.725 rad and s 1 = -0.057 m, respectively. Figure 2 ( 2 Figure 2(b) illustrates the second solution of the direct kinematic problem, with (x 0 : x 1 : x 2 : x 3 : y 0 : y 1 : y 2 : y 3 ) = (0.962 : 0.056 : -0.021 : -0.265 : 0.001 : -0.293 : 0.232 : -0.076). The moving platform is transformed from the identity into the final pose via the axis A 2 in Fig. 2(b), with rotational angle ϕ 2 = 0.552 rad and translational distance s 2 = 0.01 m. The Plücker coordinate vector of the discrete screw motion is defined by (p 0 : p 1 : p 2 : p 3 : p 4 : p 5 ) = (-0.004 : 0.001 : 0.019 : -0.021 : 0.017 : -0.006). Figure 2 : 2 Figure 2: Solutions of the Direct Kinematics. Figure 3 : 3 Figure 3: Singularity Surface of L 1 . Figure 4 : 4 Figure 4: Singularity Pose at the Identity Condition. Figure 5 : 5 Figure 5: Tetrahedron C. Figure 7 : 7 Figure 7: Steiner Surface F 0 . Figure 8 : 8 Figure 8: Rectangle E. Figure 9 : 9 Figure 9: Steiner Surface G 0 . Figure 10 : 10 Figure 10: Trajectories of points B 1 and R. 5 . 5 This type of ruled surfaces is generated by moving a straight line such that it intersects perpendicularly a fixed straight line, called the axis of the Right-conoid surface. The fixed straight line followed by point R is axis of the Right-conoid surface. Figure 11 : 11 Figure 11: Right-conoid Surface of the VDM. 7. 2 2 Axodes of the manipulator performing the Vertical Darboux MotionHaving the parametrization of the VDM performed by the 3-RPS Cube parallel manipulator in terms of Study parameters, it is relatively easy to compute the ISA. The possible poses of the moving platform as functions of time in this special motion only allow the orientations that are given by one parameter u. The ISA are obtained from the entries of the velocity operator:A = Ṫ T -1(54)By setting u = t, matrix A becomes: Figure 12 : 12 Figure 12: ISA Axodes of VDM. 1 , y 2 , y 3 , which appear in matrix M, are called Study parameters. 
These parameters make it possible to parametrize SE(3) with dual quaternions. The Study kinematic mapping maps each spatial Euclidean displacement of SE(3) via transformation matrix M onto a projective point X [x 0 : x 1 : x 2 : x 3 : y 0 : y 1 : y 2 : y 3 ] in the 6-dimensional Study quadric S ∈ P 7 [14], such that: SE(3) → X ∈ P 7 Paper JMR-14-1262, corresponding author's last name: CARO h 1 , -2 3 h 1 ] T . If any of the three parameters is zero, then the corresponding positions of point R will become:x 0 = 1, x 1 = x 2 = x 3 = 0 : r 0 D 0 = [1, -2 3 h 1 , -2 3 h 1 , -2 3 h 1 ] T x 1 = 1, x 0 = x 2 = x 3 = 0 : r 0 D 1 = [1, -2 3 h 1 , 2 3 h 1 , 2 3 h 1 ] T x 2 = 1, x 0 = x 1 = x 3 = 0 : r 0 D 2 = [1, 2 3 h 1 , -2 3 h 1 , 2 3 h 1 ] T x 3 = 1, x 0 = x 1 = x 2 = 0 : r 0 D 3 = [1, 2 3 h 1 , 2 3 h 1 , -2 3 h 1 ] T(45)D 0 , D 1 , D 2 and D 3 are the vertices of a pseudo-tetrahedron D as shown in Fig.6and it was verified that these vertices amount to the singularities of the 3-RPS Cube manipulator. If two parameters are equal to zero, for instance x 2 = x 3 = 0, the point Q will move along the edge C 0 C 1 , while the path of point R will be given by:Paper JMR-14-1262, corresponding author's last name: CARO The motion animation of point R that is bounded by the Steiner surface, is shown in: http://www.irccyn.ec-nantes.fr/ ~caro/ASME_JMR/JMR_14_1262/animation_steiner.gif http://www.irccyn.ec-nantes.fr/ ~caro/ASME_JMR/JMR_14_1262/animation_trajectories.gif[START_REF] Huang | Analysis of Instantaneous Motions of Deficient Rank 3-RPS Parallel Manipulators[END_REF] The animation of the right-conoid surface is shown in: http://www.irccyn.ec-nantes.fr/ ~caro/ASME_JMR/JMR_14_1262/animation_rightconoid.gif Paper JMR-14-1262, corresponding author's last name: CARO Acknowledgments The authors would like to acknowledge the support of the Österreichischer Austauschdienst/OeAD, the French Ministry for Foreign Affairs (MAEE) and the French Ministry for Higher Education and Research (MESR) (Project PHC AMADEUS). Moreover, Prof. Manfred Husty acknowledges the support of FWF grant P 23832-N13, Algebraic Methods in Collision Detection and Path Planning. T , which is a special point in the cube of a moving frame Σ 1 as shown in Fig. 5. Then, its positions with respect to the fixed frame Σ 0 according to Eq. ( 40) are: The coordinates of point Q depend on (x 0 , x 1 , x 2 , x 3 ). There are four possible positions corresponding to the three parameters (among four parameters x i , i = 0, 1, 2, 3) are equal to zero. These corresponding positions of Q are: Paper JMR-14-1262, corresponding author's last name: CARO Vertical Darboux Motion The condition for the manipulator to generate the VDM is that all prismatic lengths are equal, i.e., r 1 = r 2 = r 3 . By solving the direct kinematics of the manipulator with the same prismatic lengths, the Study parameters obtained to perform the VDM yield By substituting those values into the ideal I, the set of eight constraint equations becomes: It follows from Eq. (50) that the first three constraint equations are the same. Likewise, the next three equations are identical. Mathematically, one has to find the case of 1-dof motion, as known as cylindrical motion, with one parameter that describes the VDM. Equation (50) can be solved linearly for the variables R i , y 0 , y 1 in terms of x 0 , x 1 , as follows: From Eq. (51), it is apparent that the manipulator can perform the VDM if and only if all prismatic lengths are the same. The remaining Study parameters x 0 and x 1 are still
43,319
[ "949366", "10659", "949367", "16879" ]
[ "21439", "490478", "481388", "473973", "490478", "473973" ]
01757510
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01757510/file/ARK2018_Rasheed_Long_Marquez_Caro.pdf
Tahir Rasheed email: tahir.rasheed@ls2n.fr Philip Long email: p.long@northeastern.edu David Marquez-Gamez email: david.marquez-gamez@irt-jules-verne.fr Stéphane Caro email: stephane.caro@ls2n.fr Stéphane Caro Kinematic Kinematic Modeling and Twist Feasibility of Mobile Cable-Driven Parallel Robots Keywords: Cable-Driven Parallel Robots, Mobile Bases, Kinematic Modeling, Available Twist Set come L'archive ouverte pluridisciplinaire Introduction A Cable-Driven Parallel Robot (CDPR) is a type of parallel manipulator with limbs as cables, connecting the moving-platform with a fixed base frame. The platform is moved by appropriately controlling the cable lengths or tensions. CDPRs contains numerous advantages over conventional robots, e.g, high accelerations [START_REF] Kawamura | High-speed manipulation by using parallel wire-driven robots[END_REF], large payload capabilities [START_REF] Albus | The nist robocrane[END_REF], and large workspace [START_REF] Lambert | Implementation of an aerostat positioning system with cable control[END_REF]. However, a major drawback in classical CDPRs having fixed cable layout, i.e, fixed exit points and cable configuration, is the potential collisions between the cables and the surrounding environment which can significantly reduce the robot workspace. Better performances can be achieved with an appropriate CDPR architecture. Cable robots with a possibility of undergoing a change in their geometric structure are known as Reconfigurable Cable-Driven Parallel Robots (RCDPRs). Different strategies have been proposed for maximizing the robot workspace or increasing platform stiffness in the recent work on RCDPRs [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF]. However, reconfigurability is typically performed manually for most existing RCDPRs. To achieve autonomous reconfigurability of RCDPRs, a novel concept of Mobile Cable-Driven Parallel Robots (MCDPRs) was introduced in [START_REF] Rasheed | Tension distribution algorithm for planar mobile cable-driven parallel robots[END_REF]. The first MCDPR prototype has been designed and built in the context of Echord++ FASTKIT project 1 . The targeted application for such MCDPR prototype is logistics. Some papers deals with the velocity analysis of parallel manipulators [START_REF] Merlet | Efficient computation of the extremum of the articular velocities of a parallel manipulator in a translation workspace[END_REF]. However, few focus on the twist analysis of CDPRs [START_REF] Lessanibahri | Twist feasibility analysis of cabledriven parallel robots[END_REF]. This paper deals with the kinematic modeling of MCDPRs that is required to analyze the kinematic performance of the robot. The paper is organized as follows. Section 2 presents the kinematic model of MCDPRs. Section 3 deals with the determination of the Available Twist Set (ATS) for MCDPRs using the kinematic modeling of the latter. The ATS can be used to obtain the twist capacities of the moving-platform. Section 4 presents the twist capacities of the moving-platform for the MCDPRs under study. Finally, conclusions are drawn and future work is presented in Section 5. Kinematic modeling A MCDPR is composed of a classical CDPR with m cables and a n degree-offreedom (DoF) moving-platform mounted on p Mobile Bases (MBs). The jth mobile base is denoted as M j , j = 1, . . . , p. The ith cable mounted onto M j is named as C i j , i = 1, . . . , m j , where m j denotes the number of cables carried by M j . 
u i j denotes the unit vector along the cable C i j . Each jth mobile base along with its m j number of cables is denoted as jth PD (pd j ) module. Each pd j consists of a proximal (prox j ) and a distal (dist j ) module. dist j consists of m j cables between M j and the moving-platform. In this paper, cables are assumed to be straight and massless, thus can be modeled as a Universal-Prismatic-Spherical (UPS) kinematic chain. Generally MBs are four-wheeled planar robots with two-DoF translational motions and one-DoF rotational motion, thus, prox j can be modeled as a virtual RPP kinematic chain between the base frame F 0 and the frame F b j attached to M j . An illustrative example with p = 4 MBs and m = 8 cables is shown in Fig. 1a. A general kinematic architecture of a MCDPR is shown in Fig. 1b. Kinematics of the Distal Module A classical CDPR is referred as distal modules in MCDPR. The twist 0 t dist P of the moving-platform due to the latter is expressed as [START_REF] Gouttefarde | Interval-analysis-based determination of the wrench-feasible workspace of parallel cable-driven robots[END_REF][START_REF] Roberts | On the inverse kinematics, statics, and fault tolerance of cable-suspended robots[END_REF]: A 0 t dist P = l, (1) where A is the (m × n) parallel Jacobian matrix, containing the actuation wrenches due to the cables on the mobile platform. The twist 0 t P = [ω ω ω, ṗ] T is composed of the platform angular velocity vector ω ω ω = [ω x , ω y , ω z ] T and linear velocity vector ṗ = [ ṗx , ṗy , ṗz ] T , expressed in F 0 . 0 t dist P denotes the platform twist due to the distal module motion. l is a m-dimensional cable velocity vector. Here, Eq. ( 1) can be expressed as:           A 1 A 2 . . . A j . . . A p           0 t dist P =           l1 l2 . . . l j . . . lp           , (2) where l j = [ l1 j , l2 j , . . . , lm j j ] T . A j is expressed as: A j =      [( 0 b 1 j -0 p) × u i j ] T u T 1 j [( 0 b 2 j -0 p) × u 2 j ] T u T 2 j . . . [( 0 b m j j -0 p) × u m j j ] T u T m j j      , (3) where ith row of A j is associated with the actuation wrench of the ith cable mounted onto M j . 0 b i j denotes the Cartesian coordinate vector of the anchor points B i j in F 0 . 0 p denotes the platform position in F 0 . Kinematic modeling of a MCDPR The twist 0 t j P of the moving-platform due to pd j can be expressed in F 0 as: 0 t j P = 0 t prox j P + 0 t dist j P (4) where 0 t prox j P ( 0 t dist j P , resp.) is the twist of the moving-platform due to the motion of the proximal (distal, resp.) module of pd j expressed in F 0 . Equation ( 4) take the form: 0 t j P = b j Ad P 0 t prox j b j + 0 R b j b j t dist j P (5) where b j Ad P is called the adjoint matrix, which represents the transformation matrix between twists expressed in F b j and twist expressed in F P , b j Ad P = I 3 0 3 -b j rP I 3 [START_REF] Kawamura | High-speed manipulation by using parallel wire-driven robots[END_REF] where b j rP is the cross-product matrix of vector --→ 0 b j P expressed in F 0 . b j t dist j P is the moving-platform twist due to dist j expressed in F b j . The augmented rotation matrix ( 0 R b j ) is used to express b j t dist j P in F 0 : 0 R b j = 0 R b j 0 3 0 3 0 R b j (7) where 0 R b j is the rotation matrix between frames F b j and F 0 . As the Proximal module is being modeled as a virtual RPP limb, 0 t prox j b j from Eq. 
( 4) can be expressed as: 0 t prox j b j = J b j qb j (8) where J b j is a (6 × 3) serial Jacobian matrix of prox j and qb j is the virtual joint velocities of the latter, namely, 0 t prox j b j = k 0 0 3 0 3 k 0 × 0 p 0 R b j i 0 0 R b j j 0   θ j ρ1 j ρ2 j   (9) where i 0 , j 0 and k 0 denotes the unit vector along x 0 , y 0 and z 0 respectively. Upon multiplication of Eq. ( 5) with A j : A j 0 t j P = A j b j Ad P J b j qb j + A j 0 R b j b j t dist j P . (10) As A j 0 R b j b j t dist j P represents the cable velocities of the dist j (see Eq. ( 2)), Eq. ( 10) can also be expressed as: A j 0 t j P = A j b j Ad P J b j qb j + l j . ( 11 ) The twist of the moving-platform t P and the twists generated by the limbs are the same, namely, 0 t 1 P = 0 t 2 P = 0 t 3 P . . . = 0 t j P . . . = 0 t p P = t P (12) Thus, the twist of the moving-platform in terms of all the p number of limbs can be expressed as:      A 1 A 2 . . . A p      t P =      A 1 b1 Ad P J b1 0 0 • • • 0 0 A 2 b2 Ad P J b2 0 • • • 0 . . . . . . . . . . . . 0 0 0 • • • A p bp Ad P J bp      qb + l (13) where qb = [ qb1 , qb2 , . . . , qbp ] T and l = [ l1 , l2 , . . . , lp ] T . Equation ( 13) can be expressed in the matrix form as: At P = B b qb + l ( 14 ) At P = B q (15) where B = [B b I m ] is a (m × (3p + m))-matrix while q = [ qb l] T is a (3p + m)dimensional vector containing all joint velocities. Equation (15) represents the first order kinematic model of MCDPRs. Available Twist Set of MCDPRs This section aims at determining the set of available twists for MCDPRs that can be generated by the platform. For a classical CDPR, the set of twist feasible poses of its moving platform are known as Available Twist Set (ATS). A CDPR posture is called twist-feasible if all the twists within a given set, can be produced at the platform, within given joint velocity limits [START_REF] Lessanibahri | Twist feasibility analysis of cabledriven parallel robots[END_REF]. According to [START_REF] Lessanibahri | Twist feasibility analysis of cabledriven parallel robots[END_REF], ATS of a CDPR corresponds to a convex polytope that can be represented as the intersection of the half-spaces bounded by its hyperplanes known as Hyperplane-Shifting Method (HSM) [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. Although HSM can be utilized to determine the ATS of MCDPRs, the approach in [START_REF] Lessanibahri | Twist feasibility analysis of cabledriven parallel robots[END_REF] is not directly applicable due to the difference in the kinematic models of CDPR (Eq. 1) and MCDPR (Eq. 15) as matrix B = I. The kinematic model of the MCDPRs is used to determine the ATS of the moving-platform. In case m = n, A is square, Eq. ( 15) can be expressed as: t P = A -1 B q =⇒ t P = J q (16) where J is a Jacobian matrix mapping the joint velocities onto the platform twist. The ATS will correspond to a single convex polytope, constructed under the mapping of Jacobian J. In case m = n, matrix A is not square, however there exist in total C n m (n × n) square sub-matrices of matrix A, denoted by A k , k = 1, . . . ,C n m , obtained by removing mn rows from A. For each sub-matrix we can write: tk p = A k -1 B k =⇒ tk p = J k q, k = {1, . . . ,C n m } (17) where tk p is the twist generated by the kth sub-matrix A k out of C n m (n × n) square sub-matrices of matrix A. 
B k is a sub matrix of B using corresponding rows that are chosen in A k from A. HSM in [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF] is directly applicable to compute all the hyperplanes for C n m convex polytopes knowing the minimum and maximum joint velocity limits. Thus, the ATS of MCDPRs is the region bounded by all of the foregoing hyperplanes. Results This section deals with the twist feasibility analysis of two different case studies. From the ATS acquired using the kinematic model of a given MCDPR configuration, we aim to study the difference in the moving platform twist considering fixed and moving MBs. The first case study is a planar MCDPR with a point mass end-effector shown in Fig. 2a. The MBs have only one degree of freedom along i 0 . The joint velocity limits are defined as: -0.8 m.s -1 ≤ ρ1 j ≤ 0.8m.s -1 , -2m.s -1 ≤ li j ≤ 2m.s -1 , i = {1, 2}, j = {1, 2}, (18) x Matrix A has six 2 × 2 sub-matrices Thus, ATS is the region bounded by the hyperplanes formed by these six convex polytopes. The difference of ATS between fixed (correspond to a classical CDPR) and moving MBs can be observed in Figs. 2b and2c. To illustrate the difference, a Required Twist Set (RTS) equal to [1.15m.s -1 , 1.675m.s -1 ] T is considered, depicted by a red point in Figs. 2b and2c. For fixed MBs, it should be noted that RTS is outside the ATS. By considering moving MBs, RTS is within the ATS. The same approach is adopted to determine the ATS for a given FASTKIT configuration in Fig. 3a. The joint velocity limits are defined as: -0.2 m.s -1 ≤ θ j , ρ1 j , ρ2 j ≤ 0.2 m.s -1 , j = {1, 2}, (19) -2 m.s -1 ≤ li j ≤ 2 m.s -1 , j = {1, 2}, i = {1, . . . , 4}, (20) The maximum absolute twist that the platform can achieve in each Cartesian direction by considering fixed and moving MBs are illustrated in Figs. 3b and3c. The maximum absolute wrench of the moving platform is illustrated in red where f x , f y , f z and m x , m y , m z represent the forces and the moments that can be generated by the cables onto the moving platform. For the analysis, the cable tensions are bounded between 0 and 20 N respectively. It can be observed the twist capacity of the moving-platform is increased when MBs are moving. On the contrary, high velocity capability of the moving-platform in certain directions also results in very less wrench capability in the respective directions. Thus, this velocity is unattainable outside certain dynamic conditions. Conclusion This paper dealt with the kinematic modeling of Mobile Cable-Driven Parallel Robots (MCDPRs) that can be used to analyze its kinematic performance. The developed kinematic model was utilized to determine the Available Twist Set (ATS) of MCDPRs. It considers the joint velocity limits for cables and the Mobile Bases (MBs). Using ATS, the twist capacities of the moving-platform was determined. Two case studies have been used in order to illustrate the effect of the moving MBs onto the platform twist. Future work will focus the trajectory planning of of MCD-PRs and experimental validations with FASTKIT prototype. Fig. 1 : 1 Fig. 1: (a) MCDPR Parameterization (b) Kinematic Architecture of MCDPRs, active joints are highlighted in gray, passive joints are highlighted in white 4. 1 1 Case study: p = 2, m = 4 and n = 2 DoF MCDPR Fig. 2 : 2 Fig. 
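For reference, the enumeration of square sub-matrices used in Eq. (17), six 2 x 2 sub-matrices for the planar case study above, can be sketched as follows. This is our own illustration with placeholder matrices rather than the actual prototype data; the hyperplane-shifting step that bounds the ATS is not reproduced here.

```python
import itertools
import numpy as np

def twist_mappings(A, B):
    """All J_k = A_k^{-1} B_k of Eq. (17), one per invertible n x n sub-matrix of A."""
    m, n = A.shape
    mappings = []
    for rows in itertools.combinations(range(m), n):
        A_k, B_k = A[list(rows), :], B[list(rows), :]
        if abs(np.linalg.det(A_k)) > 1e-9:       # keep only invertible row choices
            mappings.append((rows, np.linalg.solve(A_k, B_k)))
    return mappings

# Placeholder data mimicking the first case study: m = 4 cables, n = 2 DoF
# point-mass platform, p = 2 mobile bases with one actuated direction each.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))                  # parallel Jacobian of Eq. (2)
B_b = rng.standard_normal((4, 2))                # blocks A_j * Ad * J_bj of Eq. (13)
B = np.hstack((B_b, np.eye(4)))                  # B = [B_b  I_m], Eq. (14)

for rows, J_k in twist_mappings(A, B):
    print(rows, J_k.shape)                       # up to C(4, 2) = 6 mappings of size 2 x 6
```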
2: (a) Configuration under study of p = 2, m = 4 and n = 2 MCDPR (b) ATS in green for fixed MBs (c) ATS in green for moving MBs 4. 2 2 Case study: p = 2, m = 8 and n = 6 DoF MCDPR Fig. 3 : 3 Fig. 3: (a) A FASTKIT configuration (b, c) Maximum absolute twist and wrenches that FASTKIT platform can generate about each Cartesian direction https://www.fastkit-project.eu/ Acknowledgements This research work is part of the European Project ECHORD++ "FASTKIT" dealing with the development of collaborative and mobile cable-driven parallel robots for logistics.
14,650
[ "10659" ]
[ "481388", "473973", "29479", "235335", "481388", "473973" ]
01757514
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01757514/file/3RUUanalysis_final.pdf
Thomas Stigger email: thomas.stigger@uibk.ac.at Abhilash Nayak email: abhilash.nayak@ls2n.fr Stéphane Caro email: stephane.caro@ls2n.fr Philippe Wenger email: philippe.wenger@ls2n.fr Martin Pfurner email: martin.pfurner@uibk.ac.at Manfred Husty email: manfred.husty@uibk.ac.at Algebraic Analysis of a 3-RUU Parallel Manipulator Keywords: 3-RUU, kinematic analysis, direct kinematics, algebraic geometry à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction For theoretical and practical purposes, the kinematic analysis of a parallel manipulator (PM) is essential to understand its motion behavior. Kinematic constraints can be transformed via Study's kinematic mapping into algebraic constraint equations. Every configuration of the PM is thereby mapped to a point in a projective space, P 7 [START_REF] Husty | 21st Century Kinematics, chap. Kinematics and Algebraic geometry[END_REF][START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF]. Consequently, well developed concepts of algebraic geometry [START_REF] Cox | Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra[END_REF] can be used to interpret the algebraic constraint equations to obtain necessary information about the PM. In that vein, many PMs were investigated using algebraic geometry concepts. Resultant methods were adopted to solve the direct kinematics of Stewart-Gough platforms [START_REF] Husty | An algorithm for solving the direct kinematics of general Stewart-Gough platforms[END_REF]. A complete kinematic analysis including the characterization of operation modes, solutions to direct kinematics and determination of singular poses was performed for the 3-RPS PM [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF][START_REF] Schadlbauer | The 3-RPS parallel manipulator from an algebraic viewpoint[END_REF], the 3-RPS cube PM [START_REF] Nurahmi | Kinematic analysis of the 3-RPS Cube Parallel Manipulator[END_REF] and 3-PRS PMs with different arrangements of prismatic joints [START_REF] Nurahmi | Operation modes and singularities of 3-PRS parallel manipulators with different arrangements of P-joints[END_REF]. In the foregoing papers, the prismatic joints were considered to be actuated, which makes the analysis inherently algebraic. A more challenging kinematic analysis of an over-constrained 4-RUU PM with square base and moving platform was accomplished by decomposing it into two 2-RUU PMs [START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF]. The constraint equations of a 3-RUU PM are derived in this paper and its direct kinematics problem is solved. Nevertheless, a complete characterization of the manipulator operation modes has not been obtained yet. The paper is organized as follows: Section 2 describes the manipulator architecture. Section 3 deals with the derivation of algebraic constraint equations with two approaches and their comparison. Section 4 presents the solutions to direct kinematics for arbitrary design parameters and hints the recognition of a translational operation mode. Manipulator Architecture The 3-RUU PM is shown in Figure 1a. Each limb consists of a revolute joint and two universal joints mounted in series with the first revolute joint as the active joint. 
The moving platform and the fixed base form equilateral triangles with vertices C i and A i , respectively, i = 1, 2, 3. The unit vectors of the revolute joint axes within the i-th limb are denoted as s i j , i = 1, 2, 3; j = 1, ..., 5. s i5 and s i1 are tangent to the circumcircles (with centers P and O) of the moving platform and the base triangles, respectively. Vectors s i1 and s i2 are always parallel, so are vectors s i3 and s i4 . The origin of the fixed coordinate frame, F O is at O and the z O -axis lies along the normal to the base plane whereas the origin of the moving coordinate frame F P is at P and the z P -axis lies along the normal to the moving platform plane. x O and x P axes are directed along OA 1 and PC 1 , respectively. r 0 and r 1 are the circumradii of base and the moving platform, respectively. a 1 and a 3 are the link lengths. θ i1 is the angle of rotation of the first revolute joint about the axis represented by vector s i1 measured from the base plane whereas θ i2 is the angle of rotation of the second revolute joint about the axis represented by vector s i2 measured from the first link. s 35 A 3 B 3 C 3 s x O z O y O O x P z P y P O P F P A 1 B 1 C 1 A 2 B 2 C 2 θ 11 θ 12 h 1 h 2 a 1 F O a 3 (a) The 3-RUU PM in a general configuration s 3 s 2 s 1 s 4 s 5 x 0 y 0 z 0 x 1 y 1 z 1 θ 1 θ 2 A B C F 1 F 0 a 1 a 3 (b) A RUU limb Constraint Equations The constraint equations of the 3-RUU PM are derived using a geometrical approach and the Linear Implicitization Algorithm (LIA) [START_REF] Walter | On implicitization of kinematic constraint equations[END_REF]. First, canonical constraint equations for a limb of the PM are derived by attaching fixed and moving coordinate frames to the two extreme joints of a RUU limb as shown in Fig. 1b. Each U-joint is characterized by two revolute joints with orthogonal and intersecting axes and Denavit-Hartenberg (DH) convention is used to parameterize each limb. F 0 and F 1 are the fixed and the moving coordinate frames with their corresponding z-axes along the first and the last revolute joint axes, respectively. Later on, general constraint equations are derived for the whole manipulator. Derivation Using a Geometrical Approach Canonical Constraints In order to derive the geometric constraints for a RUU limb, the homogeneous coordinates4 of points A, B,C (a, b, c, respectively) and vectors s j , j = 1, ..., 5, shown in Fig. 1b are expressed as follows: 0 a = [1, 0, 0, 0] T 0 b = [1, a 1 cos(θ 1 ), a 1 sin(θ 1 ), 0] T 1 c = [1, 0, 0, 0] T 0 s 1 = [0, 0, 0, 1] T 0 s 2 = [0, 0, 0, 1] T 0 s 3 = [0, cos(θ 1 + θ 2 ), sin(θ 1 + θ 2 ), 0] T 0 s 4 = [0, cos(θ 1 + θ 2 ), sin(θ 1 + θ 2 ), 0] T 1 s 5 = [0, 0, 0, 1] T (1) where θ 1 and θ 2 are the angles of rotation of the first and the second revolute joints. Study's kinematic mapping is used to express the vectors c and s 5 in the fixed coordinate frame F 0 , using the transformation matrix 0 T 1 consisting of Study parameters x i and y i , i = 0, 1, 2, 3: 0 c = 0 T 1 1 c and 0 s 5 = 0 T 1 1 s 5 , where 0 T 1 = 1 ∆        ∆ 0 0 0 d 1 x 0 2 + x 1 2 -x 2 2 -x 3 2 -2 x 0 x 3 + 2 x 1 x 2 2 x 0 x 2 + 2 x 1 x 3 d 2 2 x 0 x 3 + 2 x 1 x 2 x 0 2 -x 1 2 + x 2 2 -x 3 2 -2 x 0 x 1 + 2 x 2 x 3 d 3 -2 x 0 x 2 + 2 x 1 x 3 2 x 0 x 1 + 2 x 2 x 3 x 0 2 -x 1 2 -x 2 2 + x 3 2        (2) with ∆ = x 0 2 + x 1 2 + x 2 2 + x 3 2 = 0 and d 1 = -2 x 0 y 1 + 2 x 1 y 0 -2 x 2 y 3 + 2 x 3 y 2 , d 2 = -2 x 0 y 2 + 2 x 1 y 3 + 2 x 2 y 0 -2 x 3 y 1 , d 3 = -2 x 0 y 3 -2 x 1 y 2 + 2 x 2 y 1 + 2 x 3 y 0 . 
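For completeness, the transformation matrix of Eq. (2) is easily reproduced in a computer-algebra system. The short sympy sketch below is ours; it builds 0T1 from the Study parameters and checks that its rotation block is orthogonal for any non-zero quadruple (x0, x1, x2, x3).

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3', real=True)
y0, y1, y2, y3 = sp.symbols('y0 y1 y2 y3', real=True)

Delta = x0**2 + x1**2 + x2**2 + x3**2
d1 = -2*x0*y1 + 2*x1*y0 - 2*x2*y3 + 2*x3*y2
d2 = -2*x0*y2 + 2*x1*y3 + 2*x2*y0 - 2*x3*y1
d3 = -2*x0*y3 - 2*x1*y2 + 2*x2*y1 + 2*x3*y0

T01 = sp.Matrix([
    [Delta, 0, 0, 0],
    [d1, x0**2 + x1**2 - x2**2 - x3**2, -2*x0*x3 + 2*x1*x2,  2*x0*x2 + 2*x1*x3],
    [d2,  2*x0*x3 + 2*x1*x2, x0**2 - x1**2 + x2**2 - x3**2, -2*x0*x1 + 2*x2*x3],
    [d3, -2*x0*x2 + 2*x1*x3,  2*x0*x1 + 2*x2*x3, x0**2 - x1**2 - x2**2 + x3**2]]) / Delta

R = T01[1:, 1:]
print(sp.simplify(R.T * R))   # identity matrix: the rotation block is orthogonal
```

The same matrix can then be used to express 1c and 1s5 in F0 and to evaluate the constraint equations numerically for candidate poses.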
All vectors are now expressed in the base coordinate frame F 0 and hence the geometric constraints can be derived. The following constraints are already satisfied: 1. The first and the second revolute joint axes are parallel: s 1 = s 2 2. Third and fourth revolute joint axes are parallel: s 3 = s 4 3. -→ AB is perpendicular to the first and the second revolute joint axes: (b -a) T s 1 = 0 4. The second revolute joint axis is perpendicular to the third revolute joint axis: s T 2 s 3 = 0 5. Length of the link AB is a 1 : ||b -a|| 2 = a 1 The remaining geometric constraints are derived as algebraic equations 5 : The second revolute joint axis, the fifth revolute joint axis and link BC lie in the same plane. In other words, the scalar triple product of the corresponding vectors is null: g 1 : (b -c) T (s 2 × s 5 ) = 0 (3) Vector -→ BC is perpendicular to the third and the fourth revolute joint axes: g 2 : (b -c) T s 4 = 0 (4) The fourth and the fifth revolute joint axes are perpendicular: g 3 : s T 4 s 5 = 0 (5) Length of the link BC is a 3 : g 4 : ||b -c|| -a 3 = 0 (6) Furthermore, Study's quadric equation S : x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 must be taken into account. The five geometric relations g 1 , g 2 , g 3 , g 4 , S describe the RUU limbs of the PM under study. As a matter of fact, when the first revolute joint is actuated, each limb has four DoF and it should be possible to describe it by only two constraint equations. Eqs. ( 4) and ( 5) contain the passive joint variable v 2 along with the active joint variable v 1 . Eliminating v 2 from g 2 and g 3 results in an equation that amounts to g 1 . Therefore, the two constraint equations in addition to the Study quadric describing a RUU limb are g 1 and g 4 , namely Eqs. ( 3) and [START_REF] Nurahmi | Operation modes and singularities of 3-PRS parallel manipulators with different arrangements of P-joints[END_REF]. The polynomials g 1 , g 4 and S define an ideal, which is a subset of all polynomials in the Study parameters: I 1 = g 1 , g 4 , S ⊆ k[x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ]. (7) Explicitly these polynomials take the form: g 1 := (x 0 x 1 -x 2 x 3 ) (v 1 2 -1) + (-2x 0 x 2 -2x 1 x 3 ) v 1 (x 2 0 + x 2 1 + x 2 2 + x 2 3 )a 1 -2((x 2 0 + x 2 3 )(x 1 y 1 + x 2 y 2 ) + 2(x 2 1 + x 2 2 )(x 0 y 0 + x 3 y 3 ))(v 2 1 -1) = 0, (8) g 4 := -x 0 2 + x 1 2 + x 2 2 + x 3 2 v 1 2 + 1 a 1 2 + 4 (y 1 x 0 -y 0 x 1 + y 3 x 2 -y 2 x 3 ) v 1 2 + 8 (-x 0 y 2 + x 1 y 3 + x 2 y 0 -x 3 y 1 ) v 1 + 4 (y 2 x 3 -y 3 x 2 -y 1 x 0 + y 0 x 1 )) a 1 + x 0 2 + x 1 2 + x 2 2 + x 3 2 a 3 2 -4 y 2 2 + y 3 2 + y 0 2 + y 1 2 v 1 2 + 1 = 0. ( 9 ) General Constraints g 1 and g 4 are the constraint equations of an RUU limb with specially adapted coordinate systems. To assemble the PM one has to transform these equations so that the limbs get into the positions of Fig. 1a. It is well known [9] that 5 cosine and sine of angles are substituted by tangent half-angles to render the equations algebraic; cos (θ i ) = 1-v 2 i 1+v 2 i sin(θ i ) = 2v i 1+v 2 i where v i = tan(θ i /2), i = 1, 2 the necessary transformations are linear in the image space coordinates. Due to lack of space these transformations are only shown for the derivation of the constraint equations using the LIA in Sec.3.2 (Eq.14). 
One ends with six constraint equations g i1 , g i4 , i = 1, 2, 3 which form together with S = 0 and the normalization condition N : x 2 0 + x 2 1 + x 2 2 + x 2 3 -1 = 0 an ideal I = g 11 , g 14 , g 21 , g 24 , g 31 , g 34 , S , N ⊆ k[x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ] (10) Derivation Using a Linear Implicitization Algorithm Canonical Constraints The canonical pose of a RUU limb is chosen such that the rotation axes coincide with the z-axes and the common normals of these axes are in the directions of the x-axes of the coordinate systems in order to derive the canonical constraint equations using LIA. It computes implicit equations of lowest possible degree out of parametric equations by comparing coefficients with an arbitrary system of implicit equations with the same degree. An extended explanation is given in [START_REF] Walter | On implicitization of kinematic constraint equations[END_REF]. To describe the RUU kinematic chain using the usual Denavit-Hartenberg (DH) parameters, the following 4 × 4 matrices are defined: T = M i .G i , i = 1, . . . , 5, where the M i -matrices describe a rotation about the z-axis with u i as the rotation angle. The G i -matrices describe the transformation of one joint coordinate system to the next. M i =     1 0 0 0 0 cos (u i ) -sin (u i ) 0 0 sin (u i ) cos (u i ) 0 0 0 0 1     , G i =     1 0 0 0 a i 1 0 0 0 0 cos (α i ) -sin (α i ) d i 0 sin (α i ) cos (α i )     . (11) The parameters in G i are DH parameters encoding the distance along x-axis a i , the offset along z-axis d i and the twist angle between the axes α i . The DH parameters for the RUU limb are α 2 = π 2 , α 4 = -π 2 , d 1 = a 2 = d 2 = d 3 = a 4 = d 4 = α 1 = α 3 = 0. Computing the Study-Parameters based on the transformation matrix T yields the parametric representation of the limb [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF]. Applying LIA yields the following quadratic canonical constraint equations S , f 1 and f 2 : J = f 1 , f 2 , S ⊆ k[x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ], (12) where f 1 := (x 0 x 1 -x 2 x 3 ) (v 1 2 -1) -(2x 0 x 2 + 2x 1 x 3 ) v 1 a 1 + 2 v 1 2 + 1 (x 0 y 0 + x 3 y 3 ) = 0 f 2 := -x 0 2 + x 1 2 + x 2 2 + x 3 2 v 1 2 + 1 a 1 2 + 4 (y 1 x 0 -y 0 x 1 + y 3 x 2 -y 2 x 3 ) v 1 2 + 8 (-x 0 y 2 + x 1 y 3 + x 2 y 0 -x 3 y 1 ) v 1 + 4 (y 2 x 3 -y 3 x 2 -y 1 x 0 + y 0 x 1 )) a 1 + x 0 2 + x 1 2 + x 2 2 + x 3 2 a 3 2 -4 y 2 2 + y 3 2 + y 0 2 + y 1 2 v 1 2 + 1 = 0 (13) General Constraints To obtain the constraint equations of the whole mechanism from the canonical constraint equations, coordinate transformations are applied in the base and moving platform. To facilitate the comparison of the constraint equations derived by two different approaches, the coordinate transformations should be consistent with the global frames F O and F P as shown in Fig. 1a. 
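Returning for a moment to the canonical chain of Eq. (11), the product T = M1 G1 ... M5 G5 that LIA implicitizes can be generated with a few lines of sympy. The sketch below is ours; it uses the DH parameters listed above and assumes that the parameters of the last transformation G5 are zero, since they are not specified.

```python
import sympy as sp

def M(u):
    return sp.Matrix([[1, 0, 0, 0],
                      [0, sp.cos(u), -sp.sin(u), 0],
                      [0, sp.sin(u),  sp.cos(u), 0],
                      [0, 0, 0, 1]])

def G(a, d, alpha):
    return sp.Matrix([[1, 0, 0, 0],
                      [a, 1, 0, 0],
                      [0, 0, sp.cos(alpha), -sp.sin(alpha)],
                      [d, 0, sp.sin(alpha),  sp.cos(alpha)]])

u1, u2, u3, u4, u5 = sp.symbols('u1:6', real=True)
a1, a3 = sp.symbols('a1 a3', positive=True)

# DH data of the RUU limb: alpha2 = pi/2, alpha4 = -pi/2, link lengths a1 and a3,
# every other offset and twist zero; the parameters of G5 are assumed zero.
dh = [(a1, 0, 0), (0, 0, sp.pi/2), (a3, 0, 0), (0, 0, -sp.pi/2), (0, 0, 0)]

T = sp.eye(4)
for u, (a, d, alpha) in zip((u1, u2, u3, u4, u5), dh):
    T = T * M(u) * G(a, d, alpha)

print(sp.trigsimp(T[1:, 0]))   # position part of the parametric representation of the limb
```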
The necessary transformations can be done directly in the image space P 7 [START_REF] Pfurner | Analysis of spatial serial manipulators using kinematic mapping[END_REF] by the mapping             x 0 x 1 x 2 x 3 y 0 y 1 y 2 y 3             →             2 v 0 2 + 1 x 0 -2 v 0 2 x 1 + 4 v 0 x 2 + 2 x 1 2 v 0 2 + 1 x 3 2 v 0 2 x 2 + 4 v 0 x 1 -2 x 2 ((r 0 -r 1 ) x 1 + 2 y 0 ) v 0 2 -2 x 2 (r 0 -r 1 ) v 0 + (-r 0 + r 1 ) x 1 + 2 y 0 ((r 0 -r 1 ) x 0 -2 y 1 ) v 0 2 + 4 v 0 y 2 + (r 0 -r 1 ) x 0 + 2 y 1 ((-r 0 -r 1 ) x 2 + 2 y 3 ) v 0 2 -2 (r 0 + r 1 ) x 1 v 0 + (r 0 + r 1 ) x 2 + 2 y 3 ((r 0 + r 1 ) x 3 + 2 y 2 ) v 0 2 + 4 v 0 y 1 + (r 0 + r 1 ) x 3 -2 y 2             , (14) where v 0 = tan(γ i ), i = 1, 2, 3, γ 1 = 0, γ 2 = 2π 3 and γ 3 = 4π 3 . The general constraint equations are obtained by transforming the f i of Eq.12 with Eq.14. The transformed equations are denoted f i1 = f i2 = 0, i = 1, 2, 3, and determine together with S = 0 and N = 0, the ideal J : J = f 11 , f 12 , f 21 , f 22 , f 31 , f 32 , S , N ⊆ k[x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ] (15) Ideal Comparison A careful observation of the ideals spanned by the canonical constraint polynomials of both approaches reveals that g 4 = f 2 and g 1 = f 1 (x 2 0 + x 2 1 + x 2 2 + x 2 3 ) -2(x 2 0 + x 2 2 )(v 2 1 + 1)S . Since x 2 0 + x 2 1 + x 2 2 + x 2 3 cannot be null, these ideals are the same. Thus, it follows that the ideals I and J spanned by the constraint equations of the whole manipulator are also contained in each other: I ⊆ J ⊆ I . Since I and J determine the same ideal, the variety of the constraint polynomials must be the same [START_REF] Cox | Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra[END_REF]. Therefore, the set of constraint equations derived in Section 3.2 is used for further computations as it contains only quadratic equations. Direct Kinematics: Numerical Examples Because of the complexity of the manipulator, it is not possible to compute the direct kinematics without using some numerical values. In the following subsections, the following arbitrary values are assigned to the design parameters of the manipulator, a 1 = 3, a 3 = 5, r 0 = 11, r 1 = 7. Identical Actuated Joints Assuming the actuated joint angles are equal, θ i1 = π 2 , i = 1, 2, 3 for simplicity, the system of constraint equations in Eq. (15) yields the following real solutions and the corresponding manipulator poses are shown in Fig. 2. (a) x 0 = √ 23023 154 , y 3 = - 3 2 x 0 , x 3 = - 3 √ 77 154 , y 0 = 3 2 x 3 , x 1 = x 2 = y 1 = y 2 = 0 , (b) x 0 = √ 23023 154 , y 3 = - 3 2 x 0 , x 3 = 3 √ 77 154 , y 0 = 3 2 x 3 , x 1 = x 2 = y 1 = y 2 = 0 , x O z O y O x P z P y P (a) x O z O y O x P z P y P (b) x O, x P z O, z P y O, y P (c) x O z O y O x P z P y P (d) Fig. 2: A numerical example: solutions to direct kinematics corresponding to (16) (c) {x 0 = 1, x 1 = x 2 = x 3 = y 0 = y 1 = y 2 = y 3 = 0} , (d) {x 0 = 1, x 1 = x 2 = x 3 = y 0 = y 1 = y 2 = 0, y 3 = -3} . (16) Different Actuated Joints Substituting distinct arbitrary inputs, setting x 0 = 1 and computing a Groebner basis of the resulting polynomials with pure lexicographic ordering yields a univariate polynomial x 3 • P(x 3 ) = 0, where degree(P(x 3 )) = 80. Translational Operation Mode The univariate polynomial of the previous section shows that this manipulator exhibits two operation modes. 
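Before examining these modes, the elimination step itself can be illustrated on a toy system: a Groebner basis computed with pure lexicographic ordering contains a univariate polynomial in the last variable, exactly as for the degree-80 polynomial mentioned above. The system below is a placeholder of ours, not the 3-RUU constraint ideal.

```python
import sympy as sp

x, y = sp.symbols('x y')
system = [x**2 + y**2 - 1, x - y**2]          # toy system, not the 3-RUU ideal
gb = sp.groebner(system, x, y, order='lex')
print(gb.exprs)                               # contains a univariate polynomial in y
```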
The one corresponding to x 3 = 0 yields pure translational motions of the moving platform with the identity as the orientation, similar to the motion of the famous delta robot [START_REF] Clavel | Delta, a fast robot with parallel geometry[END_REF]. From S follows also y 0 = 0. The set of original constraint equations reduces to [ 3y 3 -y 1 2 -y 2 2 -y 3 2 -4y 1 t 1 2 -6 (y 1 + 2)t 1 -y 1 2 -y 2 2 -y 3 2 -4y 1 -3y 3 , -2t 2 2 + 3t 2 + 2 y 2 √ 3 + -y 1 2 -y 2 2 -y 3 2 + 2y 1 + 3y 3 t 2 2 + (3y 1 -12)t 2 -y 1 2 -y 2 2 -y 3 2 + 2y 1 -3y 3 , 2t 3 2 + 3t 3 + 2 y 2 √ 3 + -y 1 2 - This system of equations yields a quadratic univariate in one of the y i variables, which gives a parametrization of the motion as a function of the input variables v i1 = tan(θ i1 /2), i = 1, 2, 3. Conclusion In this paper, the constraint equations of a 3-RUU PM were derived by two different approaches: geometrical approach, where all possible constraints were listed based on the geometry of the manipulator and through LIA, which yields the constraints by specifying the parametric equations and the desired degree. Both approaches have benefits and disadvantages such that it is possible to miss a constraint by merely observing the manipulator geometry while it is hard to interpret the physical meaning of the equations derived through LIA. However, it turns out that the ideals spanned by the constraint polynomials with both approaches are the same. As a result, the simplest set of equations was chosen for further analysis. Due to the complexity of the mechanism, a primary decomposition of these ideals is not possible and therefore a final answer to possible operation modes can not be given. However, the factorization of the final univariate polynomial of the direct kinematics algorithm gives strong evidence that this manipulator has a translational and a general three DoF operation mode. left superscript k denotes the vector expressed in coordinate frame F k , k ∈ {0, 1} Acknowledgements This work was supported by the Austrian Science Fund (FWF I 1750-N26) and the French National Research Agency (ANR Kapamat #ANR-14-CE34-0008-01).
18,936
[ "1307880", "10659", "16879", "949367" ]
[ "226037", "111023", "481388", "473973", "481388", "473973", "441569", "473973", "226037", "226037" ]
01757535
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01757535/file/ROMANSY_2018_Baklouti_Caro_Courteille.pdf
Sana Baklouti email: sana.baklouti@insa-rennes.fr Stéphane Caro email: stephane.caro@ls2n.fr Eric Courteille email: eric.courteille@insa-rennes.fr Elasto-Dynamic Model-Based Control of Non-Redundant Cable-Driven Parallel Robots This paper deals with a model-based feed-forward torque control strategy of non-redundant cable-driven parallel robots (CDPRs). The proposed feed-forward controller is derived from an inverse elastodynamic model of the CDPR to compensate for the dynamic and oscillatory effects due to cable elasticity. A PID feedback controller ensures stability and disturbance rejection. Simulations confirm that tracking errors can be reduced by the proposed strategy compared to conventional rigid body model-based control. Introduction Cable-driven parallel robots (CDPRs) contain a set of flexible cables that connect a fixed frame to an end-effector (EE) with a coiling mechanism for each cable. They have been used in many applications like pick-and-place [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF], rehabilitation [START_REF] Hernandez | Design Optimization of a Cable-Driven Parallel Robot in Upper Arm Training-Rehabilitation Processes[END_REF], painting and sandblasting of large structures [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF]. Thanks to their low inertia, CDPRs can reach high velocities and accelerations in large workspaces [START_REF] Lamaury | Dualspace adaptive control of redundantly actuated cable-driven parallel robots[END_REF]. Several controllers have been proposed in the literature to improve CDPR accuracy locally or on trajectory tracking [START_REF] Jamshidifar | Adaptive Vibration Control of a Flexible Cable Driven Parallel Robot[END_REF], [START_REF] Zi | Dynamic modeling and active control of a cable-suspended parallel robot[END_REF], [START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF]. In [START_REF] Cuevas | Assumed-Mode-Based Dynamic Model for Cable Robots with Non-straight Cables[END_REF], the control of CDPR in the operational space is presented, where the CDPR model is derived using Lagrange equations of motion for constrained systems, while considering non elastic but sagging cables through the Assumed Mode Method. In [START_REF] Merlet | Simulation of Discrete-Time Controlled Cable-Driven Parallel Robots on a Trajectory[END_REF], a discrete-time control strategy is proposed to estimate the position accuracy of the EE by taking into account the actuator model, the kinematic and static behavior of the CDPR. Multiple papers deal with the problem of controlling CDPRs while considering cable elongations and their effect on the dynamic behavior. A robust H ∞ control scheme for CDPR is described in [START_REF] Laroche | A Preliminary Study for H∞ Control of Parallel Cable-Driven Manipulators[END_REF] while considering the cable elongations into the dynamic model of the EE and cable tension limits.A control strategy is proposed for CDPRs with elastic cables in [START_REF] Khosravi | Dynamic modeling and control of parallel robots with elastic cables: singular perturbation approach[END_REF], [START_REF] Khosravi | Stability analysis and robust PID control of cable driven robots considering elasticity in cables[END_REF], [START_REF] Khosravi | Dynamic analysis and control of cable driven robots with elastic cables[END_REF]. 
It consists in adding an elongation compensation term to the control law of a CDPR with rigid cables, using singular perturbation theory. It requires the measurement of cables length and the knowledge of the EE pose real-time through exteroceptive measurements. Feed-forward model-based controllers are used to fulfillaccuracy improvement by using a CDPR reference model. This latter predicts the mechanical behavior of the robot; and then generates an adequate reference signal to be followed by the CDPR. This type of control provides the compensation of the desirable effects without exteroceptive measurements. A model-based control scheme for CDPR used as a high rack storage is presented in [START_REF] Bruckmann | Design and realization of a high rack and retrieval machine based on wire robot technology[END_REF]. This research work takes into account the mechanical properties of cables, namely their elasticity. This strategy, integrating the mechanical behavior of cables in the reference signal, enhances the CDPR performances. However, it compensates for the EE positioning errors due to its rigid body behavior. The mechanical response of the robot is predicted when the mechanical behavior of the cables is not influenced by their interaction with the whole system, namely, the cable elongation is estimated. As a consequence, the main contribution of this paper deals with the coupling of a model-based feed-forward torque control scheme for CDPR with a PID feedback controller. The feed-forward controller is based on the elasto-dynamic model of CDPR to predict the full dynamic and oscillatory behavior of the CDPR and to generate the adequate reference signal for the control loop. This paper is organized as follows: Section 2 presents the feed-forward modelbased control strategy proposed in this paper in addition to the existing rigid and elasto-static models. In Section 3, the proposed control strategy is implemented for a CDPR with three cables, a point-mass EE and three Degree-Of-Freedom (DOF) translational motions. Simulation results are presented to confirm the improvement of trajectory accuracy when using the proposed control strategy compared to conventional approaches. Conclusions and future work are drawn in Section 4. Feed-forward model-based controller The control inputs are mostly obtained by a combination of feed-forward inputs, calculated from a reference trajectory using a reference model of CDPR, and feedback control law, as in [START_REF] Lamaury | Control of a large redundantly actuated cable-suspended parallel robot[END_REF], [START_REF] Bayani | On the control of planar cable-driven parallel robot via classic controllers and tuning with intelligent algorithms[END_REF]. The used control scheme shown in Fig. [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF], is composed of a feed-forward block in which the inverse kinematic model is determined based on a CDPR reference model (Red block in Fig. [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF]). This latter is a predictive model of the dynamic behavior of the mechanism. Its input is the motor torque vectorζ rg ∈ R n , and its output is the reference winch rotation angle vectorq ref ∈ R n ;n being the number of actuators. 
The relationship between q ref and the cable length vectorl ref ∈ R n is expressed as: l ref -l 0 = R (q 0 -q ref ) , (1) whereR ∈ R n×n is a diagonal matrix that is a function of the gear-head ratios and winch radius.l 0 ∈ R n is the cable length vector at the static equilibrium andq 0 ∈ R n is an offset vector corresponding to the cable length when q ref = 0. This offset is compensated at the rest time. The unwound cable length of the ith cable is calculated using the CDPR inverse geometric model. x rg , x rg + - Reference model of CDPR q ref CDPR PID controller + + ¡ m ¡ rg Trajectory generation ¡ corr e q .. Dynamic model (Eq. ( 2)) q m Fig. 1: Feed-forward model-based PID control ζ rg is calculated using the following dynamic model of the CDPR, which depends on the desired EE pose x rg and acceleration ẍrg : ζ rg = R τ rg , τ rg = W -1 rg ( w g + w e -M ẍrg ) , (2) whereτ rg ∈ R n is a set of positive tensions.W rg ∈ R m×n is the CDPR wrench matrix,m being the DOF of its moving-platform. It is a function of x rg and maps the EE velocities to the cable velocity vector. M ∈ R m×m is the EE mass matrix,w g ∈ R m is the wrench vector due to gravity acceleration andw e ∈ R m denotes the external wrench. In each drive, a feedback PID controller sets the vector of corrected motor torqueζ corr ∈ R n . This latter is added to ζ rg to get the motor torqueζ m ∈ R n , which results in the measured winch angular displacementsq m ∈ R n . The PID feedback control law is expressed as follows: ζ m = ζ rg + K p e q + K d ėq + K i t + i ti e q (t) dt, (3) whereK p ∈ R n×n is the proportional gain matrix,K d ∈ R n×n is the derivative gain matrix,K i ∈ R n×n is the integrator gain matrix, e q = q refq m is the error to minimize, leading to the correction torque vector: ζ corr = K p e q + K d ėq + K i t + i ti e q (t) dt. (4) It is noteworthy that ζ corr depends on the CDPR reference model used to calculate the vector q ref . the best of our knowledge, two CDPR models have been used in the literature for the feed-forward model-based control of CDPRs with non-sagging cables: (i) rigid model and (ii) elasto-static model. As a consequence, one contribution of this paper deals with the determination of the elasto-dynamic model of CDPRs to be used for feed-forward control. Rigid model The CDPR rigid model considers cables as rigid links. It is assumed that while applying the motor torque ζ rg , the cables tension vector is equal to τ rg and the reference signal q ref anticipates neither the cable elongation nor the oscillatory motions of the EE. The PID feedback controller uses the motor encoders response q m , which is related to the rolled or unrolled cable length l rg , which corresponds to the winch angular displacement q rg . It should be noted that the cable elongations and EE oscillatory motions are not detected here, and as a consequence, cannot be rejected. Elasto-static model The CDPR elasto-static model integrates a feed-forward cable elongation compensation [START_REF] Bruckmann | Design and realization of a high rack and retrieval machine based on wire robot technology[END_REF]. It is about solving a static equilibrium at each EE pose while considering cable elasticity. Assuming that each cable is isolated, the cable elongation vector δl es is calculated knowing the cable tension vector τ rg . The elastostatic cable tension vector τ es is equal to τ rg . 
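For concreteness, the rigid feed-forward law of Eq. (2) and the per-drive feedback law of Eqs. (3)-(4) introduced above can be summarized in a few lines of code before the elastic elongation models are detailed. The sketch below is ours; it assumes a non-redundant CDPR with a square wrench matrix and scalar gains per drive, follows the sign conventions of the equations above, and uses placeholder data.

```python
import numpy as np

class FeedForwardPID:
    """Schematic torque controller combining Eq. (2) with the PID law of Eqs. (3)-(4)."""

    def __init__(self, Kp, Kd, Ki, dt):
        self.Kp, self.Kd, self.Ki, self.dt = Kp, Kd, Ki, dt
        self.integral = 0.0
        self.prev_error = None

    @staticmethod
    def feedforward(W_rg, M, x_ddot_rg, w_g, w_e, R):
        """zeta_rg = R * W_rg^{-1} (w_g + w_e - M x_ddot_rg), Eq. (2), square W_rg."""
        tau_rg = np.linalg.solve(W_rg, w_g + w_e - M @ x_ddot_rg)
        return R @ tau_rg

    def feedback(self, q_ref, q_m):
        """zeta_corr = Kp e + Kd e_dot + Ki * integral(e), Eqs. (3)-(4)."""
        e = q_ref - q_m
        e_dot = 0.0 if self.prev_error is None else (e - self.prev_error) / self.dt
        self.integral = self.integral + e * self.dt
        self.prev_error = e
        return self.Kp * e + self.Kd * e_dot + self.Ki * self.integral
```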
The relationship between δl i es and τ i es for the ith cable takes the following form with a linear elastic cable model: τ i es = τ i rg = ES δl i es / (l i rg + δl i es ), (5) where E is the cable modulus of elasticity and S is its cross-section area. When q rg is used as a reference signal in the feedback control scheme, the EE displacement δx es is obtained from the cable elongation vector δl es . To compensate for the cable elongation effects, δl es is converted into δq es , which corrects the angular position q rg . Thus, the elasto-static reference angular displacement q es ref becomes: q es ref = q rg -δq es . (6) As the CDPR cable tensions are always positive, δl es > 0, corresponding to δq es < 0. The reference signal q es ref corresponds to a fictitious position of the EE introduced for the cable elongation compensation. Here, under the effect of cable elongations, the reference EE pose is adjusted so that the desired pose is achieved. Although the elasto-static reference model takes the cable elongations into account, the EE pose errors due to the dynamic and elasto-dynamic behavior of the mechanism are not compensated. Elasto-dynamic model The CDPR elasto-dynamic model takes into account the oscillatory and dynamic behavior of the EE due to cable elongations. Here, the cables are no longer isolated and are affected by the EE dynamic behavior. Cable elongations make the EE deviate from its desired pose x rg . The real EE pose is expressed as: x ed = x rg + δx ed . The EE displacement leads to some variations in both cable lengths and cable tensions. Indeed, the ith cable tension τ i ed obtained from the elasto-dynamic model differs from τ i rg : τ i ed = τ i rg + δτ i ed = ES δl i ed / (l i rg + δl i ed ), (7) where δl i ed is the ith cable elongation assessed by considering cable elasticity and oscillations. The CDPR elasto-dynamic model takes the form: M ẍed = W ed τ ed + w g + w e , (8) where W ed is the CDPR wrench matrix expressed at the EE pose x ed . Once x ed and the cable tension vector τ ed are calculated, the cable elongation vector δl ed can be determined. The latter is converted into δq ed , which corrects the angular position vector q rg . The reference angular displacement q ed ref becomes: q ed ref = q rg -δq ed . (9) The proposed control strategy based on the elasto-dynamic model leads to a feed-forward controller for EE oscillatory motion compensation in addition to the conventional rigid-body feedback based on the measurements from the motor encoders. It should be noted that this feed-forward controller will not disrupt the rigid-body feedback stability. 3 Control of a spatial CDPR with a point-mass end-effector A spatial CDPR with three cables and three translational DOF is considered in this section. This CDPR is composed of a point-mass EE, which is connected to three massless and linear cables. A configuration of the CREATOR prototype (Fig. 2a), being developed at LS2N, is chosen such that the cables are tensed along a prescribed trajectory. The Cartesian coordinate vectors of the cable exit points B i are: b 1 = [2.0, -2.0, 3.5] T m, b 2 = [-2.0, -2.0, 3.5] T m, b 3 = [0.0, 2.0, 3.5] T m. The EE mass is equal to 20 kg. The cable diameter is equal to 1 mm. Their modulus of elasticity is equal to 70 GPa. Trajectory generation A circular helical trajectory, shown in Fig. 2b, is used from static equilibrium to steady state to evaluate the efficiency of the feed-forward model-based controller while considering the three CDPR reference models.
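Before moving to the test trajectory, the elasto-static compensation of Eqs. (5)–(6) can be illustrated with a short sketch. It assumes the linear elastic cable model above and the sign convention of Eqs. (1) and (6); the function names are illustrative and not taken from the authors' code.

```python
import numpy as np

def elasto_static_elongation(tau_rg, l_rg, E, S):
    """Invert Eq. (5), tau = E*S*dl/(l + dl), for the cable elongation dl (per cable)."""
    return tau_rg * l_rg / (E * S - tau_rg)

def elasto_static_reference(q_rg, delta_l, winch_radii):
    """Eqs. (1) and (6): a positive elongation maps to a negative angular correction
    delta_q = -delta_l / r, and the corrected reference is q_es_ref = q_rg - delta_q."""
    delta_q = -delta_l / winch_radii
    return q_rg - delta_q

# Example with the cable data of Section 3 (E = 70 GPa, diameter 1 mm); the tension
# and unstrained length values below are arbitrary illustrative numbers.
E, S = 70e9, np.pi * (0.5e-3) ** 2
print(elasto_static_elongation(tau_rg=np.array([100.0]), l_rg=np.array([3.0]), E=E, S=S))
```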
The EE moves from point P 1 of Cartesian coordinate vector p 1 = [0.5, -1.0, 0.25] T m to point P 2 of Cartesian coordinate vector p 2 = [0.5, -1.0, 1.5] T m along a circular helix. The latter is defined by the following equations: x(t) = R cos(t α ) + β 0 , y(t) = R sin(t α ) + β 1 , z(t) = p t t α + β 2 , (10) with t α = a 5 (t/t sim ) 5 + a 4 (t/t sim ) 4 + a 3 (t/t sim ) 3 + a 2 (t/t sim ) 2 + a 1 (t/t sim ) + a 0 . The coefficients of the fifth-order polynomial t α are chosen in such a way that the EE Cartesian velocities and accelerations are null at the beginning and the end of the trajectory. R is the helix radius, p t is the helix pitch, and β 0 , β 1 and β 2 are constants. Here, a 5 = 24π, a 4 = -60π, a 3 = 40π, a 2 = a 1 = a 0 = 0, p t = 0.1 m, β 0 = 0.5 m, β 1 = -1.0 m, β 2 = 0.25 m, R = 0.5 m and t sim = 15 s. The velocity maximum value is 0.8 m/s. The acceleration maximum value is 1.2 m/s 2 . Controller tuning The PID feedback controller is tuned using the Matlab PID tuning tool. The latter aims at finding the values of the proportional, integral, and derivative gains of a PID controller in order to minimize the error e q and to reduce the EE oscillations. In the PID tuner work-flow, a plant model is defined from the simulation data, where the input is e q and the output is ζ corr . The gains obtained for the three control schemes are the following: • Rigid model based: K p = 3, K i = 1.5 and K d = 1.5 • Elasto-static model based: K p = 0.53, K i = 0.2 and K d = 0.18 • Elasto-dynamic model based: K p = 0.33, K i = 0.16 and K d = 0.15 It is noteworthy that the gains decrease from the rigid to the elasto-dynamic reference model. End-effector position errors The EE position error is defined as the difference between the desired EE position x rg and the real one. The latter should normally be determined experimentally. As experiments have not yet been carried out, a good CDPR predictive model should be used to estimate the real EE pose. The CDPR elasto-dynamic model is the closest to the real CDPR with non-sagging cables, so it is used to predict the real behavior of the CDPR. The input of this model is ζ m , which leads to x m ed . The position error is defined as δp = x rg -x m ed . To analyze the relevance of the proposed control strategy, the three control schemes under study were simulated with Matlab-Simulink. Figure 3a shows the norm of the EE position error δp when the proposed feed-forward control law is applied while using successively the three CDPR models to generate the reference signal. Figure 3b illustrates the EE position error along the z-axis, δz, which is the main one as the CDPR under study is assembled in a suspended configuration. The red (green, blue, resp.) curve depicts the EE position error when the elasto-dynamic (elasto-static, rigid, resp.) model is used as a reference. The root-mean-square (RMS) of δp is equal to 8.27 mm when the reference signal is generated … Conclusions and future work This paper proposed a model-based feed-forward control strategy for non-redundant CDPRs. An elasto-dynamic model of the CDPR was proposed to anticipate the full dynamic behavior of the mechanism. Accordingly, a contribution of this paper deals with a good simulation model of the CDPR, including the vibratory effects, cable elongations and their interaction with the whole system, used as a reference control model.
The comparison between the position errors obtained when using the proposed elasto-dynamic model or the classical rigid and elasto-static ones as control references shows meaningful differences. These differences reveal that the proposed control strategy guarantees a better trajectory tracking when adopting the proposed elasto-dynamic model to generate the reference control signal for a non-redundant CDPR. Experimental validations will be carried out later on. Future work will also deal with the elasto-dynamic model-based control of redundantly actuated CDPRs. Fig. 2: (a) CREATOR prototype CAD diagram (b) End-effector desired path. Fig. 3: (a) Position error norm (b) Position error along the z-axis of the end-effector. Fig. 4: Histogram of the RMS of δp and δz. Matlab PID tuner: www.mathworks.com/help/slcontrol/ug/introduction-to-automatic-pid-tuning.html
18,121
[ "173154", "10659", "173925" ]
[ "25157", "481388", "473973", "441569", "25157" ]
01757541
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01757541/file/Romansy2018_Wu_Caro.pdf
Guanglei Wu Stéphane Caro email: stephane.caro@ls2n.fr Torsional Stability of a U-joint based Parallel Wrist Mechanism Featuring Infinite Torsion Keywords: dynamic stability, parallel wrist mechanism, monodromy matrix, Floquet theory, torsional vibrations In this paper, the dynamic stability problem of a parallel wrist mechanism is studied by means of monodromy matrix method. This manipulator adopts a universal joint as the ball-socket mechanism to support the mobile platform and to transmit the motion/torque between the input shaft and the end-effector. The linearized equations of motion of the mechanical system are established to analyze its stability according to the Floquet theory. The instable regions are presented graphically in various parametric charts. Introduction The parallel wrist mechanisms are intended for camera-orientating [START_REF] Gosselin | The Agile Eye: a high-performance three-degree-of-freedom camera-orienting device[END_REF], minimally invasive surgical robots [START_REF] Li | Design of spherical parallel mechanisms for application to laparoscopic surgery[END_REF] and robotic joints [START_REF] Asada | Kinematic and static characterization of wrist joints and their optimal design[END_REF], thanks to their large orientation workspace and high payload capacity. Besides, another potential application is that they can function as a tool head for complicated surface machining [START_REF] Wu | Design and transmission analysis of an asymmetrical spherical parallel manipulator[END_REF], where an unlimited torsional motion is desired to drive the cutting tools in some common material processing such as milling or drilling. For this purpose, a wrist mechanism [START_REF] Wu | Design and transmission analysis of an asymmetrical spherical parallel manipulator[END_REF] as shown in Fig. 1 was proposed with a number of advantages compared to its symmetrical counterparts, such as enhanced positioning accuracy [START_REF] Wu | Mobile platform center shift in spherical parallel manipulators with flexible limbs[END_REF], infinite rotation [START_REF] Asada | Kinematic and static characterization of wrist joints and their optimal design[END_REF], structural compactness and low dynamic inertia [START_REF] Wu | Dynamic modeling and design optimization of a 3-DOF spherical parallel manipulator[END_REF]. The design of the manipulator is simplified by using a universal (U) joint supported by an input shaft to generate infinite input/output rotational motion. On the other hand, the U joint suffers from one major problem, namely, it transforms a constant input speed to a periodically fluctuating one, which may induce vibrations and wear. This paper will investigate the dynamic stability problem, focusing on the aspect of the torsional stability. To the best of the authors' knowledge, Porter [START_REF] Porter | A theoretical analysis of the torsional oscillation of a system incorporating a hooke's joint[END_REF] was the first to investigate this problem, where a single-degree-of-freedom linearized model was built to plot the stability chart by using the Floquet theory [START_REF] Floquet | Sur les équations différentielles linéaires à coefficients périodiques[END_REF]. 
Later, similar modeling approaches were adopted to derive the nonlinear equations for the stability analysis of the U-joint [START_REF] Porter | Non-linear torsional oscillation of a system incorporating a hooke's joint[END_REF][START_REF] Éidinov | Torsional vibrations of a system with hooke's joint[END_REF][START_REF] Asokanthan | Torsional instabilities in a system incorporating a hooke's joint[END_REF][START_REF] Chang | Torsional instabilities and non-linear oscillation of a system incorporating a hooke's joint[END_REF][START_REF] Bulut | Dynamic stability of a shaft system connected through a hooke's joint[END_REF]. Moreover, multi-shaft systems consisting of multiple shafts interconnected via Hooke's joints can also be handled using the aforementioned approaches [START_REF] Zeman | Dynamik der drehsysteme mit kardagelenken[END_REF][START_REF] Kotera | Instability of torsional vibrations of a system with a cardan joint[END_REF]. Besides, the lateral and coupled stability problems of the universal joint were studied [START_REF] Ota | Lateral vibrations of a rotating shaft driven by a universal joint: 1st report, generation of even multiple vibrations by secondary moment[END_REF][START_REF] Saigo | Self-excited vibration caused by internal friction in universal joints and its stabilizing method[END_REF][START_REF] Desmidt | Coupled torsion-lateral stability of a shaft-disk system driven through a universal joint[END_REF], too. According to the literature, the previous studies focus on single or multiple U-joint mechanisms. On the other hand, a U-joint working as a transmitting mechanism within a parallel mechanism has not received attention; it is the subject of this work. From the reported works, common approaches to analyze the stability problem of the linear/nonlinear dynamic model of the system include the Floquet theory, the Krylov-Bogoliubov method, the Poincaré-Lyapunov method, etc. As the relationship between the input and output shaft rotating speeds of the U-joint is periodic, the Floquet theory is an effective approach to analyze the stability problem and is adopted in this work. This paper investigates the dynamic stability analysis problem of the wrist mechanism by means of a monodromy matrix method. To this end, a linear model consisting of input and output shafts interconnected via a Hooke's joint is considered, and the linearized equations of motion of the system are obtained. A numerical study is carried out to assess the system stability and the effects of the parameters. Unstable regions are identified from various parametric charts. Wrist Mechanism Architecture Figure 1 depicts the wrist mechanism, an asymmetrical spherical parallel manipulator (SPM). The mobile platform is composed of outer and inner rings connected to each other with a revolute joint, the revolute joint being realized with a revolve bearing. The orientation of the outer ring is controlled by two limbs in parallel, and it is constrained by a fully passive leg that is offset from the center of the mobile platform to eliminate the rotational motion around the vertical axis. Through a universal joint, the decoupled rotation of the inner ring is generated by the center shaft, which also supports the mobile platform to improve the positioning accuracy. The architecture of the wrist mechanism is displayed in Fig. 2. Splitting off the outer ring and the two parallel limbs as well as the passive one, the remaining parts of the manipulator are equivalent to a U-joint mechanism.
The center shaft is treated as the driving shaft and the inner ring is treated as a driven disk. The bend angle, i.e., the misalignment angle, is denoted by β, and the input/output angles are named γ 1 /γ 2 , respectively. Equation of Motion of Torsional Vibrations The equations of motion for the U-joint mechanism shown in Fig. 2 are deduced via a synthetical approach [START_REF] Bulut | Dynamic stability of a shaft system connected through a hooke's joint[END_REF]. Accordingly, the driving shaft and the driven disk are considered as two separate parts, as displayed in Fig. 3, where the cross piece connecting the input/output elements is considered massless. The equation of motion of torsional vibrations of the driving part can be written as $J_I \ddot{\gamma}_1 = -c_1 \dot{\gamma}_1 - k_1 \gamma_1 + M_I$ (1), where γ 1 is the rotational coordinate of J I , and M I is the reaction torque of the input part of the Hooke's joint. Moreover, k 1 and c 1 denote the torsional stiffness and viscous damping of the driving shaft, respectively. On the other hand, the driven part is under the effect of the reaction torque M O , for which the dynamic equation is written as $c_2 \dot{\gamma}_2 + k_2 \gamma_2 = M_O = -J_O (\ddot{\gamma}_2 + \dot{\omega}_o)$ (2), where γ 2 is the rotational coordinate of J O , and k 2 , c 2 stand for the torsional stiffness and viscous damping of the driven shaft. Moreover, the relationship between the input torque and the output torque of the Hooke's joint can be written as $M_O = M_I/\eta(t)$ with $\eta(t) = \cos\beta / \big(1 - \sin^2\beta \, \sin^2(\Omega_0 t + \gamma_1)\big)$ (3), where Ω 0 denotes the constant velocity of the driving shaft. Henceforth, the following equations of motion are derived: $J_I \ddot{\gamma}_1 + c_1 \dot{\gamma}_1 + k_1 \gamma_1 - \eta(t) c_2 \dot{\gamma}_2 - \eta(t) k_2 \gamma_2 = 0$ (4) and $J_O (\ddot{\gamma}_2 + \dot{\omega}_o) + c_2 \dot{\gamma}_2 + k_2 \gamma_2 = 0$ (5), with $\dot{\omega}_o = \eta(t) \ddot{\gamma}_1 + \dot{\eta}(t)\,(\Omega_0 + \dot{\gamma}_1)$ (6). Let τ be equal to Ω 0 t + γ 1 . Some dimensionless parameters are defined as follows: $\Omega = \Omega_0/\sqrt{k_1/J_I}$, $\zeta = c_1/(2\sqrt{k_1 J_I})$, $\mu = c_2/c_1$, $\nu = J_O/J_I$, $\chi = k_2/k_1 = 1/\eta(\tau)^2$ (7). Equations (4) and (5) can be linearized and cast into matrix form by discarding all the nonlinear terms, namely, $\begin{bmatrix} \gamma_1'' \\ \gamma_2'' \end{bmatrix} + \mathbf{D} \begin{bmatrix} \gamma_1' \\ \gamma_2' \end{bmatrix} + \mathbf{E} \begin{bmatrix} \gamma_1 \\ \gamma_2 \end{bmatrix} = \begin{bmatrix} 0 \\ -\eta'(\tau) \end{bmatrix}$ (8), where primes denote differentiation with respect to τ; thus, Eq. (8) consists of a set of linear differential equations with π-periodic coefficients. Dynamic Stability Analysis The homogeneous part of Eq. (8) is considered to analyze the dynamic stability of the manipulator. Equation (8) can be expressed as $\boldsymbol{\gamma}'' + \mathbf{D}\,\boldsymbol{\gamma}' + \mathbf{E}\,\boldsymbol{\gamma} = 0$ (9), with $\boldsymbol{\gamma} = [\gamma_1 \ \gamma_2]^T$ and its τ-derivatives defined accordingly (10a), $\mathbf{D} = \begin{bmatrix} 2\zeta/\Omega & -2\mu\zeta\,\eta(\tau)/\Omega \\ 2\eta'(\tau) - 2\zeta\,\eta(\tau)/\Omega & 2\mu\zeta\,(1/\nu + \eta^2(\tau))/\Omega \end{bmatrix}$ (10b) and $\mathbf{E} = \begin{bmatrix} 1/\Omega^2 & -\chi\,\eta(\tau)/\Omega^2 \\ \eta'(\tau) - \eta(\tau)/\Omega^2 & \chi\,(1/\nu + \eta^2(\tau))/\Omega^2 \end{bmatrix}$ (10c), which can be represented by a state-space formulation, namely, $\dot{\mathbf{x}}(t) = \mathbf{A}(t)\,\mathbf{x}(t)$ (11), with $\mathbf{x} = [\boldsymbol{\gamma}^T \ \ \boldsymbol{\gamma}'^T]^T$ and $\mathbf{A}(t) = \begin{bmatrix} \mathbf{0}_2 & \mathbf{I}_2 \\ -\mathbf{E} & -\mathbf{D} \end{bmatrix}$ (12), whence A(t) is a 4 × 4 π-periodic matrix. According to the Floquet theory, the solution to the equation system (11) can be expressed as $\boldsymbol{\Phi}(\tau) = \mathbf{P}(\tau)\, e^{\tau \mathbf{R}}$ (13), where P(τ) is a π-periodic matrix and R is a constant matrix, which is related to another constant matrix H, referred to as the monodromy matrix, with R = ln H/π. If the fundamental matrix is normalized so that P(0) = I 4 , then H = P(π). The eigenvalues λ i , i = 1, 2, 3, 4, of matrix H, referred to as Floquet multipliers, govern the stability of the system.
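As an illustration of how the monodromy matrix and the Floquet multipliers can be evaluated numerically, a minimal sketch is given below. It assumes that the π-periodic state matrix A(τ) of Eq. (12) is available as a callable (assembled from D and E), and it relies on SciPy's general-purpose integrator rather than the fixed-step Runge–Kutta scheme used in the paper; the function names are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_matrix(A_of_tau, period=np.pi, dim=4):
    """Integrate x' = A(tau) x over one period for each canonical initial condition;
    the solutions at tau = period form the columns of the monodromy matrix H."""
    def rhs(tau, x):
        return A_of_tau(tau) @ x
    H = np.zeros((dim, dim))
    for k in range(dim):
        x0 = np.zeros(dim)
        x0[k] = 1.0
        sol = solve_ivp(rhs, (0.0, period), x0, rtol=1e-10, atol=1e-12)
        H[:, k] = sol.y[:, -1]
    return H

def floquet_multipliers(A_of_tau):
    """Eigenvalues of H, i.e., the Floquet multipliers of the periodic system."""
    return np.linalg.eigvals(monodromy_matrix(A_of_tau))

def is_torsionally_stable(A_of_tau):
    """Floquet criterion: asymptotic stability iff every multiplier lies strictly
    inside the unit circle of the complex plane."""
    return bool(np.all(np.abs(floquet_multipliers(A_of_tau)) < 1.0))
```

Sweeping such a test over a grid of (Ω 0 , β) or (k 1 , β) values would reproduce the kind of stability charts discussed in the following section.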
The system is asymptotically stable if and only if all the Floquet multipliers λ i lie strictly inside the unit circle, i.e., |λ i | < 1 [START_REF] Chicone | Ordinary Differential Equations with Applications[END_REF]. Here, the matrix H is obtained numerically with an improved Runge–Kutta method [START_REF] Szymkiewicz | Numerical Solution of Ordinary Differential Equations[END_REF] with a step size equal to 10^-6, and its eigenvalues are calculated to assess the stability of the system. The monodromy matrix method is a simple and reliable method to determine the stability of parametrically excited systems. Numerical Study on Torsional Stability This section is devoted to the numerical stability analysis, where stability charts are constructed on the Ω 0 -β and k 1 -β parametric planes to study the effect of the parameters on the system stability. From the CAD model of the robotic wrist, µ = 1, ν = 10, J I = 0.001 kg • m 2 , c 1 = 0.001 Nm/(rad/s). Figure 4 depicts the Ω 0 -β stability chart used to detect the instability of the U-joint mechanism, with a constant stiffness k 1 = 10 Nm/rad, where the dotted zones represent the unstable parametric regions. When the rotating speed Ω 0 of the driving shaft is lower than 11π rad/s, the system is always stable for a misalignment angle β between 0 and 30 • . On the contrary, the angle β should be smaller than 5 • to guarantee the dynamic stability of the parallel wrist mechanism when Ω 0 is equal to 19π rad/s. Similarly, the influence of the torsional stiffness of the driving shaft and of the misalignment angle on the stability is illustrated in Fig. 5, with the driving shaft speed Ω 0 = 19π rad/s. It is apparent that the higher the torsional stiffness of the input shaft, the more stable the parallel robotic wrist. The system is stable when k 1 > 25 Nm/rad. Conclusion This paper dealt with the dynamic torsional stability analysis of a parallel wrist mechanism that contains a universal joint. Differing from its symmetrical counterparts, the asymmetrical architecture of this robotic wrist ensures an infinite torsional movement of the end-effector under a certain tilt angle. This unique feature allows the wrist mechanism to function as an active spherical joint or machine tool head, with a simple architecture. The stability problem of the wrist mechanism due to the nonlinear input-output transmission of the universal joint is studied, where a linear model consisting of input and output shafts interconnected via a Hooke's joint is considered. The linearized equations of motion of the system are obtained, for which the stability problem is investigated by resorting to a monodromy matrix method. The approach used to analyze the torsional stability of the parallel robotic wrist is numerically illustrated, wherein the unstable regions are presented graphically. Moreover, some critical parameters, such as torsional stiffness and rotating speeds, are identified. Future work includes the complete parametric stability analysis of the system as well as its lateral stability. Fig. 1. CAD model of the parallel wrist mechanism. Fig. 2. Kinematic architectures of the wrist mechanism and its U joint. Fig. 3. The driving and driven parts of the U-joint mechanism. Fig. 4. Effects of the driving shaft speed Ω 0 and bend angle β on the torsional stability of the parallel wrist with stiffness k 1 = 10 Nm/rad (blue points indicate torsional dynamic instability). Fig. 5.
Effects of the driving shaft stiffness k 1 and bend angle β onto the dynamic torsional stability with speed Ω 0 = 19π rad/s. Acknowledgement The reported work is supported by the Doctoral Scientific Research Foundation of Liaoning Province (No. 20170520134) and the Fundamental Research Funds for the Central Universities (No. DUT16RC(3)068).
14,162
[ "10659" ]
[ "230563", "481388", "473973", "441569" ]
01757553
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01757553/file/ReMAR2018_Nayak_Caro_Wenger.pdf
Abhilash Nayak Stéphane Caro Philippe Wenger A Dual Reconfigurable 4-rRUU Parallel Manipulator Abhilash Nayak 1 , Stéphane Caro 2 and Philippe Wenger 2 Abstract-The aim of this paper is to introduce the use of a double Hooke's joint linkage to reconfigure the base revolute joints of a 4-RUU parallel manipulator (PM). It leads to an architecturally reconfigurable 4-rRUU PM whose platform motion depends on the angle between the driving and driven shafts of the double-Hooke's joint linkage in each limb. Even when the angle is fixed, the manipulator is reconfigurable by virtue of the different operation modes it can exhibit. By keeping the angle as a variable, the constraint equations of the 4-rRUU PM are derived using Study's kinematic mapping. Subsequently, the ideal of constraint polynomials is decomposed as an intersection of primary ideals to determine the operation modes of the 4-rRUU PM for intersecting and parallel revolute joint axes in the base and the moving platform. I. INTRODUCTION Reconfigurability in a PM extends its employability for a variety of applications. A lower-mobility PM with dof < 6 is reconfigurable if it has different configuration space regions with possibly different types or numbers of degrees of freedom. These regions are known as the operation modes of the PM and were first exemplified by Zlatanov et al. [START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF] for the 3-URU DYMO robot exhibiting five different types of platform motion. Its closely related SNU 3-UPU [START_REF] Walter | A complete kinematic analysis of the SNU 3-UPU parallel robot[END_REF] and Tsai 3-UPU [START_REF] Walter | Kinematic analysis of the TSAI 3-UPU parallel manipulator using algebraic methods[END_REF] PMs were analyzed by Walter et al. with a complete characterization of their operation modes along with the transition poses. Most of the operation modes of the 3-URU or the 3-UPU PMs are physically distinguishable unlike the 3-[PP]S PMs for which the first two joints in each limb generate a motion equivalent to two coplanar translations followed by a spherical joint. 3-RPS [START_REF] Schadlbauer | The 3-rps parallel manipulator from an algebraic viewpoint[END_REF], 3-PRS [START_REF] Nurahmi | Operation modes and singularities of 3-PRS parallel manipulators with different arrangements of p-joints[END_REF] and 3-SPR PMs [START_REF] Nayak | Comparison of 3-RPS and 3-SPR Parallel Manipulators Based on Their Maximum Inscribed Singularity-Free Circle[END_REF] are such manipulators that exhibit two operation modes each with coupled motion. Other reconfigurable PMs include the 3-RER PM (E denotes a planar joint) found to have 15 3-dof operation modes [START_REF] Kong | Reconfiguration analysis of a 3-DOF parallel mechanism using euler parameter quaternions and algebraic geometry method[END_REF] and the 4-RUU PM with vertical base and platform revolute joint axes possessing three operation modes [START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF]. Besides, a PM can also be reconfigurable by changing the position and/or orientation of one or more of its constituent joints. This type of reconfigurability is named architectural reconfigurability in this paper.
MaPaMan [START_REF] Srivatsan | On the position kinematic analysis of mapaman: a reconfigurable three-degrees-of-freedom spatial parallel manipulator[END_REF] is one such manipulator in which the strut can be oriented to have two different architectures of the same PM where it can transition between roll-pitch-heave and roll-pitch-yaw degrees of freedom. Gan et al. [START_REF] Gan | Optimal design of a metamorphic parallel mechanism with reconfigurable 1T2R and 3R motion based on unified motion/force transmissibility[END_REF] introduced a novel reconfigurable revolute joint and proposed a metamorphic 3-rRPS PM. In this paper, a double Hooke's joint linkage is used as a reconfigurable revolute joint and is used to demonstrate different types of reconfigurability of a 4-RUU PM. The double Hooke's joint linkage is a well-known special over-constrained 6R mechanism, where the first three and the last three joint axes are mutually perpendicular. It is also known as a Double Cardan Joint and is ubiquitous as a steering column in automobiles. Its architecture is fairly simple compared to a general over-constrained 6R mechanism, which makes it easier to derive the input-output relations. There have been different approaches in the literature to derive its input-output relations, proving that it is a constant-velocity transmitter when the angle between the input shaft and the central yoke is equal to the angle between the central yoke and the output shaft [START_REF] Baker | Displacementclosure equations of the unspecialised doublehooke's-joint linkage[END_REF], [START_REF] Dietmaier | Simply overconstrained mechanisms with rotational joints[END_REF], [START_REF] Mavroidis | Analysis of overconstrained mechanisms[END_REF]. The constant-velocity transmission property of a double-Hooke's joint is exploited in this paper to reconfigure the first revolute joint axis in each limb of a 4-RUU PM. The organization of the paper is as follows: Section II presents the architecture of the dual reconfigurable 4-rRUU PM along with the architecture of its constituent double-Hooke's joint linkage. Section III deals with the derivation of constraint equations and the determination of operation modes of 4-rRUU PMs for intersecting and parallel revolute joint axes in the base and the moving platform. Section IV concludes the paper and puts forth some open issues associated with the construction of a functional 4-rRUU PM prototype. II. THE 4-rRUU PARALLEL MANIPULATOR A. Manipulator Architecture The architecture of the dual reconfigurable 4-rRUU PM with a square base and a platform is shown in Fig. 1 and its constituent double-Hooke's joint linkage is shown in Fig. 2. A reconfigurable revolute joint (rR) and two universal joints (UU) mounted in series constitute each limb of the 4-rRUU PM. Point L i , i = 1, 2, 3, 4 lies on the pivotal axis of the double-Hooke's joint linkage as shown in Fig. 2. Point A i lies on the first revolute joint axis of the 4-rRUU PM and it can be obtained from point L i by traversing a horizontal distance of l i along the first revolute joint axis. Points B i and C i are the geometric centers of the first and the second universal joints, respectively. Points L i and C i form the corners of the square base and the platform, respectively. F O and F P are the coordinate frames attached to the fixed base and the moving platform such that their origins O and P lie at the centers of the respective squares.
The revolute-joint axis vectors in the i-th limb are marked s ij , i = 1, 2, 3, 4; j = 1, ..., 5. Vectors s i1 and s i2 are always parallel, and so are vectors s i3 and s i4 . (Fig. 2: Double Hooke's joint.) For simplicity, it is assumed that the orientation of vector s i1 expressed in coordinate frame F O is the same as that of s i5 expressed in coordinate frame F P . The position vectors of points L i , A i , B i and C i expressed in frame F k , k ∈ {O, P}, are denoted as k l i , k a i , k b i and k c i , respectively. r 0 and r 1 are half the diagonals of the base and the moving platform squares, respectively. p and q are the link lengths. B. Double-Hooke's joint linkage The double Hooke's joint linkage is shown in Fig. 2. The first three and the last three revolute joint axes intersect at points O 0 and O 6 , respectively. The first revolute joint is driven by a motor with an input angle of φ 1 and the last revolute joint rotates with an output angle of φ 6 , and their axes intersect at point L i , i = 1, 2, 3, 4. It is noteworthy that for a constant-velocity transmission, the triangle O 0 O 6 L i must be isosceles with O 0 L i = O 6 L i . The angle between the input and the output shafts is denoted as β ∈ [0, π]. Since the double-Hooke's joint is known to be a constant-velocity transmitter, the following input-output relationship holds [START_REF] Baker | Displacementclosure equations of the unspecialised doublehooke's-joint linkage[END_REF], [START_REF] Dietmaier | Simply overconstrained mechanisms with rotational joints[END_REF], [START_REF] Mavroidis | Analysis of overconstrained mechanisms[END_REF]: φ 6 = -φ 1 (1) Figure 3 shows the top view of the 4-rRUU PM without the links (Fig. 3: Possible orientations of the base revolute joint). For architectural reconfigurability, the reconfigurable revolute joint axis in the base is allowed to have a horizontal orientation β i , i = 1, 2, 3, 4. It is noteworthy that β i will be changed manually in the prototype under construction. III. OPERATION MODE ANALYSIS A. Constraint Equations Since the reconfigurable revolute joint is actuated, an RUU limb must satisfy the following two constraints: 1) The second revolute joint axis, the fifth revolute joint axis and link BC must lie in the same plane. In other words, the scalar triple product of the corresponding vectors must be null: g i : (b i -c i ) T (s i2 × s i5 ) = 0, i = 1, 2, 3, 4 (2) 2) The length of link BC must be q: g i+4 : ||b i -c i || -q = 0, i = 1, 2, 3, 4 (3) Since the length of link BC does not affect the operation modes of the 4-rRUU PM, only the principal geometric constraint from Eq. (2) is considered. To express it algebraically, the homogeneous coordinates of the necessary vectors are listed below: 0 l i = R z (λ i ) [1, r 0 , 0, 0] T (4a) 0 a i = 0 l i + R z (λ i + β i ) [0, 0, l i , 0] T (4b) 0 b i = 0 a i + R z (λ i + β i ) [0, p cos(θ i ), 0, p sin(θ i )] T (4c) 0 c i = F R z (λ i )[1, r 1 , 0, 0] T , (4d) 0 s i2 = R z (λ i + β i ) [0, 0, 1, 0] T , (4e) 0 s i5 = F R z (λ i + β i )[0, 0, 1, 0] T , i = 1, 2, 3, 4, (4f) where R z (•) is the homogeneous rotation matrix about the z-axis, λ i for the i-th limb is given by λ 1 = 0, λ 2 = π/2, λ 3 = π, λ 4 = 3π/2, and θ i is the actuated joint angle.
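As a side illustration of Eqs. (4a)–(4c), the fixed-frame coordinates of points L i , A i and B i of one limb can be assembled as follows. This is only a sketch: the homogeneous-coordinate convention [1, x, y, z] T follows Eq. (4), while the function and variable names are not from the paper.

```python
import numpy as np

def Rz(angle):
    """Homogeneous rotation about the z-axis acting on [1, x, y, z] coordinates."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,  -s, 0.0],
                     [0.0,   s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def limb_points(i, r0, li, p, beta_i, theta_i):
    """Points L_i, A_i, B_i of limb i expressed in the fixed frame F_O (Eqs. 4a-4c).
    Index i = 0,...,3 corresponds to limbs 1,...,4, i.e. lambda = 0, pi/2, pi, 3*pi/2."""
    lam = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)[i]
    L = Rz(lam) @ np.array([1.0, r0, 0.0, 0.0])
    A = L + Rz(lam + beta_i) @ np.array([0.0, 0.0, li, 0.0])
    B = A + Rz(lam + beta_i) @ np.array([0.0, p * np.cos(theta_i), 0.0, p * np.sin(theta_i)])
    return L, A, B
```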
F is the transformation matrix consisting of Study parameters x j and y j , j = 0, 1, 2, 3: F = 1 ∆        ∆ 0 0 0 d 1 r 11 r 12 r 13 d 2 r 21 r 22 r 23 d 3 r 31 r 32 r 33        (5) with ∆ = x 0 2 + x 1 2 + x 2 2 + x 3 2 = 0 and r 11 = x 0 2 + x 1 2 -x 2 2 -x 3 2 r 12 = -2 x 0 x 3 + 2 x 1 x 2 r 13 = 2 x 0 x 2 + 2 x 1 x 3 r 21 = 2 x 0 x 3 + 2 x 1 x 2 r 22 = x 0 2 -x 1 2 + x 2 2 -x 3 2 r 23 = -2 x 0 x 1 + 2 x 2 x 3 r 31 = -2 x 0 x 2 + 2 x 1 x 3 r 32 = 2 x 0 x 1 + 2 x 2 x 3 r 33 = x 0 2 -x 1 2 -x 2 2 + x 3 2 d 1 = -2 x 0 y 1 + 2 x 1 y 0 -2 x 2 y 3 + 2 x 3 y 2 , d 2 = -2 x 0 y 2 + 2 x 1 y 3 + 2 x 2 y 0 -2 x 3 y 1 , d 3 = -2 x 0 y 3 -2 x 1 y 2 + 2 x 2 y 1 + 2 x 3 y 0 . Thus, Eq. ( 2) is derived for each limb algebraically by substituting t i = tan( θi 2 ) and w i = tan( βi 2 ), i = 1, 2, 3, 4. The constraint polynomials g i , i = 1, 2, 3, 4 form the following ideal 1 : I = g 1 , g 2 , g 3 , g 4 ⊆ k[x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ] (6) To simplify the determination of the operation modes, the 4-rRUU PM is split into two 2-rRUU PMs [START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF] by considering two ideals: I (I) = g 1 , g 3 (7a) I (II) = g 2 , g 4 (7b) Even after substituting the design parameters, it was impossible to calculate the primary decomposition of ideals I (I) and I (II) for a general β i and it remains an open issue. Consequently, some special configurations of the 4-rRUU PM are considered and their operation modes are determined as follows: B. Operation Modes of some Specific 4-RUU PMs 1) β 1 = β 2 = β 3 = β 4 = π 2 : For the PM shown in Fig. 4, the constraint equations are derived from Eqs. ( 2) and (4) as 1 The ideal generated by the given polynomials is the set of all combinations of these polynomials using coefficients from the polynomial ring k [x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ] [14]. g 1 := -pt 1 2 + p x 0 x 2 + 2 pt 1 x 0 x 3 + 2 pt 1 x 1 x 2 + pt 1 2 -p x 1 x 3 + 2 t 1 2 + 2 x 2 y 2 + (2 t 1 2 + 2)x 3 y 3 = 0 (8a) g 2 := -pt 2 2 + r 0 t 2 2 -r 1 t 2 2 + p + r 0 -r 1 x 0 x 2 + 2 t 2 px 0 x 3 + 2 t 2 px 1 x 2 + (pt 2 2 -r 0 t 2 2 -r 1 t 2 2 -p -r 0 -r 1 )x 1 x 3 + 2 t 2 2 + 2 x 2 y 2 + (2 t 2 2 + 2)x 3 y 3 = 0 (8b) g 3 := -pt 3 2 + p x 0 x 2 + 2 pt 3 x 0 x 3 + 2 pt 3 x 1 x 2 + pt 3 2 -p x 1 x 3 + 2 t 3 2 + 2 x 2 y 2 + (2 t 3 2 + 2)x 3 y 3 = 0 (8c) g 4 := -pt 4 2 -r 0 t 4 2 + r 1 t 4 2 + p -r 0 + r 1 x 0 x 2 + 2 t 4 px 0 x 3 + 2 t 4 px 1 x 2 + (pt 4 2 + r 0 t 4 2 + r 1 t 4 2 -p + r 0 + r 1 )x 1 x 3 + 2 t 4 2 + 2 x 2 y 2 + (2 t 4 2 + 2)x 3 y 3 = 0 (8d) The primary decomposition of ideals I (I) and I (II) shown in Eq. ( 7) leads to three sub-ideals each. Among them, the third sub-ideals I 3(I) and I 3(II) correspond to a mixed mode and are of little importance in this context. The other two sub-ideals I k(I) and I k(II) , k = 1, 2 are as follows: I (I) = I 1(I) ∩ I 2(I) ∩ I 3(I) , where I 1(I) = x 0 , x 1 , x 2 y 2 + x 3 y 3 I 2(I) = x 2 , x 3 , x 0 y 0 + x 1 y 1 (9a) I (II) = I 1(II) ∩ I 2(II) ∩ I 3(II) , where I 1(II) = x 0 , x 1 , x 2 y 2 + x 3 y 3 I 2(II) = x 2 , x 3 , x 0 y 0 + x 1 y 1 (9b) As a result, the first two operation modes of the 4-rRUU PM shown in Fig. 4 are: I 1 =I 1(I) ∪ I 1(II) = x 0 , x 1 , x 2 y 2 + x 3 y 3 (10a) I 2 =I 2(I) ∪ I 2(II) = x 2 , x 3 , x 0 y 0 + x 1 y 1 (10b) Substituting the condition x 0 = x 1 = x 2 y 2 + x 3 y 3 = 0 in the transformation matrix in Eq. 
( 5) yields: F 1 =         1 0 0 0 - 2y 3 x 2 -1 0 0 2(x 2 y 0 -x 3 y 1 ) 0 x 2 2 -x 2 3 2x 2 x 3 2(x 2 y 1 + x 3 y 0 ) 0 2x 2 x 3 -x 2 2 + x 2 3         (11) From the transformation matrix, it can be observed that the operation mode is a 4-dof Schönflies mode in which the translational motions are parametrized by y 0 , y 1 and y 3 and the rotational motion is parametrized by x 2 , x 3 and x 2 2 + x 2 3 = 1. In this operation mode, the platform is upside down with the z P -axis pointing in a direction opposite to the z O -axis. The rotational motion is about x O -axis. Similarly, substituting the condition x 0 = x 1 = x 2 y 2 + x 3 y 3 = 0 in the transformation matrix in Eq. ( 5) yields: F 2 =         1 0 0 0 - 2y 1 x 0 1 0 0 -2(x 0 y 2 -x 1 y 3 ) 0 x 0 2 -x 1 2 -2 x 0 x 1 -2(x 0 y 3 + 2 x 1 y 2 ) 0 2 x 0 x 1 x 0 2 -x 1 2         (12 ) In this case, the operation mode is also a 4-dof Schönflies mode in which the translational motions are parametrized by y 1 , y 2 and y 3 and the rotational motion is parametrized by x 0 , x 1 and x 2 0 + x 2 1 = 1. The platform is in upright position with rotational motion about x O -axis. 2) and (4) as follows: g 1 := -pt 1 2 + p x 0 x 2 + 2 pt 1 x 0 x 3 + 2 pt 1 x 1 x 2 + pt 1 2 -p x 1 x 3 + 2 t 1 2 + 2 x 2 y 2 + (2 t 1 2 + 2)x 3 y 3 = 0 (13a) g 2 := -pt 2 2 + p x 0 x 1 + 2 pt 2 x 0 x 3 -2 pt 2 x 1 x 2 + -pt 2 2 + p x 2 x 3 + 2 t 2 2 + 2 x 1 y 1 + (2 t 2 2 + 2)x 3 y 3 = 0 (13b) g 3 := -pt 3 2 + p x 0 x 2 + 2 pt 3 x 0 x 3 + 2 pt 3 x 1 x 2 + pt 3 2 -p x 1 x 3 + 2 t 3 2 + 2 x 2 y 2 + (2 t 3 2 + 2)x 3 y 3 = 0 (13c) g 4 := -pt 4 2 + p x 0 x 1 + 2 pt 4 x 0 x 3 -2 pt 4 x 1 x 2 + -pt 4 2 + p x 2 x 3 + 2 t 4 2 + 2 x 1 y 1 + (2 t 4 2 + 2)x 3 y 3 = 0 (13d) The primary decomposition of ideals I (I) and I (II) shown in Eq. ( 7) leads to three sub-ideals each, of which the two sub-ideals I k(I) and I k(II) , k = 1, 2 are as follows: I (I) = I 1(I) ∩ I 2(I) ∩ I 3(I) , where I 1(I) = x 0 , x 1 , x 2 y 2 + x 3 y 3 I 2(I) = x 2 , x 3 , x 0 y 0 + x 1 y 1 (14a) I (II) = I 1(II) ∩ I 2(II) ∩ I 3(II) , where I 1(II) = x 0 , x 2 , x 1 y 1 + x 3 y 3 I 2(II) = x 1 , x 3 , x 0 y 0 + x 2 y 2 (14b) As a result, the first two operation modes of the 4-RUU PM are: I 1 =I 1(I) ∪ I 1(II) = x 0 , x 1 , x 2 , y 3 (15a) I 2 =I 2(I) ∪ I 2(II) = x 1 , x 2 , x 3 , y 0 (15b) Substituting the condition x 0 = x 1 = x 2 = y 3 = 0 in the transformation matrix in Eq. ( 5) yields: F 1 =            1 0 0 0 2y 2 x 3 -1 0 0 - 2y 1 x 3 0 -1 0 2y 0 x 3 0 0 1            (16) From the transformation matrix, it can be deduced that the operation mode is a 3-dof pure translational mode parametrized by y 0 , y 1 and y 2 when x 3 = 1. In this operation mode, the platform is upside down with the z P -axis pointing downwards. Similarly, substituting the condition x 1 = x 2 = x 3 = y 0 = 0 in the transformation matrix in Eq. ( 5) yields: F 2 =            1 0 0 0 - 2y 1 x 0 1 0 0 - 2y 2 x 0 0 1 0 - 2y 3 x 0 0 0 1            (17) In this case, the operation mode is also a 3-dof translational mode parametrized by y 1 , y 2 and y 3 when x 0 = 1. Since the rotation matrix is identity, the platform is in upright position with z P -axis pointing upwards. Fig. 6: A dual reconfigurable 4-rRUU PM with vertical base revolute joint axes 3) Vertical base and platform revolute joint axes: The double-Hooke's joint allows a planar transmission and hence the 4-rRUU PM can have any orientation of the base revolute joints such that β i ∈ [0, π]. 
Additionally, with the help of an L-fixture, it is possible to have a vertical orientation of the base revolute joint axes as shown in Fig. 6. The reconfiguration analysis of this mechanism already exists in the literature [START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF], where it was shown to have three operation modes. The first operation mode is a 4-dof Schönflies mode in which the platform is upside down and the rotational axis is parallel to the z O -axis. The second operation mode is a 4-dof Schönflies mode with the rotational axis parallel to the z O -axis, but in this case, the posture of the platform is upright. The third operation mode is a 2-dof coupled motion mode and is less relevant from a practical viewpoint. IV. CONCLUSIONS AND FUTURE WORK A dual reconfigurable 4-rRUU PM consisting of a reconfigurable revolute joint based on the double-Hooke's joint linkage was presented in this paper. It was shown how a double-Hooke's joint linkage can be exploited to impart architectural reconfigurability to a 4-RUU PM. The resulting dual reconfigurable 4-rRUU PM was shown to exhibit at least the following operation modes: a pure translational operation mode and Schönflies motion modes with different axes of rotation depending on the orientation of the base revolute joint axes. As part of the future work, the operation modes will be determined as a function of the angle β, which will assist in recognizing all possible platform motion types of the 4-rRUU PM for β ∈ [0, π]. Furthermore, a detailed design of the 4-rRUU PM will be performed in order to construct a functional prototype exhibiting different types of reconfigurability. It should be noted that in the prototype, the orientation of the revolute joint axes in the moving platform will have to be changed manually, and the choice of a joint to have all planar orientations is still an open issue. Fig. 1: A 4-rRUU parallel manipulator. Fig. 4: A 4-rRUU PM with horizontal and intersecting base revolute joint axes. Fig. 5: A 4-RUU PM with horizontal and parallel base revolute joint axes (case 2: β 1 = β 3 = π/2, β 2 = β 4 = 0, i.e., w 1 = w 3 = 1 and w 2 = w 4 = 0, for which the constraint equations (13) are derived from Eqs. (2) and (4)). Abhilash Nayak is with École Centrale de Nantes, Laboratoire des Sciences du Numérique de Nantes (LS2N), 1 rue de la Noë, UMR CNRS 6004, 44321 Nantes, France abhilash.nayak@ls2n.fr Stéphane Caro and Philippe Wenger are with CNRS, Laboratoire des Sciences du Numérique de Nantes (LS2N), École Centrale de Nantes, 1 rue de la Noë, UMR CNRS 6004, 44321 Nantes, France ACKNOWLEDGMENT This work was conducted with the support of both Centrale Nantes and the French National Research Agency (ANR project "Kapamat" #ANR-14-CE34-0008-01). We would also like to show our gratitude to Rakesh Shantappa, a former Master's student at Centrale Nantes, for his help with the CAD models.
20,388
[ "1307880", "10659", "16879" ]
[ "473973", "481388", "473973", "441569", "473973" ]
01757577
en
[ "info" ]
2024/03/05 22:32:10
2015
https://inria.hal.science/hal-01757577/file/370579_1_En_15_Chapter.pdf
Renan Sales Barros Jordi Borst Steven Kleynenberg Céline Badr Rama-Rao Ganji Hubrecht De Bliek Landry-Stéphane Zeng-Eyindanga Henk Van Den Brink Charles Majoie Henk Marquering Sílvia Delgado Olabarriaga Remote Collaboration, Decision Support, and On-Demand Medical Image Analysis for Acute Stroke Care Keywords: Acute Care, Cloud Computing, Decision Support, High Performance Computing, Medical Image Analysis, Remote Collaboration, Stroke, Telemedicine Introduction Acute ischemic stroke is the leading cause of disability and the fourth cause of death [START_REF] Go | Heart disease and stroke statistics -2013 update: a report from the American Heart Association[END_REF]. In acute ischemic stroke, a blood clot obstructs blood flow in the brain, causing part of the brain to die due to the lack of blood supply. The amount of brain damage and the patient outcome are highly related to the duration of the lack of blood flow ("time is brain"). Therefore, fast diagnosis, decision making, and treatment are crucial in acute stroke management. Medical data of a stroke patient is collected during the transport by ambulance to the hospital (e.g. vital signs, patient history, and medication). On arrival, various types of image data are acquired following protocols that involve opinions and decisions from various medical experts. Sometimes, a patient needs to be transferred to a specialized hospital and, in this case, it is important that all the data collected in the ambulance and at the referring hospital is available to the caregivers that will continue the treatment. Often, various medical specialists need to collaborate based on available information for determining the correct diagnosis and choosing the best treatment. Usually, this collaboration is based on tools that are not connected to each other and, because of that, they may not deliver the necessary information rapidly enough. In addition to these challenges, the amount of patient medical data is growing fast [START_REF] Hallett | Cloud-based Healthcare: Towards a SLA Compliant Network Aware Solution for Medical Image Processing[END_REF]. This fast increase is especially observed in radiological image data, which is also a consequence of new medical imaging technologies [START_REF] Alonso-Calvo | Cloud computing service for managing large medical image data-sets using balanced collaborative agents[END_REF][START_REF] Shini | Cloud based medical image exchange-security challenges[END_REF]. The management, sharing, and processing of medical image data are a great challenge for healthcare providers [START_REF] Alonso-Calvo | Cloud computing service for managing large medical image data-sets using balanced collaborative agents[END_REF][START_REF] Shini | Cloud based medical image exchange-security challenges[END_REF] and can be greatly improved by the use of cloud technologies [START_REF] Kagadis | Cloud computing in medical imaging[END_REF]. Cloud technologies also enable collaboration and data exchange between medical experts in a scalable, fast, and cost-effective way [START_REF] Kagadis | Cloud computing in medical imaging[END_REF]. Mobile devices, remote collaboration tools, and on-demand computing models and data analysis tools supported by cloud technologies may play an important role in optimizing stroke treatment and, consequently, improving the outcome of patients suffering from stroke.
In this paper, we present a cloud-based platform for Medical Distributed Utilization of Services & Applications (MEDUSA). This platform aims at improving current acute care settings by allowing fast medical data exchange, advanced processing of medical image data, automated decision support, and remote collaboration between physicians through a secure responsive virtual space. We discuss a case study implemented using the MEDUSA platform for supporting the treatment of acute stroke patients, presenting the technical details of the prototype implementation and commenting on its initial evaluation 2 Related Work The development of cloud-based platforms for collaboration and processing of medical data is a challenging task. Many authors [START_REF] Shini | Cloud based medical image exchange-security challenges[END_REF][START_REF] Kagadis | Cloud computing in medical imaging[END_REF][START_REF] Jeyabalaraja | Cloud Computing in Medical Diagnosis for improving Health Care Environment[END_REF][START_REF] Pino | A Survey of Cloud Computing Architecture and Applications in Health[END_REF] put forward that these platforms hold the potential to define the future of healthcare services. Also, the analysis of medical data can be an important way to improve quality and efficiency in healthcare [START_REF] Jee | Potentiality of big data in the medical sector: focus on how to reshape the healthcare system[END_REF][START_REF] Murdoch | The inevitable application of big data to health care[END_REF]. The work presented in [START_REF] Kanagaraj | Proposal of an open-source cloud computing system for exchanging medical images of a hospital information system[END_REF][START_REF] Yang | Implementation of a medical image file accessing system on cloud computing[END_REF] focuses on the development of a cloud-based solution aimed at only the storage and sharing of medical data. In other words, they propose solutions based on cloud infrastructures to facilitate medical image data exchange between hospitals, imaging centers, and physicians. A similar solution is presented in [START_REF] Koufi | Ubiquitous access to cloud emergency medical services[END_REF], however focusing on medical data sharing during emergency situations. A cloudbased system is presented in [START_REF] Zhuang | Efficient and robust large medical image retrieval in mobile cloud computing environment[END_REF] for storage of medical data with an additional functionality that enables content-based retrieval of medical images. Still focusing on cloud-based data storage and sharing, [START_REF] Hua | A Cloud Computing Based Collaborative Service Pattern of Medical Association for Stroke Prevention and Treatment[END_REF] presents a solution to help managing medical resources for the prevention and treatment of chronic stroke patients. In addition to storage and sharing, some studies also include the possibility of using the cloud infrastructure for processing of medical data. A simple cloud-based application is presented in [START_REF] Sharieh | Using cloud computing for medical applications[END_REF] to monitor oxygenated hemoglobin and deoxygenated hemoglobin concentration changes in different tissues. Cloud computing is also used in [START_REF] Parsonson | A cloud computing medical image analysis and collaboration platform[END_REF] not only to support data storage and sharing, but also to visualize and render medical image data. 
In [START_REF] Dorn | A cloud-deployed 3D medical imaging system with dynamically optimized scalability and cloud costs[END_REF] the authors also propose a cloud application for rendering of 3D medical imaging data. This application additionally manages the cloud deployment by considering scalability, operational cost, and network quality. Complete cloud-based systems for medical image analysis are presented in [START_REF] Chiang | Bulding a cloud service for medical image processing based on service-orient architecture[END_REF][START_REF] Huang | Medical information integration based cloud computing[END_REF][START_REF] Ojog | A Cloud Scalable Platform for DICOM Image Analysis as a Tool for Remote Medical Support[END_REF]. However, in these systems, image upload and download is manually performed by the user, while the system focuses on the remote processing, storage, and sharing of medical image data. The MEDUSA platform not only provides cloud-based storage, sharing, and processing of medical image data, but also real-time communication between medical experts, real-time collaborative interaction of the medical experts with the medical data, and a real-time decision support system that continuously processes patient data and displays relevant notifications about the patient condition. The MEDUSA platform also includes a cloud management layer that coordinates the use of resources in the cloud infrastructure. Other studies also present some cloud management features. In [START_REF] Ahn | Autonomic computing architecture for real-time medical application running on virtual private cloud infrastructures[END_REF] the authors propose a cloud architecture that reserves network and computing resources to avoid problems regarding load-balancing mechanisms of cloud infrastructures and to reduce the processing delays for the medical applications. Also, [START_REF] Hallett | Cloud-based Healthcare: Towards a SLA Compliant Network Aware Solution for Medical Image Processing[END_REF] proposes an algorithm to optimize the organization of medical image data and associated processing algorithms in cloud computing nodes to increase the computing performance. Finally, [START_REF] Alonso-Calvo | Cloud computing service for managing large medical image data-sets using balanced collaborative agents[END_REF] presents a cloud-based multi-agent system for scalable management of large collections of medical image data. The project presented in [START_REF] Holtmann | Medical Opportunities by Mobile IT Usage-A Case Study in the Stroke Chain of Survival[END_REF] tries to speed up current stroke care by integrating and sharing data from stroke patients using mobile networks. In this scenario, a hospital can, for instance, be prepared with the right resources before the arrival of the patient. This project also includes decision support, which suggests a predefined path through the emergency procedures according to the structure of mandatory and other supplementary healthcare protocols. However, differently from MEDUSA, this project does not include any image processing based feature. Acute Stroke Care Currently, treatment decision of stroke patients is increasingly driven by advanced imaging techniques. These imaging techniques consist of non-contrast computed tomography (ncCT), computed tomography angiography (CTA), and computed tomography perfusion (CTP). Because of the extensive usage of imaging techniques, it is common to produce gigabytes of image data per patient. 
The primary treatment for patients with acute ischemic stroke is intravenous administration of alteplase (thrombolysis). Patients who are not eligible for treatment with alteplase or do not respond to the treatment can be treated by mechanical removal of the blood clot via the artery (thrombectomy). Thrombectomy is only available in specialized hospitals and often a patient must be transferred for treatment. This transfer is arranged via telephone, and imaging data created in the initial hospital is not available for the caregivers in the specialized hospital until the patient and imaging data arrive via the ambulance. It regularly happens that the imaging data was wrongly interpreted in the initial hospital and that the patient is not eligible for thrombectomy. Also, new imaging acquisitions often have to be redone due to broken DVDs, wrong data, or insufficient quality. These problems result in futile transfers and loss of valuable time. MEDUSA Platform The MEDUSA platform was designed to support remote collaboration and high-performance processing of medical data for multiple healthcare scenarios. The platform is accessible to end users through the MEDUSA Collaboration Framework (MCF), which is a web application that is compatible with any web browser that supports HTML5. The MCF is a special type of MEDUSA application that provides users with an entry point to access other MEDUSA applications. A cloud management layer controls the deployment and execution of all MEDUSA applications in one or more cloud providers. Figure 1 illustrates the architectural design of the MEDUSA platform. MEDUSA Cloud Applications The MEDUSA platform has a number of cloud applications that are available in all healthcare scenarios: Audit Trail, which reports the events generated by the other MEDUSA applications; User Manager, which allows assigning roles to users and defining which MEDUSA applications they can use; and Video Call, which allows communication between users of the MEDUSA platform. Fig. 1: Architectural design of the MEDUSA platform (cloud management layer; MEDUSA cloud applications: MCF, User Manager, Video Call, Audit Trail; cloud provider). The MEDUSA applications are started as part of a MEDUSA session. Multiple users in a session can interact with these applications, and these interactions are visible to all the users in the session. The handling of multiple user interactions is done by each MEDUSA application. The applications in the MEDUSA platform can be web applications or regular desktop applications. The desktop applications are integrated in the MEDUSA platform through a virtualization server that uses the technologies described in [START_REF] Joveski | Semantic multimedia remote display for mobile thin clients[END_REF] and [START_REF] Joveski | MPEG-4 solutions for virtualizing RDP-based applications[END_REF]. The multi-user interaction of the desktop applications is handled by the virtualization server. Cloud Provider The MEDUSA applications can be deployed in different cloud providers. Currently, these applications are being deployed in the High Performance Real-time Cloud for Computing (HiPeRT-Cloud) of Bull. The HiPeRT-Cloud is mainly designed for real-time computationally-intensive workloads. This solution is fully compatible with the Cloud Computing Reference Architecture of the National Institute of Standards and Technology (NIST) and provides infrastructure services under any cloud broker solution.
The HiPeRT-Cloud is used in the MEDUSA platform because it provides solutions for handling complex applications in the field of real-time computational and data-intensive tasks in the cloud. Cloud Management Layer In order to take advantage of the on-demand, flexible, high-performance, and cost-effective options that cloud providers can offer, the cloud management layer, implemented by Prologue, manages the cloud deployment in the MEDUSA platform. This layer orchestrates the allocation and release of resources on the cloud provider's infrastructure. It also oversees the lifecycle of the deployed resources, ensures their availability and scalability, and links the desktop applications from the virtualization server back to the MCF. The cloud management layer is designed according to the Service-Oriented Architecture model and its functionalities are accessible through a Representational State Transfer Application Programming Interface (REST API). The cloud management layer also incorporates a monitoring service that operates by directly accessing the deployed virtual machines (VMs). The technology behind the cloud management layer is aligned with the NIST architecture and based on the Open Cloud Computing Interface specifications. In the MEDUSA context, technical requirements for computing, storage, network, and security resources have been identified for each MEDUSA application to be deployed. All requirements are then translated into machine-readable code that is used to provision the cloud resources. The components of the MEDUSA platform are hosted on the cloud through a security-aware, need-based provisioning process. By supporting on-demand hybrid and multi-cloud deployments, as well as monitoring, load balancing, and auto-scaling services through an agent embedded in each VM, the cloud management layer thus ensures a high resilience of the MEDUSA platform. Security The security of the MEDUSA platform is currently mainly based on the use of digital certificates, which are used to authenticate MEDUSA applications (VMs), to secure the data exchanges through the network, and to provide strong authentication of MEDUSA users. The VMs containing the applications are deployed dynamically, and thus server certificates need to be created dynamically, during the deployment. A web service was developed to provide dynamic generation of server certificates for the different VMs in the MEDUSA platform. These server certificates must be created during the deployment of the VMs and there must be one certificate per application and VM (identified by the IP address). Regarding user authentication, an authentication module is called when a user opens a MEDUSA session. This module authenticates a user by checking the provided credentials against the user management component, which has access to a special internal directory containing the certificates used for strong authentication of MEDUSA users. The MEDUSA platform also uses robust image watermarking and fingerprinting methods to prevent and detect unauthorized modification and leaking of medical images by authorized users. However, due to legal regulations, an important requirement when dealing with medical images is the capability of reconstructing the original image data. Because of this, reversible or semantic-sensitive techniques for watermarking and fingerprinting can be used in the MEDUSA platform. These techniques enable the complete recovery of the original image data, or at least of the regions of these images that are relevant for the user or application.
These techniques enable to completely recover the original image data or at least the recovery of the regions of these images that are relevant for the user or application. MEDUSA Stroke Prototype The MEDUSA platform was designed to support various medical scenarios. Here, we focus on a prototype for supporting acute stroke care. The MEDUSA Stroke Prototype (MSP) is built by combining the default MEDUSA applications with three applications specifically configured to support the treatment of stroke patients: Advanced Medical Image Processing, Decision Support System, and 3D Segmentation Renderer. All the applications of the MSP are executed in VMs running on the HiPeRT-Cloud. The cloud management layer is in charge of the deployment of these VMs. Advanced Medical Image Processing For supporting the assessment of the severity of a stroke, several medical image processing algorithms (MIPAs) have been developed. These algorithms perform quantitative analysis of the medical image data and the result of these analyses can be used to support the treatment decisions. The output of these algorithms are, for example, the segmentation of a hemorrhage in the brain [START_REF] Boers | Automatic Quantification of Subarachnoid Hemorrhage on Noncontrast CT[END_REF], the segmentation of a blood clot [START_REF] Santos | Development and validation of intracranial thrombus segmentation on CT angiography in patients with acute ischemic stroke[END_REF], and the segmentation of the infarcted brain tissue [START_REF] Boers | Automated cerebral infarct volume measurement in follow-up noncontrast CT scans of patients with acute ischemic stroke[END_REF]. The MIPAs are linked together into processing pipelines with well-defined input, output, and policies that control their execution. The execution of these pipelines is automatically orchestrated to deliver the lowest execution time based on a set of optimization strategies (e.g. task parallelism, data parallelism, and GPU computing). The MIPAs are implemented as plugins for the IntelliSpace Discovery (ISD) platform, an enterprise solution for research, developed by Philips Healthcare. Figure 2 shows the output of the plugin for infarct volume calculation in the ISD. The collection of MIPAs specially developed to support acute stroke care that are included in the ISD constitutes the Advanced Medical Image Processing application of the MSP. The ISD is a Windows desktop application developed by using the .NET Framework. The development of the MIPAs is also based in the .NET Framework. For GPUbased computations, OpenCL 1.1 was used. OpenCL is a framework for the development and execution of programs across platforms consisting of different types of processors such as CPUs, GPUs, etc. OpenCL.NET was used to integrate OpenCL with the .NET. Framework. The data generated by the MIPAs are exported to the DSS by using JavaScript Object Notation (JSON) files through WebSockets. (Anonymized) Patient information is sent to the MIPAs by using the tags of the medical image data used as input. The information about the current session is directly sent to the ISD and forwarded to the MIPAs. Decision Support System The Decision Support System (DSS) by Sopheon provides real-time process support to medical professionals collaborating on the stroke case. The DSS is rule-based: the rules specify the conditions under which actions are to be advised (delivered as notifications). 
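The rule-based principle can be illustrated with a minimal sketch. The rule below encodes the infarct-volume example discussed in the following paragraph (a volume larger than 70 milliliters associated with a poor outcome); it is written in Python purely for illustration, whereas the actual DSS runs its own rule set on Node.js.

# Minimal rule-engine sketch: each rule couples a condition with a notification text.
RULES = [
    {
        "name": "large_infarct_volume",
        "condition": lambda data: data.get("infarct_volume_ml", 0) > 70,
        "notification": "Infarct volume above 70 ml: associated with a poor outcome.",
    },
]

def evaluate(data):
    # Return the notifications whose conditions hold for the incoming sensor/MIPA data.
    return [rule["notification"] for rule in RULES if rule["condition"](data)]

# A MIPA reports an infarct volume of 80 ml: the poor-outcome notification is raised.
print(evaluate({"infarct_volume_ml": 80}))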
The Decision Support rules are part of a medical protocol and thus defined and approved by medical professionals. In the MSP, the DSS runs a set of rules specifically designed for dealing with stroke patients. It gathers real-time input from vital sign sensors and MIPAs. For instance, a rule could state that an infarct volume larger than 70 milliliters is associated with a poor outcome for the patient. When the DSS detects an infarct volume value of e.g. 80 milliliters, it will display the notification associated with this condition. DSS also selects relevant information from the data generated by the MIPAs and forwards it to the audit trail and to the 3D Segmentation Renderer. The DSS runs on Node.js, which is a platform built on Google Chrome's JavaScript runtime. The DSS is deployed on Fedora, which is an operating system based on the Linux kernel. 3D Segmentation Renderer The 3D Segmentation Renderer by Sopheon is responsible for displaying 3D segmentations generated by the MIPAs. This application was developed by using the WebGL library, which enables to render 3D graphics in the browser without installing additional software. Figure 3 shows the GUI of this application rendering the segmentation of brain tissue (in green and blue) and the segmentation of the infarcted region (in red). Initial Evaluation As this is an on-going project, the discussion presented below is based upon an evaluation of the first fully-integrated prototype. The MSP integrates very heterogeneous applications, which run on different operational systems (Windows, Linux) and use different development technologies (Java, OpenCL, C#, C++). These applications are seamlessly available for the user from a single interface. Also, the deployment of the applications is transparently handled by the platform. This solution is provided in a smooth and transparent manner, hiding the complex details from the user. In the MEDUSA platform, the data and user input need to cross several software layers, which might introduce overheads and decrease performance. However, such poor performance was not noticed in the initial MSP prototype. For instance, the Advanced Medical Image Processing application, which requires data exchange between different architectural components, was almost instantaneously ready for use without noticeable interaction delays. The MSP implements a complete acute stroke use case, which has been demonstrated live in various occasions. Impressions have been collected informally to assess the potential value of this prototype system. Table 1 compares the current stroke care situation in the Netherlands versus the stroke care that could be supported by the MEDUSA platform based on the functionalities currently present in the MSP. Because of its complexity, a detailed and quantitative evaluation of the MEDUSA platform involves several software components and requires a careful planning. The design of this evaluation was already defined in the first year of the project. It is scheduled to take place during the last 6 months of the MEDUSA project (end of 2015). Concerning the image processing functionality, most of the MIPAs included in the MSP are too computationally expensive to be executed on a local machine according to the time constraints of an acute stroke patient. HPC capabilities delivered by cloud computing were crucial to improve the processing of these algorithms from hours to minutes, making them suitable for acute stroke care. 
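The interface between the MIPAs and the DSS described above, JSON messages pushed over WebSockets, can be sketched as follows. The Python websockets library is used only for illustration (the plugins themselves are .NET code), and the endpoint and message fields are assumptions.

import asyncio
import json
import websockets

# Hypothetical DSS WebSocket endpoint (assumption).
DSS_URI = "ws://dss.example.org:8080/mipa-results"

async def send_mipa_result(session_id, patient_id, infarct_volume_ml):
    # Assumed message layout: algorithm name, session/patient context, measurement.
    message = {
        "algorithm": "infarct_volume",
        "session": session_id,
        "patient": patient_id,
        "value_ml": infarct_volume_ml,
    }
    async with websockets.connect(DSS_URI) as ws:
        await ws.send(json.dumps(message))

asyncio.run(send_mipa_result("session-001", "anon-1234", 82.5))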
As a concrete example of this speed-up, the time to run the method used to reduce noise in CTP data was reduced from more than half an hour to less than 2 minutes [START_REF] Barros | High Performance Image Analysis of Compressed Dynamic CT Perfusion Data of Patients with Acute Ischemic Stroke[END_REF].

Discussion and Conclusion
The development of the MEDUSA platform started in 2013. Back then, this kind of cloud-based solution was not common. Today, however, there is a clear trend in the healthcare industry towards the usage of cloud computing, collaboration, and automated analyses of medical data. In addition, when dealing with processing of medical data constrained by the requirements of acute care situations, many benefits can be derived from the use of cloud computing: scalability, a pay-per-use model, high performance computing capabilities, remote access, etc. There are numerous technical challenges in enabling the execution and communication of software components in a platform like MEDUSA. Regarding stroke care, the software components execute on different computing devices (CPUs, GPUs, etc.) and are based on different software platforms (web, Linux, Windows, etc.). In the MEDUSA platform these challenges are tackled using a SOA approach and a virtualized infrastructure. Because of the variety of application types, a uniform way of establishing communication between the MEDUSA applications has not been developed yet. Nevertheless, direct communication between applications based on the exchange of well-defined file formats through WebSockets was demonstrated to be effective, without a negative impact on the development and integration of these applications. The current functionalities present in the MSP have the potential to improve several aspects of current stroke care. The MEDUSA platform is still under development; thus, most of the components that implement security are not yet completely integrated in the platform. Defining and developing the security aspects of a platform like MEDUSA is also a very challenging task, since it is necessary to cope with different legal constraints, in particular across countries. The development process of the MEDUSA platform includes the implementation and validation of the platform in three different hospitals. This validation is currently being carried out in one hospital. Preliminary evaluation of the platform indicates that the solution is promising and has potentially large value for improving the treatment of these patients.

Fig. 1. The MEDUSA platform architecture.
Fig. 2. Plugin for automated measurement of the cerebral infarct volume in the ISD.
Fig. 3. 3D segmentation renderer showing the segmentation of brain tissue (green and blue) and the infarction in the brain (red).

Table 1. Current stroke care vs. stroke care with MEDUSA.
Data availability: current: images are not available; with MEDUSA: images are available online.
Time to access data: current: transport of physical media by car (minutes to hours); with MEDUSA: online data transfer (a few seconds).
Automated quantitative analysis: current: not used yet for clinical decisions; with MEDUSA: results of MIPAs readily available as decision parameters.
Infrastructure: current: static, proprietary, fixed scale; with MEDUSA: pay-per-use, scalable, and portable to different cloud providers.
Remote collaboration: current: by phone; with MEDUSA: by video-conference with access to the patient data.

Acknowledgments. This work has been funded by ITEA2 10004: MEDUSA.
27,825
[ "1030149" ]
[ "120654", "120654", "531502", "447038", "563936", "11436", "531503", "301799", "407824", "120654", "120654", "120654" ]
01757656
en
[ "info" ]
2024/03/05 22:32:10
2018
https://cea.hal.science/cea-01757656/file/main.pdf
Maha Kooli Henri-Pierre Charles Clement Touzet Bastien Giraud Jean-Philippe Noel Smart Instruction Codes for In-Memory Computing Architectures Compatible with Standard SRAM Interfaces This paper presents the computing model for In-Memory Computing architecture based on SRAM memory that embeds computing abilities. This memory concept offers significant performance gains in terms of energy consumption and execution time. To handle the interaction between the memory and the CPU, new memory instruction codes were designed. These instructions are communicated by the CPU to the memory, using standard SRAM buses. This implementation allows (1) to embed In-Memory Computing capabilities on a system without Instruction Set Architecture (ISA) modification, and (2) to finely interlace CPU instructions and in-memory computing instructions. I. INTRODUCTION In-Memory Computing (IMC) represents a new concept of data computation that has been introduced to overcome the von Neumann bottleneck in terms of data transfer rate. This concept aimes to reduce the traffic of the data between the memory and the processor. Thus, it offers significant reduction of energy consumption and execution time compared to the conventional computer system where the computation units (ALU) and the storing elements are separated. Hardware security improvements can also be expected thanks to this system architecture (e.g., side channel attacks, etc). The IMC concept has just started to be the focus of recent research works. The objective of our research works is to focus on different technological layers of an IMC system: silicon design, system architecture, compilation and programming flow. This enables to build a complete IMC system that can be then industrialized. In previous publications, we introduced our novel In-Memory Power Aware CompuTing (IMPACT) system. In [START_REF] Akyel | DRC 2: Dynamically Reconfigurable Computing Circuit based on memory architecture[END_REF], we presented the IMPACT concept based on a SRAM architecture and the possible in-memory arithmetic and logic operations. In [START_REF] Kooli | Software Platform Dedicated for In-Memory Computing Circuit Evaluation[END_REF], we proposed a dedicated software emulation platform to evaluate the IMPACT system performance. The results achieved in these papers show a significant performance improvement of the IMPACT system compared to conventional systems. In the present research work, we focus on a new important step of the design of a complete IMPACT system, in particular the communication protocol between the memory and the Computation Processor Unit (CPU). Fig. 1 presents a comparison of the communication protocol for a conventional system, for a GPU system and for our IMPACT system. In a conventional system based on von Neumann architecture, the traffic between the memory and the CPU is very crowded. Several instruction fetches and data transfers occupy the system buses during the computation (Fig. 1.a). In systems that are integrating accelerators (e.g., GPUs), the computation is performed in parallel, whereas only a single instruction fetch is needed. However, data transfers are still required over the data bus (Fig. 1.b). The traffic of our IMPACT system is completely different from the previous systems. No data transfer over the system buses is required since the computation is performed inside the memory. In addition, only one instruction transfer towards the memory is required (Fig. 1.c). 
Indeed, the IMPACT system presents a new concept that completely changes the memory features by integrating computation abilities inside the memory boundary. Therefore, the usual communication protocol between the memory and CPU is not fully compatible with the specific IMPACT system architecture. Thus, it has to be redefined to manage the new process of instruction executions. In this paper, we push one step further our research works on IMPACT system by: • Introducing a novel communication protocol between the CPU and the memory that is able to manage the transfer of the IMPACT instructions to the memory. • Defining the ISA that corresponds to this protocol. The reminder of this paper is organized as follows. Section II provides a summary of the architecture and the communication protocol used in conventional system. Section III discusses related works. Section IV introduces the IMPACT instruction codes and the communication protocol. In section V, we provide a possible solution to integrate the proposed IMPACT instructions inside an existing processor ISA. Finally, section VI concludes the paper. II. BACKGROUND In most traditional computer architecture, the memory and the CPU are tightly connected. Conventionally, a microprocessor presents a number of electrical connections on its pins dedicated to select an address from the main memory, and another set of pins to read/write the data stored from/or into that location. The buses which connect the CPU and the memory are one of the defining characteristics of a system. These buses need to handle the communication protocol between the memory and the CPU. The buses transfer different types of data between components. In particular, we distinguish, as shown in Fig. 2, three types: • Data bus: It has a bidirectional functionality. It enables the transfer of data that is stored in the memory towards the CPU, or vice versa. • Address bus: It is an unidirectional bus that enables the transfer of addresses from CPU to the memory. When the CPU needs a data, it sends its corresponding memory location via the address bus, the memory then sends back the data via the data bus. When the processor wants to store a data in the memory, it sends the memory location where it will be stored via the address bus, and the data via the data bus. When the program is executed, for each instruction the processor proceeds by the following steps: 1) Fetch the instruction from memory: The CPU transmits the instruction address via the address bus, the memory forwards then the instruction stored in that location via the data bus. 2) Decode the instruction using the decoder: The decoding process allows the CPU to determine which instruction will be performed. It consists in fetching the input operands and the opcode, and moving them to the appropriate registers in the register file of the processor. 3) Access memory (in case of read/write instructions): For 'read' instruction, this step consists in sending a memory address on the address bus and receiving the value on the data bus; The 'write' instruction consists in sending a data with the data bus. Then, this data is copied into a memory address, sent by the address bus. The control bus is used activate the write or read mode. 4) Execute the instruction. 5) Write-back (in case of arithmetic/logic instructions): the ALU performs the computation and write back the result in the corresponding register. III. RELATED WORKS A. 
In-Memory Computing Processing in-Memory (PiM), Logic in-Memory (LiM) and IMC architectures have been widely investigated in the context of integrating processor and memory as close as possible, in order to reduce memory latency and increase the data transfer bandwidth. All these architectures attempt to reduce the physical distance between the processor and the memory. In Fig. 3, we represent the main differences between PiM, LiM and IMC architectures. PiM [START_REF] Gokhale | Processing in memory: The Terasys massively parallel PIM array[END_REF] [4] [START_REF]UPMEM[END_REF] consists in putting the computation unit next to the memory while keeping the two dissociated. It is generally implemented in stand alone memories fabricated with a DRAM process. LiM and IMC architectures are based on embedded memories fabricated with a CMOS process. LiM [START_REF] Matsunaga | MTJ-based nonvolatile logic-in-memory circuit, future prospects and issues[END_REF] enables distributing non-volatile memory elements over a logic-circuit plane. IMC consists in integrating computation units inside the memory boundary, and represents a different concept that completely changes the memory behavior by integrating some in-situ computation functions located either before or after sens amplifiers circuits. As a result, the communication protocol between the memory and the processor has to be redefined. Compared to LiM, IMC enables non-destructive computing in the memory, i.e., the operand data are not lost after computation. Some recent research works start to explore and evaluate the performance of this concept. It has been applied both on volatile memories [START_REF] Jeloka | A 28 nm configurable memory (TCAM/BCAM/SRAM) using push-rule 6T bit cell enabling logic-in-memory[END_REF] [8] [START_REF] Aga | Compute caches[END_REF] and non volatile memories [START_REF] Wang | DW-AES: A Domain-Wall Nanowire-Based AES for High Throughput and Energy-Efficient Data Encryption in Non-Volatile Memory[END_REF] [START_REF] Li | Pinatubo: A processing-in-memory architecture for bulk bitwise operations in emerging non-volatile memories[END_REF]. Most of the existing IMC studies focus on the IMC hardware design. The system buses have never been presented, nor interactions between the CPU and the memory. Moreover, no ISA to implement the IMC system architecture has already been defined. All these points clearly limit the conception of a complete IMC system. In this paper, we focus on the communication protocol between the memory and the CPU for the IMPACT system and we define the ISA. This study is a basic step in the development of a complete IMC system on different technological layers from the hardware to the software. In addition, the IM-PACT system is able to support operations with multi-operand selection. Thus, the classic format of instruction (opcode + two source operand addresses + a destination address) cannot be used due to limitations in the bus size. CPU SRAM bus Additional Logic On-chip Memory CPU Additional Logic Off-chip Memory Logic in Memory B. Communication Protocols in Computing Systems The communication protocol between the memory and CPU has been widely presented and discussed in different works [START_REF] Vahid | Embedded system design: a unified hardware/software introduction[END_REF] [13] [START_REF] Null | The essentials of computer organization and architecture[END_REF]. This protocol is implemented and managed using buses. 
Existing system buses (data bus, address bus, control bus) are generally used to transfer the data or the addresses, but not the instructions. For the IMPACT system, the CPU should communicate the instruction to the memory so that the memory executes/computes this instruction. In the existing computer architecture, the buses are designed to enable only read and write operations, but not arithmetic and logic operations. This paper introduces a new communication protocol that is able to efficiently manage the interaction between the CPU and the IMPACT memory with a full compatibility with the existing buses. IV. IMPACT MEMORY INSTRUCTION CODES In this Section, we define the different IMPACT memory instruction codes that will be transmitted by the processor to the memory via the system buses. The challenge is to make the IMPACT system architecture as close as possible to a conventional system architecture, i.e., we aim to bring the less possible changes to the conventional system implementation in order to facilitate the integration of the IMPACT system to existing system architectures. In addition, it allows to propose a system that is able to interweave the execution of conventional CPU instructions and IMPACT instructions. A. IMPACT Specific Operations IMPACT system is based on the SRAM architecture to perform operations inside the memory macro thanks to an array composed of bitcells with dedicated read ports. The IMPACT system circuitry enables the same logic and arithmetic operations as a basic ALU. It also presents new specific features that are: • Multi-Operand Operations: The IMPACT system circuitry is able to perform logic and memory operations not only on two operands as for conventional systems, but on multiple operands. In fact, this feature is achieved thanks to the multi-row selector, which enables to generate a defined selection pattern (e.g., one line out of four, halftop of the rows, etc). • Long-Word Operations: The IMPACT system circuitry is able to perform arithmetic/logic/memory operations on words whose size can be up to the physical row size of the memory. The operand size is no longer limited by the register size (that is much more smaller that the maximum memory row size). B. IMPACT Instruction Formats Regarding these specific features of the IMPACT operations, we propose two new formats that enable to build the IMPACT memory instruction codes. (a) Multi-Operand Instruction Format: (b) Two-Operand Instruction Format: Opcode Address Mask SP Output SI Opcode Address 1 Address 2 SP Output SI 1) Multi-Operand Instruction Format: The multi-operand format enables to define the structure of the instruction that performs a multi-operand operation. In fact, in conventional system architecture, the instruction size is usually of 32 bits [START_REF]MIPS32 Architecture[END_REF]. Thus, they do not enable to encode all the addresses of the multiple operands. Therefore, we propose to define a pattern that enables to select the lines that store the operand data of the given operation. This pattern is built thanks to a pattern code (defined by both an address and a mask) driving a specific row-selector. To implement this instruction, we propose a multi-operand format, as shown in Fig. 4.a, that encodes: -The opcode of the operation. In Fig. 6, we provide the list of the logic multi-operand operations that the IMPACT system is able to execute. -The row selector address. -The row selector mask. -The output address, where the computation result is stored. 
-A Smart Instruction (SI) bit to inform the memory about the instruction type: an IMPACT or a conventional instruction. -A Select Pattern (SP) bit to enable/disable the pattern construction using the row-selector. In Fig. 5, we provide an example of the operating mode of the IMPACT system when it is executing a logic 'OR' operation with multi-operand. As input, the system take the instruction composants (opcode, pattern code, etc). Based on the bits of the pattern code address and mask, the specific row selector of the IMPACT memory builds a regular pattern to select the multiple memory lines. In this row selector, we create a sort of path (i.e., a tree) filled regularly by '0' and '1' bits. Then, the rule consists in looking after the bits of the mask: if the bit is '1', we select the two branches in the tree, if the bit is '0', only the branch corresponding to the address bit is selected. This method allows then to build regular output patterns. These patterns can then be refined by adding/deleting a specific line. For that, we define specific IMPACT operations ('PatternAdd' and 'PatternSub'). As shown in Fig. 5, the pattern can be also stored in the pattern register in case we require to refine it or to use it in the future. We assume that this refinement process could take some additional clock cycles to build the required pattern, however for certain applications where the pattern is used several times, the gain would be considerable. Once the pattern is build, the last step of the instruction execution consists in selecting the lines in the SRAM memory array that correspond to '1' in the pattern, and performing the operation. The advantage of this format consists in not explicitly encoding all the operand addresses inside the instruction. To the best of our knowledge, there is no computer architecture that defines such instructions using this pattern methodology. 2) Two-Operand Instruction Format: The two-operand instruction format represents the conventional format of instructions with maximum two source addresses. This format is used for the long-word operations. The source address represents the address of the memory row on which the operation will be performed. As shown in Fig. 4.b, the two-operand instruction format encodes: -The opcode of the operation. In Fig. 6, we provide the list of all the operations that the IMPACT system is able to execute. -The addresses of the first and second operand. -The output address, where the computation result is stored. -SI bit to inform the memory about the instruction type: an IMPACT or a conventional instruction. -SP bit to activate/dis-activate the pattern construction using the row-selector. In this format the pattern construction should be disable for all the instructions. C. IMPACT Communication Protocol In mainstream memories, the system buses are used to communicate between the memory and the CPU during the execution of different instructions of the program (that are stored in the code segment of the memory). In the IMPACT system, the instructions are built on the fly during the compilation by the processor respecting the formats defined in Subsection IV-A. Then, they are transferred from the processor to the memory via the standard SRAM buses (data and address buses). The compilation aspect and communication between the program code and the processor are not detailed in the present paper. In this Section, we present the implementation of the IMPACT instructions on the data and the address buses. 
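The address/mask rule of the row selector described in Section IV-B can be restated compactly: a memory row belongs to the pattern when, at every bit position where the mask is '0', the row index matches the pattern-code address, while positions where the mask is '1' accept both branches. The sketch below reproduces this selection logic in software; it illustrates the rule only, not the row-selector circuit, and the example values are arbitrary.

def row_selected(row, address, mask):
    # Bits where the mask is 1 are "don't care"; the remaining bits must match the address.
    return (row & ~mask) == (address & ~mask)

def build_pattern(address, mask, n_rows):
    # Return the 0/1 selection pattern over the memory rows.
    return [1 if row_selected(r, address, mask) else 0 for r in range(n_rows)]

# Example with 8 rows: address = 0b010 and mask = 0b100 select rows 0b010 and 0b110,
# i.e. a regular "one line out of four" style pattern.
print(build_pattern(0b010, 0b100, 8))  # [0, 0, 1, 0, 0, 0, 1, 0]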
For the proposed communication protocol, we consider a data bus of 32-bits size, an address bus of 32-bits size. Then, we propose a specific encoding of the instruction elements over the two buses. As shown in Fig. 7, we make use of the data and the address buses to encode the opcode, the source addresses, the output address, and additional one-bit SP and SI signals. implementation does not change the implementation of conventional system. The communication protocol is able to address both the IMPACT and the SRAM memories. 1) Data Bus: The data bus encodes the opcode on 7 bits. In fact, the IMPACT operations are hierarchically ranked as shown in Fig. 6. Then, the data bus encodes the source addresses, each over 12 bits, leading to a maximum 4096 words of IMPACT memory. In case of two-operand format, we encode successively the two operand addresses. In case of multi-operand format, we encode the address and the mask of the pattern code. The last bit of the data bus is occupied by the SP signal as described in Subsection IV-B. 2) Address Bus: The address bus encodes the output address over 12 bits in the Least Significant Bit (LSB). It also reserves one bit, in the Most Significant Bit (MSB) for the smart instruction signal in order to inform the memory about the arriving of an IMPACT instruction. V. IMPLEMENTATION AT ISA LEVEL We propose, in this Section, a possible solution to built the proposed IMPACT instruction codes from the instruction set architecture (ISA) of a given processor. The solution consists in using an existing processor ISA without integrating new instructions or opcodes. In particular, we use the store instruction to monitor the IMPACT operations (arithmetic/logic). The IM-PACT opcode, as well as its operands, will be encoded inside the operands (i.e., registers) of the conventional instruction. In Fig. 8, we provide, for an IMPACT addition operation, the compilation process to create the corresponding IMPACT assembly code. First, the compiler generates instructions that encodes the addresses of the IMPACT opcode as well as the operands inside specific registers. These instructions will be transferred via the system buses respecting the conventional communication protocol. Then, the compiler generates the store instruction with the previously assembled specific registers. This store instruction is then transferred to the memory through the system buses respecting the IMPACT communication protocol defined in Subsection IV-C. The advantage of this solution consists in its facility to be compatible with the processor ISA (no problem in case of version changes). However, the compilation process will be quite complex since it requires to generate a preliminary list of instructions needed then to generate IMPACT instruction using the store instruction. Further solutions to integrate the proposed IMPACT memory instruction code in the ISA are possible. However, they require to change the processor ISA by integrating one or more new instructions (e.g., 'IMPACTAdd'). The compilation process will be then simpler since it does not require to generate the preliminary list of instructions. However, this solution could have some problems of compatibility with future ISA versions. VI. CONCLUSION This paper discusses the integration of the In-Memory Computing capabilities into a system composed of a processor and a memory without changing the processor implementation and instruction set. 
This is achieved by inverting the von Neumann model: instead of reading instructions from memory, the CPU communicates the instructions, in certain formats, to the IMPACT memory via the standard SRAM buses. The proposed approach benefits from the large speed-up in terms of execution time and energy consumption offered by the IMPACT system, while also making it easy to interweave conventional CPU instructions and in-memory computing instructions. One main advantage of this approach is to have a similar data layout view on both the CPU and the IMPACT side, whereas other conventional approaches (e.g., GPUs) need to copy data and change the layout at run-time. As future work, we aim to continue characterizing applications at a high level, and to develop the compiler for this system. High-level optimizations of classical programming languages and new programming paradigms are also under investigation.

Fig. 1. Comparison between the communication protocols of (a) a conventional system (von Neumann), (b) a system with GPU accelerators (von Neumann with Single Instruction Multiple Data (SIMD)) and (c) the IMPACT system (non von Neumann).
Fig. 2. Conventional computing system architecture: address, data and control buses connecting the CPU (ALU, decoder, registers, program counter) to the central memory. The control bus is a set of additional signals defining the operating mode, read/write, etc.
Fig. 3. Comparison between IMC, LiM and PiM architectures.
Fig. 4. Description of the IMPACT instruction formats.
Fig. 5. Illustration of a multi-operand instruction (OR) requiring a pattern code.
Fig. 6. List of operations that the IMPACT system is able to execute.
Fig. 7. Example of data and address bus use to encode IMPACT and conventional instructions.
Fig. 8. Implementation of the IMPACT store instruction.
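As a closing illustration of the communication protocol of Section IV-C, the sketch below packs an IMPACT instruction onto the 32-bit data and address buses. Only the field widths follow the proposed protocol (7-bit opcode, two 12-bit operand or pattern-code fields and the SP bit on the data bus; the 12-bit output address in the LSBs and the SI flag in the MSB of the address bus); the exact bit ordering inside the data bus word and the example opcode value are assumptions made for illustration.

def encode_data_bus(opcode, field1, field2, sp):
    # 32-bit data bus word: 7-bit opcode | 12-bit field | 12-bit field | SP bit.
    # field1/field2 are either the two operand addresses or the pattern address and mask.
    assert opcode < 2**7 and field1 < 2**12 and field2 < 2**12 and sp in (0, 1)
    return (opcode << 25) | (field1 << 13) | (field2 << 1) | sp

def encode_address_bus(output_addr, si):
    # 32-bit address bus word: SI flag in the MSB, 12-bit output address in the LSBs.
    assert output_addr < 2**12 and si in (0, 1)
    return (si << 31) | output_addr

# Example: a two-operand IMPACT operation on rows 5 and 9, result stored in row 17.
data_word = encode_data_bus(opcode=0x12, field1=5, field2=9, sp=0)
addr_word = encode_address_bus(output_addr=17, si=1)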
22,776
[ "1021863", "4257", "746991" ]
[ "40214", "255534", "40214", "40214", "40214" ]
01757390
en
[ "chim", "spi" ]
2024/03/05 22:32:10
2009
https://hal.science/hal-01757390/file/Kamar-Arcachon-2009-HAL.pdf
Martial SAUCEAU* Karl Kamar Martial Sauceau email: martial.sauceau@mines-albi.fr Élisabeth Rodier Jacques Fages Elisabeth Rodier Biopolymer foam production using a (SC CO2 est destinée au dépôt INTRODUCTION Polymers are widely used in several areas. However, due to their slow degradation and the predicted exhaustion of the world petroleum reserves, significant environmental problems have arisen. Therefore, it is necessary to replace them with bioplastics that degrade in a short time when exposed to a biologically active environment [START_REF] Bucci | PHB packaging for the storage of food products[END_REF]. Biopolymers like PHAs (polyhydroxyalkanoates) are marketed as the natural substitutes for common polymers, as they are 100% biodegradable polymers. PHAs are polyesters of various HAs which are synthesised by numerous microorganisms as energy reserve materials in the presence of excess carbon source. Poly(3-hydroxybutyrate) (PHB) and its copolymers with hydroxyvalerate (PHB-HV) are the most widely found members of this biopolymer group and were also the first to be discovered, and most widely studied PHA [START_REF] Khanna | Recent advances in microbial poly-hydroxyalkanoates[END_REF]. They possess properties similar to various synthetic thermoplastics like polypropylene and hence can be used alternatively. Specifically, PHB exhibits properties such as melting point, a degree of cristallinity and glass transition temperature, similar to polypropylene (PP). Although, PHB is stiffer and more brittle than PP, the copolymerization of PHB with 3-hydroxyvalerate (PHB-HV) produces copolymers which are less stiff and tougher. That is to say that there is a wide range of applications for these copolymers [START_REF] Gunaratne | Multiple melting behaviour of poly(3-hydroxybutyrate-cohydroxyvalerate) using step scan DSC[END_REF]. The properties of this copolymer depend on the HV content, which determines the polymer crystallinity [START_REF] Peng | Isothermal crystallization of poly(3hydroxybutyrate-co-hydroxyvalerate)[END_REF]. Extrusion is a process converting a raw material into a product of uniform shape and density by forcing it through a die under controlled conditions [START_REF] Rauwendaal | Polymer Extrusion[END_REF]. It has extensively been applied in the plastic and rubber industries, where it is the most important manufacturing process. A particular application concerns the generation of polymeric foams. Polymeric foams are expanded materials with large applications in the packaging, insulating, pharmaceutical and car industries because of their high strength/weight ratio or their controlled release properties. Conventional foams are produced using either chemical or physical blowing agents. Various chemical blowing agents, which are generally low molecular weight organic compounds, are mixed with a polymer matrix and decompose when heated beyond a threshold temperature. This results in the release of a gas, and thus the nucleation of bubbles. This implies however the presence of residues in the porous material and the need for an additional stage to eliminate them. Injection of scCO 2 in extrusion process modifies the rheological properties of the polymer in the barrel of the extruder and scCO 2 acts as a blowing agent during the relaxation when flowing through the die [START_REF] Sauceau | Improvement of extrusion processes using supercritical carbon dioxide[END_REF]. The pressure drop induces a thermodynamic instability in the polymer matrix, generating a large number of bubbles. 
The growth of cells continues until the foam is rigidified (when T<T g ). Moreover, its relatively high solubilisation in the polymer results in extensive expansion at the die. The reduction of viscosity decreases the mechanical constraints and the operating temperature within the extruder. Thus, coupling extrusion and scCO 2 would allow the use of fragile or thermolabile molecules, like pharmaceutical molecules. The absence of residues in the final material is also an advantage for a pharmaceutical application. Our lab has developed a scCO 2 -assisted extrusion process that leads to the manufacturing of microcellular polymeric foams and already elaborated microcellular foams using a biocompatible amorphous polymer [START_REF] Sauceau | Effect of supercritical carbon dioxide on polystyrene extrusion[END_REF][START_REF] Nikitine | Residence time distribution of a pharmaceutical grade polymer/supercritical CO2 melt in a single screw extrusion process[END_REF][START_REF] Nikitine | Controlling the structure of a porous polymer by coupling supercritical CO2 and single screw extrusion process[END_REF]. In this work, this process has been applied to PHB-HV. Foam production of semi-crystalline polymer is less frequent in the literature. Crystallinity hinders the solubility and diffusion of CO 2 into the polymer and leads consequently to less uniform porous structure [START_REF] Doroudiani | Processing and characterization of microcellular foamed high-density polyethylene/isotactic polypropylene blends[END_REF]. Moreover, it has been shown that a large volume expansion ratio could be achieved by freezing the extrudate surface of the polymer melt at a reasonably low temperature [START_REF] Park | Low density microcellular foam processing in extrusion using CO2[END_REF]. Thus, in this work, in order to control and improve the porous structure of the PHB-HV, the influence of melt and die temperatures have been studied. MATERIALS AND METHODS PHB-HV (M w =600 kDa), with a HV content of 13 % and plasticized with 10 % of a copolyester was purchased from Biomer (Germany). Melting temperature was measured at 159°C by DSC (ATG DSC 111, Setaram). The solid density € ρ P , determined by helium pycnometry (Micromeretics, AccuPYC 1330) is about 1216 kg.m -3 . A rheological study at atmospheric pressure has been performed in oscillatory mode (MARS, Thermo Scientific). The polymer viscosity decreases when temperature and shear rate increase, which is a characteristic behaviour of a pseudoplastic fluid (Figure 1). This characterization step helped in choosing the operating conditions to process PHB-HV by extrusion. These conditions have to ensure that the polymer flows well enough through the barrel without being thermally degraded. Figure 2 shows the experimental set up, which has previously been detailed elsewhere [START_REF] Nikitine | Residence time distribution of a pharmaceutical grade polymer/supercritical CO2 melt in a single screw extrusion process[END_REF][START_REF] Nikitine | Controlling the structure of a porous polymer by coupling supercritical CO2 and single screw extrusion process[END_REF]. The single-screw extruder has a 30 mm-screw diameter and a length to diameter ratio (L/D) of 35 (Rheoscam, SCAMEX). A great L/D ratio generally indicates a good capacity of mixing and melting but important energy consumption. The screw is divided into three parts. The first one has a length to diameter ratio of 20 and the two others have a length to diameter ratio of 7.5. 
Between each part, a restriction ring has been fitted in order to obtain a dynamic gastight seal which prevents scCO2 from backflowing. The first, conical part allows the transport of the solid polymer and then its melting and plasticizing. The screw then has a cylindrical geometry from the first gastight ring to the die. This die has a diameter of 1 mm and a length of 11.5 mm. The temperature inside the barrel is regulated at five locations: T_a and T_b before the CO2 injection, T_c and T_d after the injection and T_e in the die.

Figure 1: Evolution of viscosity with pulsation

There are three pressure and two temperature sensors: P_1 after the CO2 injector, P_2 and T_1 before the second gastight ring and P_3 and T_2 by the die. This allows measuring the temperature and the pressure of the polymer inside the extruder. Errors associated with the pressure and temperature sensors were about 0.2 MPa and 3.3 °C respectively. CO2 (N45, Air Liquide) is pumped from a cylinder by a syringe pump (260D, ISCO) and then introduced at constant volumetric flow rate. The pressure in the CO2 pump is kept slightly higher than the pressure P_1. The CO2 injector is positioned at a length to diameter ratio of 20 from the feed hopper. It corresponds to the beginning of the metering zone, that is to say the part where the channel depth is constant and equal to 1.5 mm. The pressure, the temperature and the volumetric CO2 flow rate are measured within the syringe pump. The CO2 density, obtained on the NIST website from the Span and Wagner equation of state [START_REF] Span R | A New Equation of State for Carbon Dioxide Covering the Fluid Region from the Triple-Point Temperature to 1100 K at Pressures up to 800 MPa[END_REF], is used to calculate the mass flow rate and thus the CO2 mass fraction $w_{CO_2}$. For each experiment, only the temperature of the metering zone T_d and of the die T_e were changed. The three other temperatures T_a, T_b and T_c were kept constant at 160 °C. The CO2 mass fraction $w_{CO_2}$ was also kept constant at 1 %, which is much less than the solubility [START_REF] Cravo | Solubility of carbon dioxide in a natural biodegradable polymer: determination of diffusion coefficientsJ[END_REF]. Three series of experiments were carried out: T_d was fixed at 140 °C, 135 °C and 130 °C respectively and T_e varied from 140 down to 110 °C. At lower values of T_e, the extruder stopped because the pressure P_3 exceeded the established alarm value. Once steady-state conditions were reached, extrudates were collected and water-cooled at ambient temperature in order to freeze the extrudate structure. Several samples were collected during each experiment in order to check the homogeneity of the extrudates. To determine the apparent density $\rho_{app}$, samples were weighed and their volumes were evaluated by measuring their diameter and length with a vernier caliper (Facom). To obtain this apparent density with good enough precision, the mean of 6 measurements was taken. Porosity, defined as the ratio of the pore volume to the total volume, is calculated by equation (1):

$\varepsilon = 1 - \dfrac{\rho_{app}}{\rho_P}$   (1)

where $\rho_P$ is the PHB-HV density and $\rho_{app}$ the apparent density of the extrudates. The theoretical maximum porosity $\varepsilon_{max}$ would be obtained if all the dissolved CO2 became gaseous inside the extrudate at ambient conditions and thus created porosity. It can be calculated by the following equation:

$\varepsilon_{max} = \dfrac{w_{CO_2}\,\rho_P}{w_{CO_2}\,\rho_P + \rho_{CO_2}(atm)}$   (2)

where $w_{CO_2}$ is the CO2 mass fraction and $\rho_{CO_2}(atm)$ is the CO2 density at ambient conditions.
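A quick numerical check of equations (1) and (2) can be made with the values given in this section. The CO2 density at ambient conditions is not stated in the text, so the value of roughly 1.8 kg.m-3 used below is an assumption, as is the example apparent density.

rho_P = 1216.0       # PHB-HV solid density, kg/m^3 (helium pycnometry)
w_CO2 = 0.01         # CO2 mass fraction kept at 1 %
rho_CO2_atm = 1.8    # assumed CO2 density at ambient conditions, kg/m^3

# Equation (2): theoretical maximum porosity.
eps_max = (w_CO2 * rho_P) / (w_CO2 * rho_P + rho_CO2_atm)
print(f"theoretical maximum porosity = {eps_max:.0%}")   # about 87 %, i.e. close to 90 %

# Equation (1): porosity of one extrudate from a hypothetical apparent density.
rho_app = 365.0      # example apparent density, kg/m^3 (made-up measurement)
eps = 1.0 - rho_app / rho_P
print(f"porosity = {eps:.0%}")                            # about 70 %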
To complete the characterization of the porous structure, samples were examined by scanning electron microscopy (ESEM, FEG, Philips).

RESULTS
The porosity results are presented in Figure 3. It is noticeable that, for all experiments, the obtained porosity is lower than the theoretical maximum porosity $\varepsilon_{max}$, which is estimated at about 90 %. The highest porosity, obtained at the lowest T_d and T_e (130 °C and 110 °C respectively), is about 70 %. The porosity increases with decreasing temperature T_d. The evolution of porosity with the temperature T_e depends on the value of T_d: at T_d equal to 140 °C, the porosity is constant, whereas at T_d lower than 140 °C, the porosity decreases with increasing die temperature. It was previously observed for polystyrene that, at a reasonably low polymer melt temperature, there exists an optimal die temperature for which large volume expansion ratios are achieved by freezing the extrudate surface [START_REF] Park | Low density microcellular foam processing in extrusion using CO2[END_REF]. This effect is explained by the fact that more gas is retained in the foam at lower temperature and used for cell nucleation and growth. However, when the nozzle temperature was further decreased, the volume expansion ratio decreased because of the increased stiffness of the frozen skin layer. Our experiments might be explained in the same way: T_e would thus still be too high to obtain a higher porosity.

Figure 2 presents SEM pictures at two different values of T_d. It can be observed that the pores are large (more than 200 µm) and that they become fewer and larger when T_d decreases. As porosity increases, it thus seems that growth phenomena occur at lower temperature. This evolution is opposite to previous results in which coalescence and growth phenomena occurred when temperature increased and led to larger porosity [START_REF] Sauceau | Effect of supercritical carbon dioxide on polystyrene extrusion[END_REF][START_REF] Park | Low density microcellular foam processing in extrusion using CO2[END_REF]. Indeed, it was believed that the polymer melt should be cooled substantially in order to increase melt strength and thus prevent cell coalescence.

Figure 2: Experimental device
Figure 1: Porosity evolution
Figure 2: SEM pictures (a) T_d = 130 °C (b) T_d = 140 °C
13,606
[ "19511", "789292", "3639" ]
[ "242220", "242220", "242220", "242220" ]
01757793
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01757793/file/ICRA18_2547_Rasheed_Long_Marquez_Caro_HAL.pdf
Tahir Rasheed email: tahir.rasheed@ls2n.fr Philip Long email: p.long@northeastern.edu David Marquez-Gamez email: david.marquez-gamez@irt-jules-verne.fr Stéphane Caro email: stephane.caro@ls2n.fr Available Wrench Set for Planar Mobile Cable-Driven Parallel Robots published or not. The documents may come L'archive ouverte pluridisciplinaire Available Wrench Set for Planar Mobile Cable-Driven Parallel Robots Tahir Rasheed 1 , Philip Long 2 , David Marquez-Gamez 3 and Stéphane Caro 4 , Member, IEEE Abstract-Cable-Driven Parallel Robots (CDPRs) have several advantages over conventional parallel manipulators most notably a large workspace. CDPRs whose workspace can be further increased by modification of the geometric architecture are known as Reconfigurable Cable Driven Parallel Robots(RCDPRs). A novel concept of RCDPRs, known as Mobile CDPR (MCDPR) that consists of a CDPR carried by multiple mobile bases, is studied in this paper. The system is capable of autonomously navigating to a desired location then deploying to a standard CDPR. In this paper, we analyze the Static equilibrium (SE) of the mobile bases when the system is fully deployed. In contrast to classical CDPRs we show that the workspace of the MCDPR depends, not only on the tension limits, but on the SE constraints as well. We demonstrate how to construct the Available Wrench Set (AWS) for a planar MCDPR wih a point-mass end-effector using both the convex hull and Hyperplane shifting methods. The obtained results are validated in simulation and on an experimental platform consisting of two mobile bases and a CDPR with four cables. I. INTRODUCTION A Cable-Driven Parallel Robot (CDPR) is a type of parallel manipulator whose rigid links are replaced by cables. The platform motion is generated by an appropriate control of the cable lengths. Such robots hold numerous advantages over conventional robots e.g. high accelerations, large payload to weight ratio and large workspace [START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF]. However, one of the biggest challenges in classical CDPRs which have a fixed cable layout, i.e. fixed exit points and cable configuration, is the potential collisions between the cables and the surrounding environment that can significantly reduce the workspace. By appropriately modifying the robot architecture, better performance can be achieved [START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF]. CD-PRs whose geometric structure can be changed are known as Reconfigurable Cable-Driven Parallel Robots (RCDPRs). Different strategies, for instance maximizing workspace or increasing platform stiffness, have been proposed to optimize cable layout in recent work on RCDPRs [3]- [START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF]. However, reconfigurability is typically performed manually, a costly and time consuming task. Recently a novel concept of Mobile Cable-Driven Parallel Robots (MCDPRs) has been introduced in [START_REF] Rasheed | Tension Distribution Algorithm for Planar Mobile Cable-Driven Parallel Robots[END_REF] to achieve an autonomous reconfiguration of RCDPRs. A MCDPR is composed of a classical CDPR with q cables and a n degreeof-freedom (DoF) moving-platform mounted on p mobile bases (MBs). The MCDPR prototype that has been designed and built in the context of Echord++ 'FASTKIT' project is shown in Fig. 1. 
FASTKIT addresses an industrial need for flexible pick-and-place operations while being easy to install, keeping existing infrastructures and covering large areas. The prototype is composed of eight cables (q = 8), a six degree-of-freedom moving-platform (n = 6) and two MBs (p = 2). The overall objective is to design and implement a system capable of interacting with a high level task planner for logistic operations. Thus the system must be capable of autonomously navigating to the task location, deploying the system such that the task is within the reachable workspace and executing a pick-and-place task. In spite of the numerous advantages of the mobile deployable system, the structural stability must be considered. In [START_REF] Rasheed | Tension Distribution Algorithm for Planar Mobile Cable-Driven Parallel Robots[END_REF], a real time continuous tension distribution scheme that takes into account the dynamic equilibrium of the moving-platform and the static equilibrium (SE) of the MBs has been proposed. In this paper, we focus on the workspace analysis of MCDPRs. The classical techniques used to analyze the workspace of CDPRs are wrench-closure workspace (WCW) [START_REF] Gouttefarde | Analysis of the wrenchclosure workspace of planar parallel cable-driven mechanisms[END_REF], [START_REF] Lau | Wrench-Closure Workspace Generation for Cable Driven Parallel Manipulators using a Hybrid Analytical-Numerical Approach[END_REF] and wrench-feasible workspace (WFW) [START_REF] Gouttefarde | Interval-analysisbased determination of the wrench-feasible workspace of parallel cable-driven robots[END_REF]. In this paper, WFW is chosen as it is more relevant from a practical viewpoint. WFW is defined as the set of platform poses for which the required set of wrenches can be balanced with wrenches generated by the cables, while maintaining the cable tension within the defined limits [START_REF] Gouttefarde | Interval-analysisbased determination of the wrench-feasible workspace of parallel cable-driven robots[END_REF]. For a given pose, the set of wrenches a mechanism can generate is defined as available wrench set (AWS), denoted as A . For classical CDPRs, AWS depends on the robot geometric architecture and the tension limits. The set of wrenches required to complete a task, referred to as the required wrench set (RWS), denoted as R, will be generated if it is fully included by A : R ⊆ A . (1) For MCDPRs, the classical definition of AWS for CDPRs must additionally consider the Static Equilibrium (SE) constraints associated with the MBs. The two main approaches used to represent the AWS for CDPRs are the Convex hull method and the Hyperplane shifting method [START_REF] Grünbaum | Convex Polytopes[END_REF]. Once the AWS is defined, WFW can be traced using the Capacity Margin index [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF], [START_REF] Ruiz | Arachnis: Analysis of robots actuated by cables with handy and neat interface software[END_REF]. This paper deals with the determination of the AWS required to trace the workspace for planar MCDPRs with point-mass end-effector. Figure 2 illustrates a Planar MCDPR with p = 2 MBs, q = 4 number of cables and n = 2 DoF point mass end-effector. In this paper, wheels are assumed to form a simple contact support with the ground and friction is sufficient to prevent the MBs from sliding. This paper is organized as follows. Section II presents the parameterization of a MCDPR. 
Section III deals with the SE conditions of the MBs using free body diagrams. Section IV is about the nature of AWS for MCDPRs by considering the SE of the platform as well as the SE of the MBs. Section V discusses how to trace the workspace using the capacity margin index. Section VI shows some experimental validations of the concept. Finally, conclusions are drawn and future work is presented in Section VII. II. PARAMETERIZATION OF A MCDPR Let us denotes the jth Mobile Base (MB) as M j , j = 1, . . . , p. The ith cable mounted onto M j is named as C i j , i = 1, . . . , q j , where q j denotes the number of cables carried by M j . The total number of cables of the MCDPR is equal to q = p ∑ j=1 q j . (2) Let u i j be the unit vector of C i j pointing from the pointmass effector to the cable exit point A i j . Let t i j be the cable tension vector along u i j , expressed as t i j = t i j u i j , (3) where t i j is the tension in the ith cable mounted on M j . The force f i j applied by the ith cable onto M j is expressed as f i j = -t i j u i j . (4) III. STATIC EQUILIBRIUM OF A PLANAR MCDPR For a planar MCDPR with a point mass end-effector, the SE equation of the latter can be expressed as f e = - p ∑ j=1 q j ∑ i=1 t i j u i j . (5) Equation ( 5) can be expressed in the matrix form as: Wt + f e = 0, (6) where W is a (2 × q) wrench matrix mapping the cable tension vector t ∈ R q onto the wrench applied by the cables A 12 A 22 A 21 A 11 C C r 1 C l 2 C r 2 B 12 u 21 11 22 u u u G 1 G 2 O f cr j f cl j C lj C rj jt h m o b il e b a se f 2j 1j f G j w g j j A 1j A 2j P e f x [m] y [m] 12 22 1 1 ∑ 2 Fig. 2: Planar MCDPR composed of two MBs (p = 2), four cables (q = 4) with two cables per mobile base (q 1 = q 2 = 2) and a two degree-of-freedom (n = 2) point-mass end-effector on the end-effector. f e = [ f x e f y e ] T denotes the external wrench applied to the end effector. t and W can be expressed as: t = [t 1 t 2 . . . t j . . . t p ] T , (7) W = [W 1 W 2 . . . W j . . . W p ], (8) where t j = [t 1 j t 2 j . . . t q j j ] T , (9) W j = [u 1 j u 2 j . . . u q j j ]. (10) As the MBs should be in equilibrium during the motion of the end-effector, we need to formulate the SE conditions for each mobile base, also referred to as the tipping constraints. Figure 2 illustrates the free body diagram of M j with q j = 2. The SE equations of M j carrying up to q j cables is expressed as: w g j + q j ∑ i=1 f i j + f cl j + f cr j = 0, (11) m O j = 0. ( 12 ) w g j denotes the weight vector of M j . m O j denotes the moment of M j about point O. f cl j = [ f x cl j f y cl j ] T and f cr j = [ f x cr j f y cr j ] T denote the contact forces between the ground and the left and right wheels contact points C l j and C r j , respectively. Note that the superscripts x and y in the previous vectors denote their x and y components. m O j can be expressed as: m O j = g T j E T w g j + q j ∑ i=1 a T i j E T f i j + c T l j E T f cl j + c T r j E T f cr j , (13) with E = 0 -1 1 0 ( 14 ) where a i j = [a x i j a y i j ] T denotes the Cartesian coordinate vectors of point A i j , c l j = [c x l j c y l j ] T and c r j = [c x r j c y r j ] T denote the Cartesian coordinate vectors of contact points C l j and C r j , respectively. g j = [g x j g y j ] T is the Cartesian coordinate vector of the center of gravity G j . 
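A small numerical sketch helps to make equations (6)-(10) concrete. The code below builds the 2 × q wrench matrix from the cable unit vectors of a planar point-mass platform and evaluates the static-equilibrium residual of equation (6) for a given tension vector; the geometry, tensions and external wrench are made-up values, not those of the FASTKIT prototype.

import numpy as np

# Hypothetical planar configuration: point-mass platform at p, four cable exit points A_ij.
p = np.array([2.0, 1.0])
exit_points = np.array([[0.0, 2.5], [0.5, 2.5], [3.5, 2.5], [4.0, 2.5]])

# Unit vectors u_ij point from the end-effector towards the exit points.
U = exit_points - p
W = (U / np.linalg.norm(U, axis=1, keepdims=True)).T   # 2 x q wrench matrix of eq. (10)

t = np.array([20.0, 15.0, 15.0, 20.0])                 # cable tensions, N
f_e = np.array([0.0, -10.0])                           # external wrench (e.g. gravity), N

# Equilibrium (eq. (6)) requires W t + f_e = 0; the residual shows how far we are from it.
residual = W @ t + f_e
print(residual)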
Let m Cr j be the moment generated at the right contact point C r j at the instant when M j loses contact with the ground at point C l j such that f y c l j = 0, expressed as: m Cr j = (g j -c r j ) T E T w g j + q j ∑ i=1 (c r j -a i j ) T E T t i j (15) Let Σ j be the set composed of M j , its front and rear wheels, the cables attached to it and the point-mass end-effector, as encircled in red in Fig. 2. From the free body diagram of Σ i , moment m Cr j can also expressed as: m Cr j = -(p -c r j ) T E T f + (g j -c r j ) T E T w g j + p ∑ o=1,o = j q o ∑ i=1 (p -c r j ) T E T t io , (16) where o = 1, . . . , p and o = j. p denotes the Cartesian coordinate vector of the point-mass end-effector P. f = [ f x f y ] T denotes the force applied by the cables onto the point-mass end-effector, namely, f = -f e . (17) Similarly, the moment m Cl j generated at the left contact point C l j on Σ j takes the form: m Cl j = -(p -c l j ) T E T f + (g j -c l j ) T E T w g j + p ∑ o=1,o = j q o ∑ i=1 (p -c l j ) T E T t io . (18) For M j to be stable, the moments generated by the external forces at point C r j (C l j , resp.) should be counterclockwise (clockwise, resp.), namely, m Cr j ≥ 0, j = 1, . . . , p (19) m Cl j ≤ 0, j = 1, . . . , p (20) IV. AVAILABLE WRENCH SET FOR MCDPRS In this section the nature of the AWS for MCDPRs is analyzed. The cable tension t i j associated with the ith cable mounted on M j is bounded between a minimum tension t i j and a maximum tension t i j . It should be noted that the AWS of a classical CDPR depends uniquely on its platform pose and cable tension limits and forms a zonotope [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF]. In contrast, the tipping constraints of the MBs must be considered in the definition of the AWS of a MCDPR. The AWS A 1 for a planar CDPR with a point mass end-effector can be expressed as: A 1 = f ∈ R 2 | f = p ∑ j=1 q j ∑ i=1 t i j u i j , t i j ≤ t i j ≤ t i j , i = 1, . . . , q j , j = 1, . . . , p . (21) 0 1 2 3 1 2 3 4 5 11 t [N] 2 1 22 t [N] x f [N] y f [N] W 11 u 22 u 21 u x [m] y [m] t [ N ] T T 1 1 2 2 P 0 0 Fig. 3: MCDPR with q = 3 cables and a n = 2 DoF pointmass end-effector, the black polytopes illustrate the TS and AWS of the CDPR at hand, whereas green polytopes illustrate the TS and AWS of the MCDPR at hand The AWS A 2 for a planar MCDPR with a point-mass endeffector is derived from A 1 by adding the tipping constraints defined in ( 19) and (20). Thus A 2 can be expressed as: A 2 = f ∈ R 2 | f = p ∑ j=1 q j ∑ i=1 t i j u i j , t i j ≤ t i j ≤ t i j , m Cr j ≥ 0, m Cl j ≤ 0, i = 1, . . . , q j , j = 1, . . . , p . Figure 3 shows a comparison between the Tension space (TS) T 1 and the AWS A 1 of a CDPR with three cables and an point-mass platform with the TS T 2 and AWS A 2 of a MCDPR with three cables, a point-mass platform and two mobile bases. The tension space between t 11 and t 21 is reduced by the linear tipping constraint on M 1 . As M 2 is carrying a single cable C 22 , only the maximum limit t 22 is modified. The difference in the polytopes A 1 and A 2 is due to the additional constraints associated with the equilibrium of the MCDPR MBs expressed by ( 19) and (20). By considering only the classical cable tension limit (t i j and t i j ) constraints, the shape of the AWS is a zonotope. When the tipping constraints are included, the AWS is no longer a zonotope, but a convex polytope. 
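The tipping conditions (19)-(20) can be evaluated directly for a candidate tension vector, as in the short sketch below. The geometry, base weight and tensions are illustrative values only; the moment about the right contact point follows the structure of Eq. (15), and the left-side moment is built analogously about C_lj.

```python
import numpy as np

def cross2(a, b):
    """Planar cross product a x b, equal to a^T E^T b with E = [[0,-1],[1,0]] (Eq. (14))."""
    return a[0] * b[1] - a[1] * b[0]

def tipping_moments(g, c_r, c_l, a_pts, u, t, w_g):
    """Moments about the right and left wheel contact points of one mobile base,
    following the structure of Eq. (15) (the left moment is formed analogously)."""
    m_r = cross2(g - c_r, w_g)
    m_l = cross2(g - c_l, w_g)
    for i in range(len(t)):
        m_r += cross2(c_r - a_pts[i], t[i] * u[i])
        m_l += cross2(c_l - a_pts[i], t[i] * u[i])
    return m_r, m_l

# Illustrative data for one mobile base M_1 (placeholder values, metres and newtons)
g     = np.array([0.25, 0.3])                         # centre of gravity G_1
c_r   = np.array([0.5, 0.0])                          # right wheel contact point C_r1
c_l   = np.array([0.0, 0.0])                          # left wheel contact point C_l1
a_pts = np.array([[0.0, 1.0], [0.5, 1.0]])            # cable exit points A_11, A_21
u     = np.array([[-0.949, 0.316], [-0.894, 0.447]])  # cable unit vectors u_11, u_21
w_g   = np.array([0.0, -6.3 * 9.81])                  # weight of the mobile base

for t_cand in (np.array([2.0, 1.5]), np.array([60.0, 80.0])):
    m_r, m_l = tipping_moments(g, c_r, c_l, a_pts, u, t_cand, w_g)
    stable = (m_r >= 0.0) and (m_l <= 0.0)            # tipping constraints (19)-(20)
    print("t =", t_cand, " m_Cr =", round(m_r, 2), " m_Cl =", round(m_l, 2), " stable:", stable)
```

With the low tensions the base remains stable, whereas the high-tension candidate violates m_Cr >= 0, i.e., it would tip the base about its left contact point; this is precisely the kind of tension combination excluded from A_2.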
The two methods used to represent convex polytopes are V-representation, known as the convex hull approach, and H-representation, known as the hyperplane shifting method [START_REF] Grünbaum | Convex Polytopes[END_REF]. V-representation is preferred for visualization while H-representation is used to find the relation between A and R. The convex-hull approach is used to find the vertices that form the boundary of the polytope, whereas hyperplane shifting method is a geometric method used to obtain the facets of the polytope. y [m] t [N] t [N] v 11 v 21 v 31 v 41 v 51 v 12 v 22 v 32 v 42 P x f [N] y f [N] 2 1 Fig. 4: Comparison of TS and AWS between CDPR (in black) and MCDPR (in green) (a) MCDPR configuration with q 1 = q 2 = 2 (b) TS formed by t i1 (c) TS formed by t i2 (d) A 1 is AWS of the CDPR at hand, A 2 is AWS of the MCDPR at hand A. Convex Hull Method AWS is defined using the set of vertices forming the extreme points of the polytope. For the jth mobile base M j , a q j dimensional TS is formed by the attached cables. The shape of this TS depends on the mapping of the tipping constraints on the q j -dimensional TS formed by the cable tension limits t i j and t i j . Figures 4(b) and 4(c) illustrate the TS associated with each MB of the MCDPR configuration shown in Fig. 4(a). The feasible TS is formed by the cable tension limits as well as the tipping constraints of the MBs. The new vertices of the TS for MCDPRs do not correspond with the minimum/maximum cable tensions as in the classical case [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF]. Let v j denote the number of vertices and v k j be the coordinates of kth vertex for the TS associated to the jth mobile base M j , k = 1, . . . , v j . Let V j represent the set of the vertices of the TS associated to M j : V j = {v k j }, k = {1, . . . , v j }. ( 23 ) Let V j be a (q j × v j ) matrix containing the coordinates of the TS vertices associated with M j , expressed as: V j = [v 1 j v 2 j . . . v v j j ]. (24) v is the total number of vertices formed by all the q cables and is obtained by the product of the number of vertices for each MB, namely, v = p ∏ j=1 v j . ( 25 ) Let V represent the set of all vertices in the TS which is obtained by the Cartesian product between V j , j = 1, . . . , p. Accordingly, V is (q × v)-matrix, which denotes the coordinates of all the vertices in V expressed as: where g = 1, . . . , v. v g is a q-dimensional vector representing the coordinates of the gth vertex of the MCDPR Tension Space. The image of AWS is constructed from V under the mapping of the wrench matrix W. A numerical procedure such as quickhull [START_REF] Barber | The quickhull algorithm for convex hulls[END_REF] is used to compute the convex hull forming the boundary of AWS. Figure 4(d) illustrates the AWS obtained by Convex Hull Method. A 1 is the AWS obtained by considering only the cable tension limits and is a zonotope. A 2 is the AWS obtained by considering both the cable tension limits and the tipping constraints of the two mobile bases. V = [v 1 v 2 . . . v g . . . v v ], (26) B. Hyperplane Shifting Method Hyperplane Shifting Method (HFM) is a geometric approach, which defines a convex polytope as the intersection of the half-spaces bounded by its hyperplanes [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. 
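As an illustration of the convex-hull construction described above, the sketch below forms the Cartesian product of the per-base tension-space vertex sets, maps the resulting vertices under the wrench matrix and computes their convex hull with Qhull (a quickhull implementation). For simplicity, the per-base vertex sets are taken here as plain tension-limit boxes, i.e., the tipping cuts discussed above are not applied, and all numerical values are placeholders.

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull  # Qhull implements the quickhull algorithm

# Wrench matrix of a planar MCDPR with q = 4 cables (placeholder unit vectors)
W = np.array([[-0.949, -0.894, 0.949, 0.894],
              [ 0.316,  0.447, 0.316, 0.447]])

# Vertex sets V_j of the per-base tension spaces (Eqs. (23)-(24)).  Here each V_j is
# simply the box of cable tension limits; in the full method, the tipping constraints
# of M_j cut this box and create vertices that are not at the tension limits.
t_min, t_max = 1.0, 50.0
V1 = np.array(list(product([t_min, t_max], repeat=2)))   # vertices for M_1 (q_1 = 2)
V2 = np.array(list(product([t_min, t_max], repeat=2)))   # vertices for M_2 (q_2 = 2)

# Cartesian product of the per-base vertex sets -> vertices of the q-dimensional TS (Eq. (26))
V = np.array([np.hstack(pair) for pair in product(V1, V2)])   # shape (v, q)

# Image of the tension-space vertices under W; the convex hull of these images
# bounds the available wrench set in the wrench space.
F = V @ W.T                      # each row is W v_g, a wrench the cables can produce
hull = ConvexHull(F)
print("AWS boundary vertices (f_x, f_y):")
print(np.round(F[hull.vertices], 2))
```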
The classical HFM used to obtain the AWS of CDPRs is described in [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF], [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. Nevertheless, this approach is not sufficient to fully characterize the AWS of MCDPRs because it requires a hypercube TS (T 1 ). For instance, for the MCDPR shown in Fig. 3, it can be observed that the TS (T 2 ) is no longer a hypercube due to the additional constraints associated with the SE of the mobile bases. As a consequence, this section presents an improved version of the HFM described in [START_REF] Rasheed | Tension Distribution Algorithm for Planar Mobile Cable-Driven Parallel Robots[END_REF][START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF] that takes into account the tipping constraints of the MCDPR mobile bases. As a result, Fig. 5 depicts the AWS of the MCDPR configuration shown in Fig. 4(a), obtained by the improved HFM. This AWS is bounded by hyperplanes H + s , H - s , s = 1, ..., q, obtained from the cable tension limits associated to the four cables attached to the point mass end-effector, and by hyperplanes H r M 1 and H l M 2 , corresponding to the tipping constraints of M 1 about point C r1 and the tipping constraint of M 2 about point C l2 , respectively. 1) Determination of H l M j and H r M j , j=1,...,p: Let r r j (r l j , resp.) be the unit vector pointing from C r j (C l j , resp.) to P, expressed as: r r j = p -c r j p -c r j 2 , r l j = p -c l j p -c l j 2 . ( 27 ) Upon dividing (19) ((20), resp.) by pc r j 2 ( pc l j 2 , resp.), the tipping constraints can be expressed in the wrench space as: -r T r j E T f + (g j -c r j ) T p -c r j 2 E T w g j + p ∑ o=1,o = j q o ∑ i=1 r T r j E T t io ≥ 0, (28) -r T l j E T f + (g j -c l j ) T p -c l j 2 E T w g j + p ∑ o=1,o = j q o ∑ i=1 r T l j E T t io ≤ 0. (29) Equations ( 28) and (29) take the form: e T r j f ≤ d r j , e T l j f ≤ d l j . (30) Equation ( 30) corresponds to the hyperplanes [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF] for the tipping constraints of M j in the wrench space. e r j and e l j are the unit vectors normal to H r M j and H l M j , expressed as: e r j = Er r j , e l j = -Er l j . d r j (d l j , resp.) denotes the shifted distance of H r j (H l j , resp.) from the origin of the wrench space along e r j (e l j , resp.). The shift d r j depends on the weight of M j and the combination of the cable tension t io for which p ∑ o=1,o = j q o ∑ i=1 r T r j E T t io is a maximum. While the shift d rl depends on the weight of M j and the combination of the cable tension t io for which p ∑ o=1,o = j q o ∑ i=1 r T l j E T t io is a minimum, namely, d r j = (g j -c r j ) T p -c r j 2 E T w g j + p ∑ o=1,o = j max q o ∑ i=1 r T r j E T v i ko u io , k = 1, ..., v o , (32) d l j = - (g j -c l j ) T p -c l j 2 E T w g j - p ∑ o=1,o = j min q o ∑ i=1 r T l j E T v i ko u io , k = 1, ..., v o , (33) where v i ko corresponds to the ith coordinate of v ko . Figure 6 shows the geometric representation of the tipping hyperplanes for the MCDPR under study. From Figs. 5 and6, it can be observed that the orientation of H r M j (H l M j , resp.) is directly obtained from r r j (r l j , resp.) 
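A minimal sketch of the computation of the tipping hyperplane H r M j described above is given below, following Eqs. (27), (31) and (32). The geometric data and the tension-space vertices of the other mobile base are illustrative values only; the left-side hyperplane H l M j would be obtained in the same way from Eqs. (29) and (33).

```python
import numpy as np

E = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def tipping_hyperplane_right(p, c_r, g, w_g, U_other, V_other):
    """Normal e_rj and offset d_rj of H^r_Mj (Eqs. (27), (31), (32)).

    p        : end-effector position
    c_r, g   : right wheel contact point C_rj and centre of gravity G_j of base M_j
    w_g      : weight vector of M_j
    U_other  : (q_o x 2) unit vectors of the cables carried by the other base
    V_other  : (v_o x q_o) tension-space vertices of the other base
    """
    r_r = (p - c_r) / np.linalg.norm(p - c_r)            # Eq. (27)
    e_r = E @ r_r                                         # Eq. (31)
    d_r = ((g - c_r) @ E.T @ w_g) / np.linalg.norm(p - c_r)
    d_r += (V_other @ (U_other @ e_r)).max()              # Eq. (32): worst case over the other cables
    return e_r, d_r

# Placeholder data (illustrative values only)
p   = np.array([1.5, 0.5])
c_r = np.array([0.5, 0.0])
g   = np.array([0.25, 0.3])
w_g = np.array([0.0, -6.3 * 9.81])
U2  = np.array([[0.949, 0.316], [0.894, 0.447]])          # cables carried by the other base M_2
V2  = np.array([[1.0, 1.0], [1.0, 50.0], [50.0, 1.0], [50.0, 50.0]])

e_r, d_r = tipping_hyperplane_right(p, c_r, g, w_g, U2, V2)
print("H^r_M1:  e_r^T f <= d_r, with e_r =", np.round(e_r, 3), "and d_r =", np.round(d_r, 2))
```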
2) Determination of H + s and H - s , s=1,...,q: For classical CDPRs with given cable tension limits, ∆t i j = t i jt i j is a constant, AWS is a zonotope formed by the set of vectors α i j ∆t i j u i j , where 0 ≤ α i j ≤ 1 [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF], [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. The shape of the zonotope depends on the directions of the cable unit vectors u i j as well as the difference between the minimum and maximum cable tension limits ∆t i j . It is noteworthy that ∆t i j is no longer a constant for MCDPRs. The property of a zonotope having parallel facets still holds as the orientation of the hyperplanes is given by the cable unit vectors u i j . However, the position of the hyperplanes is modified, forming a convex polytope with parallel facets rather than a zonotope. H + s and H - s are obtained using the classical HFM as described in [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF], [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF] based on the TS of MCDPRs. For a planar MCDPR with a point mass end-effector, each cable unit vector u s = u i j will form a pair of parallel hyperplanes {H + s , H - s } at the origin. Each pair is associated with a unit vector e s orthogonal to its facets, expressed as: e s = E T u s . ( 34 ) The shift of the initial hyperplanes is determined by the projection of the MCDPR tension space vertices on e s . Let l s be a q-dimensional vector containing the projections of the cable unit vectors in W on e s , expressed as: l s = W T e s . (35) The projection of u s will be zero as it is orthogonal to e s . The distances h + s and h - s are given by the maximum and minimum combinations of l s with the coordinates of the TS vertices V, expressed as: h + s = max q ∑ s=1 v s g l s , g = 1, . . . , v , (36) h - s = min q ∑ s=1 v s g l s , g = 1, . . . , v , (37) where v s g and l s denote the sth coordinate of vector v g and l s , respectively. To completely characterize the hyperplanes, a point p + s (p - s , resp.) on H + s (H - s , resp.) must be obtained, given as: p + s = h + s e s + p ∑ j=1 q j ∑ i=1 t i j u i j ; p - s = h - s e s + p ∑ j=1 q j ∑ i=1 t i j u i j , (38) The respective pair of hyperplanes is expressed as: H + s : e T s f ≤ d + s ; H - s : -e T s f ≤ d - s . (40) The above procedure is repeated to determine the q pairs of hyperplanes associated to the q cables of the MCDPR. V. WORKSPACE ANALYSIS Wrench-feasible workspace (WFW) is defined as the set of poses that are wrench-feasible [START_REF] Bosscher | Wrenchfeasible workspace generation for cable-driven robots[END_REF]. A well known index used to compute the wrench feasible set of poses is called Capacity Margin [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF], [START_REF] Ruiz | Arachnis: Analysis of robots actuated by cables with handy and neat interface software[END_REF]. It is a measure of the robustness of the equilibrium of the robot, expressed as: s = min ( min s j,l ), (41) where s j,l is the signed distance from jth vertex of the required wrench set R to the lth face of the available wrench set A . s j,l is positive when the constraint is satisfied, and negative otherwise. The index remains negative as long as at least one of the vertices of R is outside of A . 
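For completeness, the capacity margin of Eq. (41) can be sketched numerically as follows, using the facet description returned by a convex-hull routine. The available and required wrench sets below are arbitrary illustrative polygons, not those of the robot under study.

```python
import numpy as np
from scipy.spatial import ConvexHull

def capacity_margin(aws_points, rws_vertices):
    """Capacity margin (Eq. (41)): smallest signed distance from the vertices of the
    required wrench set R to the facets of the available wrench set A."""
    hull = ConvexHull(aws_points)
    A_f, b_f = hull.equations[:, :-1], hull.equations[:, -1]   # facets: A_f x + b_f <= 0 inside
    norms = np.linalg.norm(A_f, axis=1)                        # defensive normalisation
    s = -(rws_vertices @ A_f.T + b_f) / norms                  # signed distances s_{j,l}
    return s.min()

# Placeholder AWS vertices (e.g., images of tension-space vertices under W), in newtons
aws = np.array([[-80.0, -60.0], [80.0, -60.0], [80.0, 40.0], [-80.0, 40.0]])
# RWS: a small box of wrenches around the weight of a 0.5 kg end-effector
w = 0.5 * 9.81
rws = np.array([[-5.0, -w - 5.0], [5.0, -w - 5.0], [5.0, -w + 5.0], [-5.0, -w + 5.0]])

print("capacity margin:", round(capacity_margin(aws, rws), 2))  # >= 0 iff R is inside A
```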
The index is positive if all the vertices of R are inscribed by A. VI. EXPERIMENTS AND RESULTS The proposed approach was tested on the MCDPR prototype shown in Fig. 7(a), made up of two TurtleBot mobile bases and a 0.5 kg point-mass end-effector. The WFW of the MCDPR under study is illustrated in Fig. 7(b) for R equal to the weight of the end-effector. The green region corresponds to the modified WFW, where both the cable tension limits and the mobile base tipping constraints are satisfied. In the blue area, at least one of the two mobile bases is tipping. In the red area, the cable tension limits and the mobile base tipping constraints cannot both be satisfied, i.e., the end-effector cannot be kept in equilibrium while avoiding mobile base tipping. It can be observed that for MCDPRs, the ability of the cables to apply wrenches on the platform may be reduced due to the mobile base tipping constraints. The simulation and the experimental validation of the proposed approach can be seen in video 1 . In the latter, two different trajectories are tested and compared. Based on the proposed workspace analysis, it can be observed that if the end-effector is within the WFW computed offline, both mobile bases remain in equilibrium. On the contrary, when the end-effector is outside of the WFW, at least one mobile base is no longer in equilibrium. VII. CONCLUSION In this paper, the Available Wrench Set required to trace the Wrench-Feasible Workspace of a Mobile Cable-Driven Parallel Robot (MCDPR) has been determined. The proposed workspace considers the cable tension limits and the static equilibrium of the mobile bases. Two different approaches, the convex hull method and the hyperplane shifting method, are used to illustrate how the additional constraints can be considered. The additional constraints modify the shape of the AWS, forming new facets and reducing the capability of the cables to apply wrenches on the platform. Future work will focus on extending this approach to spatial MCDPRs consisting of more than two mobile bases and on taking into account wheel slipping constraints. Furthermore, the evolution of the MCDPR workspace during deployment will be studied. Fig. 1: FASTKIT prototype, undeployed configuration (left) and deployed configuration (right). Fig. 5: AWS formed by the intersection of all the hyperplanes (red hyperplanes correspond to the tipping constraints, blue hyperplanes correspond to the cable tension limits). Fig. 6: Geometric representation of e_r1 and e_l2 for the MCDPR configuration. Fig. 7: MCDPR prototype made up of two TurtleBot mobile bases and a 0.5 kg point-mass end-effector. Supported by École Centrale Nantes and the Echord++ FASTKIT project.
27,118
[ "10659" ]
[ "111023", "473973", "29479", "235335", "481388", "473973", "441569" ]
01757797
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757797/file/DETC2017_67441_Nayak_Haiyang_Hao_Caro_HAL.pdf
Abhilash Nayak email: abhilash.nayak@ls2n.fr Haiyang Li email: haiyang.li@umail.ucc.ie Guangbo Hao email: g.hao@ucc.ie Stéphane Caro email: stephane.caro@ls2n.fr St Éphane Caro A Reconfigurable Compliant Four-Bar Mechanism with Multiple Operation Modes published or not. The documents may come INTRODUCTION Although mechanisms are often composed of rigid bodies connected by joints, compliant mechanisms include flexible el-ements whose elastic deformation is utilized in order to transmit a force and/or motion. There are different ways to design compliant mechanisms, such as the kinematic based approaches, the building blocks approaches, and the structural optimizationbased approaches [START_REF] Gallego | Synthesis methods in compliant mechanisms: An overview[END_REF][START_REF] Olsen | Utilizing a classification scheme to facilitate rigid-body replacement for compliant mechanism design[END_REF][START_REF] Howell | Compliant mechanisms[END_REF][START_REF] Hao | Conceptual designs of multidegree of freedom compliant parallel manipulators composed of wire-beam based compliant mechanisms[END_REF]. In the kinematic based approach, the joints of a chosen rigid-body mechanism are replaced by appropriate compliant joints followed by pseudo-rigid body modeling [START_REF] Olsen | Utilizing a classification scheme to facilitate rigid-body replacement for compliant mechanism design[END_REF][START_REF] Howell | Compliant mechanisms[END_REF][START_REF] Hao | Conceptual designs of multidegree of freedom compliant parallel manipulators composed of wire-beam based compliant mechanisms[END_REF]. This method is advantageous due to the extensive choice of existing rigid-body mechanisms and their modeling tools. Parallel or closed-loop rigid-body architectures gain an upper hand here as their intrinsic properties favour the characteristics of compliant mechanisms like compactness, symmetry to reduce parasitic motions, low stiffness along the desired degrees of freedom (DOF) and high stiffness in other directions. Moreover, compliant mechanisms usually work around a given position for small range of motions and hence they can be designed by considering existing parallel manipulators in parallel singular configurations. Parallel singularity can be an actuation singularity, constraint singularity or a compound singularity as explained in [START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF][START_REF] Maraje | Operation modes comparison of a reconfigurable 3-PRS parallel manipulator based on kinematic performance[END_REF][START_REF] Nurahmi | Dimensionally homogeneous extended jacobian and condition number[END_REF]. Rubbert et al. used an actuation singularity to type-synthesize a compliant medical device [START_REF] Rubbert | Using singularities of parallel manipulators for enhancing the rigid-body replacement design method of compliant mechanisms[END_REF][START_REF] Rubbert | Design of a compensation mechanism for an active cardiac stabilizer based on an assembly of planar compliant mechanisms[END_REF]. Another interesting kind of parallel singularity for a parallel manipulator that does not depend on the choice of ac- l A D B C y 0 x 0 y 1 x 1 Ʃ 1 O 1 (a,b) O 0 (0,0) x Ʃ l Ʃ 0 FIGURE 1: AN EQUILATERAL FOUR BAR LINKAGE. tuation is a constraint singularity [START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF]. It divides the workspace of a parallel manipulator into different operation modes resulting in a reconfigurable mechanism. 
Algebraic geometry tools have proved to be efficient in performing global analysis of parallel manipulators and recognizing their operation modes leading to mobility-reconfiguration [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF][START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF][START_REF] He | Design and Analysis of a New 7R Single-Loop Mechanism with 4R, 6R and 7R Operation Modes[END_REF]. Though there are abundant reconfigurable rigid-body mechanisms in the literature, the study of reconfigurable compliant mechanisms is limited. Hao studied the mobility and structure reconfiguration of compliant mechanisms [START_REF] Hao | Design and static testing of a compact distributed-compliance gripper based on flexure motion[END_REF] while Hao and Li introduced a position-spacebased structure reconfiguration (PSR) approach to the reconfiguration of compliant mechanisms and to minimize parasitic motions [START_REF] Hao | Positionspace-based compliant mechanism reconfiguration approach and its application in the reduction of parasitic motion[END_REF][START_REF] Li | Compliant mechanism reconfiguration based on position space concept for reducing parasitic motion[END_REF]. In this paper, one of the simplest yet ubiquitous parallel mechanisms, a planar equilateral four-bar linkage is considered at a constraint singularity configuration to synthesize a reconfigurable compliant four-bar mechanism. From our best understanding, this is the first piece of work that considers a constraint singularity to design a reconfigurable compliant mechanism with multiple operation modes, also called motion modes. This paper is organized as follows : Kinematic analysis of a rigid four-bar mechanism is performed to determine the constraint singularities and different operation modes. Rigid-body replacement design approach is followed to further synthesize a reconfigurable compliant four-bar mechanism and the motion type associated to each operation mode is verified through non-linear Finite Element Analysis (FEA). KINEMATIC ANALYSIS AND OPERATION MODES OF A FOUR BAR LINKAGE A planar equilateral four-bar linkage with equal link lengths, l is depicted in Fig. 1. Link AD is fixed, AB and CD are the cranks and BC is the coupler. Origin of the fixed frame (Σ 0 ), O 0 coincides with the center of link AD while that of the moving frame (Σ 1 ) O 1 with the center of BC. The coordinate axes are oriented in such a way that the position vectors of the intersection points between the revolute joint axes and the x 0 y 0 plane can be homogeneously written as follows: r 0 A = [1, -l 2 , 0] T r 0 D = [1, l 2 , 0] T (1) r 1 B = [1, -l 2 , 0] T r 1 C = [1, l 2 , 0] T (2) The displacement of the coupler with respect to the fixed frame can be rendered by (a, b, φ ), where a and b represent the positional displacement of the coupler (nothing but the coordinates of point O 1 in Σ 0 ) and φ is the angular displacement about z 0 -axis (angle between x 0 and x 1 ). Thus, the corresponding set of displacements can be mapped onto a three-dimensional projective space, P 3 with homogeneous coordinates x i (i = 1, 2, 3, 4) [START_REF] Bottema | Theoretical Kinematics[END_REF]. 
This mapping (also known as Blashke mapping in the literature) is defined by the following matrix M : M =       1 0 0 2x 1 x 3 + 2x 2 x 4 x 2 3 + x 2 4 -x 2 3 + x 2 4 x 2 3 + x 2 4 -2x 3 x 4 x 2 3 + x 2 4 -2x 1 x 4 + 2x 2 x 3 x 2 3 + x 2 4 2x 3 x 4 x 2 3 + x 2 4 -x 2 3 + x 2 4 x 2 3 + x 2 4       (3) The planar kinematic mapping can also be derived as a special case of Study's kinematic mapping by equating some of the Study parameters to zero [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF]. To avoid the rotational part of M to be undefined, the following equation is defined: H := x 2 3 + x 2 4 = 1 (4) Without loss of generality, x i can be expressed in terms of (a, b, φ ), as follows [START_REF] Bottema | Theoretical Kinematics[END_REF] : x 1 : x 2 : x 3 : x 4 = (au -bv) : (av + bu) : 2u : 2v with u = sin( φ 2 ), v = cos( φ 2 ) (5) Constraint Equations Points B and C are constrained to move along circles of centers A and D, respectively and with radius l each. The position vectors of points B and C are expressed algebraically in frame Σ 0 as follows : r 0 B = M r 1 B ; r 0 C = M r 1 C (6) 4x 1 2 + 4x 2 2 -l 2 x 4 2 = 0 4lx 1 x 3 = 0 Q 1 Q 2 x 3 /x 4 x 1 /x 4 x 2 /x 4 L 1 L 2 C FIGURE 2: CONSTRAINT MANIFOLDS OF THE FOUR BAR LINKAGE IN IMAGE SPACE. Therefore, the algebraic constraint equations take the form : (r 0 B -r 0 A ) T (r 0 B -r 0 A ) = l 2 =⇒ g 1 := 4(x 2 1 + x 2 2 ) + 4lx 1 x 3 -l 2 x 2 4 = 0 (7) (r 0 C -r 0 D ) T (r 0 C -r 0 D ) = l 2 =⇒ g 2 := 4(x 2 1 + x 2 2 ) -4lx 1 x 3 -l 2 x 2 4 = 0 (8) Since g1 ± g2 = 0 gives the same variety, the final simplified constraint equations are : H 1 := g 1 -g 2 := 4lx 1 x 3 = 0 (9) H 2 := g 1 + g 2 := 4(x 2 1 + x 2 2 ) -l 2 x 2 4 = 0 (10) Equation ( 9) degenerates into two planes x 1 = x 3 = 0 into the image space and Eqn. ( 10) amounts to a cylinder with a circular cross-section in the image space. Assuming x 4 = 0, these constraint manifolds can be represented in the affine space, A 3 , as shown in Fig. 2. Operation Modes The affine variety of the polynomials H 1 and H 2 amounts to all the possible displacements attainable by the coupler. This variety is nothing but the intersection of these constraint surfaces in the image space [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF]. The intersections can be seen as two lines and a circle in Fig. 2. In fact, these curves can be algebraically represented by decomposing the constraint equations ( 9) and [START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF]. A primary decomposition of the ideal I = H 1 , H 2 onto the field K(x 1 , x 2 , x 3 , x 4 ) results in the following sub-ideals: I 1 = x 1 , 2x 2 -lx 4 (11) I 2 = x 1 , 2x 2 + lx 4 ( 12 ) I 3 = x 3 , 4(x 2 1 + x 2 2 ) -l 2 x 2 4 ( 13 ) It shows that this four-bar linkage has three operation modes. The Hilbert dimension of the ideals I i including the polynomial H from Eqn. ( 4) is calculated to be one, indicating that the DOF of the four-bar mechanism is one in each of these three operation modes. I 1 and I 2 correspond to x 1 = 0 implying u = b a from Eqn. [START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF]. Furthermore, for I 1 , eliminating u from 2x 2lx 4 = 0 gives a 2 + b 2 -al = 0 (14) which is the equation of a circle of center point B of Cartesian coordinates ( l 2 , 0) and radius l 2 as shown in Fig. 3. 
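The operation modes obtained from the decomposition can be checked numerically with a short loop-closure test: poses sampled on the curves associated with modes 1 and 3 must keep |AB| = |DC| = l. The sketch below uses a unit link length and an arbitrary sampling, both illustrative choices.

```python
import numpy as np

l = 1.0                                    # common link length
A = np.array([-l / 2, 0.0])                # fixed pivots in frame Sigma_0
D = np.array([ l / 2, 0.0])

def loop_closure_error(a, b, phi):
    """Distance errors |AB| - l and |DC| - l for a coupler pose (a, b, phi)."""
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    B = np.array([a, b]) + R @ np.array([-l / 2, 0.0])   # coupler points expressed in Sigma_0
    C = np.array([a, b]) + R @ np.array([ l / 2, 0.0])
    return np.linalg.norm(B - A) - l, np.linalg.norm(C - D) - l

# Operation mode 1: a^2 + b^2 - a*l = 0 with phi = 2*atan2(b, a)   (from x1 = 0)
for s in np.linspace(0.2, 2.8, 5):
    a, b = 0.5 * l * (1 + np.cos(s)), 0.5 * l * np.sin(s)
    print("mode 1:", np.round(loop_closure_error(a, b, 2 * np.arctan2(b, a)), 12))

# Operation mode 3: pure translation, phi = 0 and a^2 + b^2 - l^2 = 0
for s in np.linspace(0.2, 2.8, 5):
    print("mode 3:", np.round(loop_closure_error(l * np.cos(s), l * np.sin(s), 0.0), 12))
```

The printed errors are numerically zero, which is consistent with the varieties derived from the ideals I_1 and I_3.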
l/2 D B C ( a , b ) (0,0) (0,0) (l/2,0) A D FIGURE 3: OPERATION MODE 1 : a 2 + b 2 -al = 0 Similarly, I 2 yields a 2 + b 2 + al = 0 (15) which is the equation of a circle of center point C of Cartesian coordinates (-l 2 , 0) and radius l 2 as shown in Fig. 4. The third ideal I 3 corresponds to x 3 = 0 and hence u = 0 implying φ = 0. The second equation of the same ideal results in a 2 + b 2 -l 2 = 0 (16) B C l/2 A ( a , b ) (0,0) D (-l/2,0) (0,0) (0,0) (0,0) ) FIGURE 4: OPERATION MODE 2 : a 2 + b 2 + al = 0. being the equation of a circle of center (0, 0) and radius l as shown in Fig. 5. As a result, I 1 and I 2 represent rotational modes while I 3 represents a translational mode. l A D B C (a,b) (0,0) B FIGURE 5: OPERATION MODE 3 : a 2 + b 2 -l 2 = 0. Ultimately, in Fig. 2, the intersection lines L 1 and L 2 of the constraint manifolds portray the rotational motion modes while the circle C portrays the translational motion mode. Constraint Singularities These operation modes are separated by two similar constraint singularities shown in Fig. 6. They can be algebraically represented by 5), these singularities occur when b = 0, φ = 0 and a = ±l. These two configurations correspond to the two points Q 1 and Q 2 in the image space shown in Fig. 2. At a constraint singularity, any mechanism gains one or more degrees of freedom. Therefore, in case of the four-bar linkage with equal link lengths, the DOF at a constraint singularity is equal 2. In this configuration, points A, B, C and D are collinear and the corresponding motion type is a translational motion along the x 1 = x 3 = 4x 2 2 - l 2 x 2 4 = 0. From Eqn. ( D B C A (0,0) (l,0) L ABCD (a) a = l, b = 0, φ = 0 A C D B (0,0) (-l,0) L ABCD (b) a = -l, b = 0, φ = 0 FIGURE 6: CONSTRAINT SINGULARITIES OF THE FOUR BAR MECHANISM. normal to the line L ABCD passing through the four points A, B, C and D combined with a rotation about an axis directed along z 0 and passing through L ABCD . Eventually, it is noteworthy that two actuators are required in order to control the end-effector in those constraint singularities in order to manage the operation mode changing. DESIGN AND ANALYSIS OF A COMPLIANT FOUR-BAR MECHANISM In this section, two compliant four-bar mechanisms, compliant four-bar mechanism-1 and compliant four-bar mechanism-2, are proposed based on the operation modes and constraint singularities of the four-bar rigid-body mechanism shown in Fig. 6b. Moreover, the desired motion characteristics of the compliant four-bar mechanism-2 are verified by nonlinear FEA simulations. Design of a compliant four-bar mechanism Based on the constraint singularity configuration of the four-bar rigid-body mechanism represented in Fig. 6, a compliant four-bar mechanism can be designed through kinematically replacing the rigid rotational joints with compliant rotational joints [START_REF] Hao | Positionspace-based compliant mechanism reconfiguration approach and its application in the reduction of parasitic motion[END_REF]. Each of the compliant rotational joints can be any type compliant rotational joint such as cross-spring rotational joint, notch rotational joint and cartwheel rotational joint [START_REF] Howell | Compliant mechanisms[END_REF]. As shown in Fig. 
7, a compliant four-bar mechanism, termed as the compliant four-bar mechanism-1, has been designed by replacing the four rigid rotational joints with three cross-spring rotational joints (RJ-0, RJ-1 and RJ-3) and one leaf-type isoscelestrapezoidal rotational joint that provides remote rotation centre (RJ-2). For small motion ranges, the compliant four-bar mechanism-1 has the same operation modes as the fourbar rigid-body mechanism shown in Fig. 6, via controlling the rotations of the Bar-1 and Bar-3. Moreover, both the compliant four-bar mechanism-1 and the four-bar rigid-body mechanism are plane motion mechanisms. Additionally, the three cross-spring rotational joints in the compliant four-bar mechanism-1 can be replaced by other types of rotational joints, which can form different compliant four-bar mechanisms. In this paper, cross-spring rotational joints are employed due to their large motion ranges while small rotation centre shifts. However, the isosceles-trapezoidal rotational joint in the compliant four-bar mechanism-1 performs larger rotation centre shifts compared with the cross-spring rotational joint. Therefore, the compliant four-bar mechanism-1 can be improved by replacing the leaf-type isosceles-trapezoidal rotational joint with a cross-spring rotational joint. Such an improved design can be seen in Fig. 8, which is termed as the compliant four-bar mechanism-2. Note that, in Fig. 8, the RJ-0 and RJ-2, are traditional cross-spring rotational joints, while both the RJ-1 and the RJ-3 are double cross-spring joints introduced in this paper. Each of the rotational joints, RJ-1 and RJ-3, consists of two traditional cross-spring rotational joints in series. We specify that the Bar-0 is fixed to the ground and the Bar-2 is the output motion stage, also named coupler. The main body including rigid bars and compliant joints of the proposed compliant four-bar mechanism-2 can be fabricated monolithically using a CNC milling machine. It can also be 3D printed, and a 3Dprinted prototype is shown in Fig. 8. The bars of the prototype have many small through holes, which can reduce material consumption and improve dynamic performance. Additionally, two cross-shaped parts are added to the actuated bars, which are used to actuate the mechanism by hands. The operation modes of the compliant four-bar mechanism-2 as output stage are analyzed in the following sections. OPERATION MODES OF THE COMPLIANT FOUR-BAR MECHANISM-2 Like the four-bar rigid-body mechanism shown in Fig. 6b, the output motion stage (Bar 2) of the compliant four-bar mechanism-2 has multiple operation modes under two rotational actuations (controlled by input displacements α and β ), as shown in Fig. 8. However, the compliant four-bar mechanism-2 has more operation modes than the rigid counterpart. In order to simplify the analysis, let α and β be non-negative. A coordinate system is defined in Fig. 8, which is located on Bar 2. Based on this assumption, operation modes of the compliant four-bar mechanism-2 are listed below : These operation modes are also highlighted through the printed prototype in Fig. 9. The primary motions of output motion stage (Bar-2) are the rotation in the XY plane and the translations along the X-and Y-axes; while the rotations in the XZ and YZ planes and translational motion along the Z-axis are the parasitic motions that are not the interest of this paper. Moreover, the rotation angle in the XY-plane and the Y-axis translational motion can be estimated analytically using Eqs. 
( 17) and [START_REF] Zhao | A novel compliant linear-motion mechanism based on parasitic motion compensation[END_REF]. However, the X-axis translational motion cannot be accurately estimated in such a simple way, because it is heavily affected by the shift of the rotation centres of the two cross-spring rotational joints [START_REF] Zhao | A novel compliant linear-motion mechanism based on parasitic motion compensation[END_REF]. The X-axis translational motion will be analytically studied in our future work, but will be captured by non-linear FEA. θ Z = α -β (17) D Y = 1 2 (L B + L R )(sin α + sin β ) (18) where θ Z is the rotation in the XY plane and D Y is the translational displacement in the Y-axis. L B and L R are the geometrical dimensions of the reconfigurable mechanism at hand, as defined in Fig. 8. SIMULATIONS OF THE OPERATION MODES In order to verify the operation modes of the 4R compliant mechanism-2, nonlinear FEA software is employed to simulate the motions of the compliant four-bar mechanism-2. For the FEA simulations, let L B be 100 mm, L R and L H be 50 mm, the beam thickness be 1 mm, the beam width be 23 mm, the Poissons ratio be 0.33, and the Youngs modulus be 6.9 GPa. Commercial software, COMSOL MULTIPHYSICS, is selected for the nonlinear FEA simulations, using the 10-node tetrahedral element and finer meshing technology (minimum element size 0.2 mm, curvature factor 0.4, and resolution of narrow regions 0.7). Note that the translational displacements of the Bar-2 along the X and Y axes are measured at the centre point of the top surface of the Bar-2 (termed as the interest point), as shown in Fig. Overall, for all the operation modes of the compliant four-bar mechanism-2, the obtained analytical kinematic models are accurate enough to predict the rotation angle in the XY-plane and the translation displacement along the Y-axis, under specific input actuations. Additionally, the parasitic motions are much smaller than the primary motions, which ensures that the tiny effect of the parasitic motions on the primary motions can be ignored in an acceptable way. Therefore, it has been proved that the compliant four-bar mechanism-2 can be operated in the different operation modes with high accuracy. A PROSPECTIVE APPLICATION AS A COMPLIANT GRIPPER The reconfigurable compliant four-bar mechanism-1 shown in Fig. 7 is used to design a reconfigurable gripper as shown in Fig. 14. It can exhibit four grasping modes based on the actuation of the linear actuator 1 (±α) or 2 (±β ) as displayed in Fig. 15. The first three grasping modes are angular, where the jaws of the gripper rotate about an instantaneous centre of rotation which is different for each grasping mode. The gripper displays an angular grasping mode when α = 0, β = 0 as shown in Fig. 15a, α = 0, β = 0 as shown in Fig. 15b or when α < 0, β < 0 as shown in the right Fig. 15c. The parallel grasping mode in which the jaws are parallel to one another is achieved when α > 0, β < 0 as shown in the left Fig. 15c. Thus, the reconfigurable compliant gripper at hand unveils an ability to grasp a plethora of shapes unlike other compliant grippers in literature that exhibit only one of these modes of grasping [START_REF] Hao | Design and static testing of a compact distributed-compliance gripper based on flexure motion[END_REF][START_REF] Hao | Conceptual design and modelling of a self-adaptive compliant parallel gripper for highprecision manipulation[END_REF]. 
Potential applications include micromanipulation and grasping lightweight and vulnerable materials like glass, resins, porous composites, etc. in difficult and dangerous environments. In addition, it can be used for medical applications to grasp and manipulate living tissues during surgical operations or as a gripper mounted on a parallel manipulator dedicated to fast and accurate pick-and-place operations. Figure 16 shows the prototype of the reconfigurable compliant gripper. CONCLUSIONS AND FUTURE WORK A novel idea of designing mobility-reconfigurable compliant mechanisms inspired by the constraint singularities of rigid body mechanisms was presented in this paper. A rhombus planar rigid four-bar mechanism was analyzed to identify its three operation modes and two constraint singularities separating those modes. The rigid joints were replaced by compliant joints to obtain two designs of a reconfigurable compliant four-bar mechanism. The second design was found to be more accurate and A novel reconfigurable compliant gripper less parasitic than the first one, which is verified by its nonlinear FEA simulations in different motion modes. Moreover, the compliant four-bar mechanism was shown to have four operation modes based on the particular actuation strategy unlike its rigid counterpart. A preliminary design of a compliant gripper has been designed based on the reconfigurable compliant fourbar mechanism introduced and studied in this paper. In the future, we will focus on the analytical kinetostatic modelling of the reconfigurable compliant mechanism at hand while exploring appropriate applications. We also intend to design mobility-reconfigurable compliant mechanisms based on the constraint singularities of spatial rigid body mechanisms. Fig. 7 4R 2 FIGURE 7 : 1 Fig. 7 4RFIGURE 8 : 727178 Fig. 7 4R compliant mechanisms: (a) 4R compliant mechanism-1, and (b) 4R compliant mechanism-2 1 . 1 Operation mode I : Rotation in the XY-plane about the Axis-L, when α > 0 and β = 0, as shown in Fig. 9a, 2. Operation mode II : Rotation in the XY-plane about the Axis-R when α = 0 and β > 0, as shown in Fig. 9b, 3. Operation mode III : Rotation in the XY-plane about other axes except the Axis-L and Axis-R, when α = β > 0, as shown in Fig. 9c, and 4. Operation mode IV : Pure translations in the XY-plane along Operation mode I : Rotation in the XY-plane about the Axis-L Operation mode II : Rotation in the XY-plane about the Axis-R Operation mode III : Rotation in the XY-plane about other axes except the Axis-L and Axis-R Operation mode IV : Pure translations in the XY-plane along the X and Y axes FIGURE 9 : 9 FIGURE 9: OPERATION MODES OF THE COMPLIANT FOUR-BAR MECHANISM-2 8. Results of the simulations are plotted in Figs. 10 to 13, and the following conclusions are drawn : 1. The maximum difference between the FEA results and the analytical results in terms of the Y-axis translation of the interest point (the centre of the top surface of the Bar-2) is tiny, which is less than 0.5% as shown in Figs. 10a, 11a, 12a and 13a. 2. The FEA results of the rotation in the XY-plane match the analytical results very well. The difference is less than 0.8 × 10 -3 rad (0.5% of the maximum rotation angle), which is shown in Figs. 10b, 11b and 12b. 
FIGURE 10: FEA RESULTS FOR OPERATION MODE I. FIGURE 12: FEA RESULTS FOR OPERATION MODE III. FIGURE 14: A novel reconfigurable compliant gripper. FIGURE 15: FOUR GRASPING MODES OF THE COMPLIANT GRIPPER: (a) angular grasping mode 1 (α = 0, β = 0); (b) angular grasping mode 2 (α = 0, β = 0); (c) left: parallel grasping mode (α > 0, β < 0); right: angular grasping mode 3 (α < 0, β < 0). FIGURE 16: Prototype of the reconfigurable compliant gripper. (a) Translations along the X and Y axes; (b) rotation about the Axis-L; (c) parasitic motions (rotations about the X- and Y-axes and translation along the Z-axis). ACKNOWLEDGMENT The authors would like to express their gratitude for the Ulysses 2016 grant between Ireland and France. Mr. Tim Powder and Mr. Mike O'Shea in University College Cork are appreciated.
24,821
[ "1307880", "10659" ]
[ "111023", "473973", "121067", "121067", "481388", "473973", "441569" ]
01757798
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757798/file/CableCon2017_Lessanibahri_Gouttefarde_Caro_Cardou_vf2.pdf
S Lessanibahri email: saman.lessanibahri@irccyn.ec-nantes.fr M Gouttefarde email: marc.gouttefarde@lirmm.fr S Caro P Cardou email: pcardou@gmc.ulaval.ca Twist Feasibility Analysis of Cable-Driven Parallel Robots Although several papers addressed the wrench capabilities of cable-driven parallel robots (CDPRs), few have tackled the dual question of their twist capabilities. In this paper, these twist capabilities are evaluated by means of the more specific concept of twist feasibility, which was defined by Gagliardini et al. in a previous work. A CDPR posture is called twist-feasible if all the twists (point-velocity and angular-velocity combinations), within a given set, can be produced at the CDPR mobile platform, within given actuator speed limits. Two problems are solved in this paper: (1) determining the set of required cable winding speeds at the CDPR winches being given a prescribed set of required mobile platform twists; and (2) determining the set of available twists at the CDPR mobile platform from the available cable winding speeds at its winches. The solutions to both problems can be used to determine the twist feasibility of n-degree-of-freedom (DOF) CDPRs driven by m ≥ n cables. An example is presented, where the twist-feasible workspace of a simple CDPR with n = 2 DOF and driven by m = 3 cables is computed to illustrate the proposed method. Introduction A cable-driven parallel robot (CDPR) consists of a base frame, a mobile platform, and a set of cables connecting in parallel the mobile platform to the base frame. The cable lengths or tensions can be adjusted by means of winches and a number of pulleys may be used to route the cables from the winches to the mobile platform. Among other advantages, CDPRs with very large workspaces, e.g. [START_REF] Gouttefarde | A versatile tension distribution algorithm for n-DOF parallel robots driven by n+2 cables[END_REF][START_REF] Lambert | Implementation of an Aerostat Positioning System With Cable Control[END_REF], heavy payloads capabilities [START_REF] Albus | The NIST Robocrane[END_REF], or reconfiguration capabilities, e.g. [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF] can be designed. Moreover, the moving parts of CDPRs being relatively light weight, fast motions of the mobile platform can be obtained, e.g. [START_REF] Kawamura | High-speed manipulation by using parallel wire-driven robots[END_REF]. The cables of a CDPR can only pull and not push on the mobile platform and their tension shall not become larger than some maximum admissible value. Hence, for a given mobile platform pose, the determination of the feasible wrenches at the platform is a fundamental issue, which has been the subject of several previous works, e.g. [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Hassan | Analysis of Bounded Cable Tensions in Cable-Actuated Parallel Manipulators[END_REF]. 
A relevant issue is then to determine the set of wrench feasible poses, i.e., the so-called Wrench-Feasible Workspace (WFW) [START_REF] Bosscher | Wrench-feasible workspace generation for cable-driven robots[END_REF][START_REF] Riechel | Force-feasible workspace analysis for underconstrained point-mass cable robots[END_REF], since the shape and size of the latter highly depends on the cable tension bounds and on the CDPR geometry [START_REF] Verhoeven | Analysis of the Workspace of Tendon-Based Stewart-Platforms[END_REF]. Another issue which may strongly restrict the usable workspace of a CDPR or, divide it into several disjoint parts, are cable interferences. Therefore, software tools allowing the determination of the interference-free workspace and of the WFW have been proposed, e.g. [START_REF] Ruiz | Arachnis : Analysis of robots actuated by cables with handy and neat interface software[END_REF][START_REF] Perreault | Geometric determination of the interferencefree constant-orientation workspace of parallel cable-driven mechanisms[END_REF],. Besides, recently, a study on acceleration capabilities was proposed in [START_REF] Eden | Available acceleration set for the study of motion capabilities for cable-driven robots[END_REF][START_REF] Gagliardini | Determination of a dynamic feasible workspace for cable-driven parallel robots[END_REF]. As noted in [START_REF] Gagliardini | Dimensioning of cable-driven parallel robot actuators, gearboxes and winches according to the twist feasible workspace[END_REF] and as well known, in addition to wrench feasibility, the design of the winches of a CDPR also requires the consideration of cable and mobile platform velocities since the selection of the winch characteristics (motors, gearboxes, and drums) has to deal with a trade-off between torque and speed. Twist feasibility is then the study of the relationship between the feasible mobile platform twists (linear and angular velocities) and the admissible cable coiling/uncoiling speeds. In the following, the cable coiling/uncoiling speeds are loosely referred to as cable velocities. The main purpose of this paper is to clarify the analysis of twist feasibility and of the related twist-feasible workspace proposed in [START_REF] Gagliardini | Dimensioning of cable-driven parallel robot actuators, gearboxes and winches according to the twist feasible workspace[END_REF]. Contrary to [START_REF] Gagliardini | Dimensioning of cable-driven parallel robot actuators, gearboxes and winches according to the twist feasible workspace[END_REF], the twist feasibility analysis proposed here is based on the usual CDPR differential kinematics where the Jacobian matrix maps the mobile platform twist into the cable velocities. This approach is most important for redundantly actuated CDPRs, whose Jacobian matrix is rectangular. A number of concepts in this paper are known, notably from manipulability ellipsoids of serial robots, e.g. [START_REF] Yoshikawa | Foundations of Robotics[END_REF], and from studies on the velocity performance of parallel robots, e.g. [START_REF] Krut | Velocity performance indices for parallel mechanisms with actuation redundancy[END_REF]. A review of these works is however out of the scope of the present paper whose contribution boils down to a synthetic twist feasibility analysis of n-degrees-of-freedom (DOF) CDPRs driven by m cables, with m ≥ n. The CDPR can be fully constrained or not, and the cable mass and elasticity are neglected. The paper is organized as follows. 
The usual CDPR wrench and Jacobian matrices are defined in Section 2. Section 3 presents the twist feasibility analysis, which consists in solving two problems. The first one is the determination of the set of cable velocities corresponding to a given set of required mobile platform twists (Section 3.1). The second problem is the opposite since it is defined as the calculation of the set of mobile platform twists corresponding to a given set of cable velocities (Section 3.2). The twist and cable velocity sets considered in this paper are convex Fig. 1: Geometric description of a fully constrained CDPR polytopes. In Section 4, a 2-DOF point-mass CDPR driven by 3 cables is considered to illustrate the twist feasibility analysis. Section 5 concludes the paper. Wrench and Jacobian Matrices In this section, the well-known wrench matrix and Jacobian matrix of n-DOF mcable CDPRs are defined. The wrench matrix maps the cable tensions into the wrench applied by the cables on the CDPR mobile platform. The Jacobian matrix relates the time derivatives of the cable lengths to the twist of the mobile platform. These two matrices are essentially the same since one is minus the transpose of the other. Some notations and definitions are first introduced. As illustrated in Fig. 1, let us consider a fixed reference frame, F b , of origin O b and axes x b , y b and z b . The coordinate vectors b a i , i = 1, . . . , m define the positions of the exit points, A i , i = 1, . . . , m, with respect to frame F b . A i is the point where the cable exits the base frame and extends toward the mobile platform. In this paper, the exit points A i are assumed to be fixed, i.e., the motion of the output pulleys is neglected. A frame F p , of origin O p and axes x p , y p and z p , is attached to the mobile platform. The vectors p b i , i = 1, . . . , m are the position vectors of the points B i in F p . The cables are attached to the mobile platform at points B i . The vector b l i from B i to A i is given by b l i = b a i -p -R p b i , i = 1, . . . , m (1) where R is the rotation matrix defining the orientation of the mobile platform, i.e., the orientation of F p in F b , and p is the position vector of F p in F b . The length of the straight line segment A i B i is l i = || b l i || 2 where || • || 2 is the Euclidean norm. Neglecting the cable mass, l i corresponds to the length of the cable segment from point A i to point B i . Moreover, neglecting the cable elasticity, l i is the "active" length of the cable that should be unwound from the winch drum. The unit vectors along the cable segment A i B i is given by b d i = b l i /l i , i = 1, . . . , m (2) Since the cable mass is neglected in this paper, the force applied by the cable on the platform is equal to τ i b d i , τ i being the cable tension. The static equilibrium of the CDPR platform can then be written [START_REF] Hiller | Design, analysis and realization of tendon-based parallel manipulators[END_REF][START_REF] Roberts | On the Inverse Kinematics, Statics, and Fault Tolerance of Cable-Suspended Robots[END_REF] Wτ τ τ + w e = 0 ( 3 ) where w e is the external wrench acting on the platform, τ τ τ = [τ 1 , . . . , τ m ] T is the vector of cable tensions, and W is the wrench matrix. The latter is an n × m matrix defined as W = b d 1 b d 2 . . . b d m R p b 1 × b d 1 R p b 2 × b d 2 . . . 
R p b m × b d m (4) The differential kinematics of the CDPR establishes the relationship between the twist t of the mobile platform and the time derivatives of the cable lengths l Jt = l ( 5 ) where J is the m × n Jacobian matrix and l = l1 , . . . , lm T . The twist t = [ ṗ, ω ω ω] T is composed of the velocity ṗ of the origin of frame F p with respect to F b and of the angular velocity ω ω ω of the mobile platform with respect to F b . Moreover, the well-known kineto-statics duality leads to J = -W T (6) In the remainder of this paper, l is loosely referred to as cable velocities. The wrench and Jacobian matrices depend on the geometric parameters a i and b i of the CDPR and on the mobile platform pose, namely on R and p. Twist Feasibility Analysis This section contains the contribution of the paper, namely, a twist feasibility analysis which consists in solving the following two problems. 1. For a given pose of the mobile platform of a CDPR and being given a set [t] r of required mobile platform twists, determine the corresponding set of cable velocities l. The set of cable velocities to be determined is called the Required Cable Velocity Set (RCVS) and is denoted l r . The set [t] r is called the Required Twist Set (RTS). 2. For a given pose of the mobile platform of a CDPR and being given a set l a of available (admissible) cable velocities, determine the corresponding set of mobile platform twists t. The former set, l a , is called the Available Cable Velocity Set (ACVS) while the latter is denoted [t] a and called the Available Twist Set (ATS). In this paper, the discussion is limited to the cases where both the RTS [t] r and the ACVS l a are convex polytopes. Solving the first problem provides the RCVS from which the maximum values of the cable velocities required to produce the given RTS [t] r can be directly deduced. If the winch characteristics are to be determined, the RCVS allows to determine the required speeds of the CDPR winches. If the winch characteristics are already known, the RCVS allows to test whether or not the given RTS is feasible. Solving the second problem provides the ATS which is the set of twists that can be produced at the mobile platform. It is thus useful either to determine the velocity capabilities of a CDPR or to check whether or not a given RTS is feasible. Note that the feasibility of a given RTS can be tested either in the cable velocity space, by solving the first problem, or in the space of platform twists, by solving the second problem. Besides, note also that the twist feasibility analysis described above does not account for the dynamics of the CDPR. Problem 1: Required Cable Velocity Set (RCVS) The relationship between the mobile platform twist t and the cable velocities l is the differential kinematics in [START_REF] Eden | Available acceleration set for the study of motion capabilities for cable-driven robots[END_REF]. According to this equation, the RCVS l r is defined as the image of the convex polytope [t] r under the linear map J. Consequently, l r is also a convex polytope [START_REF] Ziegler | Lectures on Polytopes[END_REF]. Moreover, if [t] r is a box, the RCVS l r is a particular type of polytope called a zonotope. 
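A minimal numerical sketch of this box-to-zonotope mapping is given below; the Jacobian and the required twist box are illustrative values only. The per-cable maxima over the vertex images directly give the maximum cable velocities required to produce the prescribed twist set.

```python
import numpy as np
from itertools import product

# Placeholder Jacobian of a 2-DOF CDPR driven by 3 cables, J = -[d1 d2 d3]^T
d = np.array([[-0.6, -0.8],
              [ 0.0,  1.0],
              [ 0.8, -0.6]])        # unit vectors from P to the exit points (illustrative)
J = -d

# Required Twist Set: the box |xdot| <= 1 m/s, |ydot| <= 1 m/s
rts_vertices = np.array(list(product([-1.0, 1.0], repeat=2)))

# Image of the RTS vertices under J: the RCVS is the convex hull of these points,
# a zonotope lying in the (here 2-dimensional) range space of J.
rcvs_vertices = rts_vertices @ J.T                  # each row is J t for one RTS vertex t
max_required = np.abs(rcvs_vertices).max(axis=0)    # per-cable maximum required speed

print("images of the RTS vertices (one column per cable):")
print(np.round(rcvs_vertices, 3))
print("maximum required cable velocities:", np.round(max_required, 3))
```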
Such a transformation of a box into a zonotope has previously been studied in CDPR wrench feasibility analysis [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gallina | 3-dof wire driven planar haptic interface[END_REF][START_REF] Gouttefarde | Characterization of Parallel Manipulator Available Wrench Set Facets[END_REF]. Indeed, a box of admissible cable tensions is mapped by the wrench matrix W into a zonotope in the space of platform wrenches. However, a difference lies in the dimensions of the matrices J and W, J being of dimensions m × n while W is an n × m matrix, where n ≤ m. When n < m, on the one hand, W maps the m-dimensional box of admissible cable tensions into the n-dimensional space of platform wrenches. On the other hand, J maps n-dimensional twists into its range space which is a linear subspace of the m-dimensional space of cable velocities l. Hence, when J is not singular, the n-dimensional box [t] r is mapped into the zonotope l r which lies into the ndimensional range space of J, as illustrated in Fig. 3 . When J is singular and has rank r, r < n, the n-dimensional box [t] r is mapped into a zonotope of dimension r. When an ACVS l a is given, a pose of the mobile platform of a CDPR is twist feasible if l r ⊆ l a (7) Since l a is a convex polytope, ( 7) is verified whenever all the vertices of l r are included in l a . Moreover, it is not difficult to prove that l r is the convex hull of the images under J of the vertices of [t] r . Hence, a simple method to verify if a CDPR pose is twist feasible consists in verifying whether the images of the vertices of [t] r are all included into l a . Problem 2: Available Twist Set (ATS) The problem is to determine the ATS [t] a corresponding to a given ACVS l a . In the most general case considered in this paper, l a is a convex polytope. By the Minkowski-Weyl's Theorem, a polytope can be represented as the solution set of a finite set of linear inequalities, the so-called (halfspace) H-representation of the polytope [START_REF] Fukuda | Frequently asked questions in polyhedral computation[END_REF][START_REF] Ziegler | Lectures on Polytopes[END_REF], i.e. l a = { l | C l ≤ d } (8) where matrix C and vector d are assumed to be known. According to (5), the ATS is defined as [t] a = { t | Jt ∈ l a } (9) which, using (8), implies that [t] a = { t | CJt ≤ d } (10) The latter equation provides an H-representation of the ATS [t] a . In practice, when the characteristics of the winches of a CDPR are known, the motor maximum speeds limit the set of possible cable velocities as follows li,min ≤ li ≤ li,max [START_REF] Gouttefarde | Characterization of Parallel Manipulator Available Wrench Set Facets[END_REF] where li,min and li,max are the minimum and maximum cable velocities. Note that, usually, li,min = -li,max , l1,min = l2,min = . . . = lm,min , and l1,max = l2,max = . . . = lm,max . In other words, C and d in (8) are defined as C = 1 -1 and d = l1,max , . . . , lm,max , -l1,min , . . . , -lm,min T ( 12 ) where 1 is the m × m identity matrix. Eq. ( 10) can then be written as follows [t] a = { t | lmin ≤ Jt ≤ lmax } (13) where lmin = l1,min , . . . , lm,min T and lmax = l1,max , . . . , lm,max T . When a RTS [t] r is given, a pose of the mobile platform of a CDPR is twist feasible if [t] r ⊆ [t] a (14) In this paper, [t] r is assumed to be a convex polytope. Hence, ( 14) is verified whenever all the vertices of [t] r are included in [t] a . 
With the H-representation of [t]a in (10) (or in (13)), testing if a pose is twist feasible amounts to verifying that all the vertices of [t]r satisfy the inequality system in (10) (or in (13)). Testing twist feasibility thereby becomes a simple task as soon as the vertices of [t]r are known. Finally, let the twist feasible workspace (TFW) of a CDPR be the set of twist feasible poses of its mobile platform. It is worth noting that the boundaries of the TFW are directly available in closed form from [START_REF] Gallina | 3-dof wire driven planar haptic interface[END_REF] or [START_REF] Hassan | Analysis of Bounded Cable Tensions in Cable-Actuated Parallel Manipulators[END_REF]. If the vertices of the (convex) RTS are denoted t_j, j = 1, . . . , k, and the rows of the Jacobian matrix are -w_i^T, according to (13), the TFW is defined by l_i,min ≤ -w_i^T t_j and -w_i^T t_j ≤ l_i,max, for all possible combinations of i and j. Since w_i contains the only variables in these inequalities that depend on the mobile platform pose, and because the closed-form expression of w_i as a function of the pose is known, the expressions of the boundaries of the TFW are directly obtained.
Case Study
This section deals with the twist feasibility analysis of the two-DOF point-mass planar CDPR driven by three cables shown in Fig. 2. The robot is 3.5 m long and 2.5 m high. The three exit points of the robot are named A_1, A_2 and A_3, respectively. The point mass is denoted P. b d_1, b d_2 and b d_3 are the unit vectors, expressed in frame F_b, of the vectors pointing from the point mass P to the cable exit points A_1, A_2 and A_3, respectively. The 3 × 2 Jacobian matrix J of this planar CDPR takes the form:

J = -[ b d_1  b d_2  b d_3 ]^T    (15)

Figure 3 is obtained by solving Problem 1 formulated in Sec. 3. For the robot configuration depicted in Fig. 3a and the given RTS of the point mass P represented in Fig. 3b, the RCVS of the three cables of the planar CDPR are illustrated in Figs. 3c to 3f. Note that the RTS is defined as:

-1 m.s^-1 ≤ ẋ_P ≤ 1 m.s^-1    (16)
-1 m.s^-1 ≤ ẏ_P ≤ 1 m.s^-1    (17)

where [ẋ_P, ẏ_P]^T is the velocity of P in the fixed reference frame F_b.
Fig. 2: A two-DOF point-mass planar cable-driven parallel robot driven by three cables (frame F_b with origin O_b, exit points A_1, A_2, A_3, point mass P, unit vectors d_1, d_2, d_3; the frame is 3.5 m wide and 2.5 m high)
Figure 4 depicts the isocontours of the Maximum Required Cable Velocity (MRCV) of each cable through the Cartesian space for the RTS shown in Fig. 3b. Those results are obtained by solving Problem 1 for all positions of point P. It is apparent that the RTS is satisfied throughout the Cartesian space as long as the maximum velocity of each cable is higher than √2 m.s^-1, namely, l_1,max = l_2,max = l_3,max = √2 m.s^-1 with l_i,min = -l_i,max, i = 1, 2, 3. For the Available Cable Velocity Set (ACVS) defined by inequalities [START_REF] Gouttefarde | Characterization of Parallel Manipulator Available Wrench Set Facets[END_REF] with

l_i,max = 1.3 m.s^-1, i = 1, 2, 3    (18)

Fig. 5 is obtained by solving Problem 2 formulated in Sec. 3. For the two robot configurations illustrated in Figs. 5a and 5c, the Available Twist Set (ATS) associated with the foregoing ACVS is determined from Eq. (13). It is noteworthy that the ATS in each configuration is delimited by three pairs of lines normal to the three cables, respectively. It turns out that the first robot configuration is twist feasible for the RTS defined by Eqs. (16) and (17) because the latter is included in the ATS, as shown in Fig. 5b.
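To give a numerical flavour of this case study, the sketch below builds the 3 × 2 Jacobian of Eq. (15) and evaluates the maximum required cable velocities at one position of P for the RTS of Eqs. (16)-(17). The exit-point coordinates are assumptions chosen for a 3.5 m × 2.5 m frame and may differ from the exact geometry used in the paper.

```python
import numpy as np

# Assumed exit-point coordinates (in metres) on a 3.5 m x 2.5 m frame; the
# exact values used in the paper may differ.
A1, A2, A3 = np.array([0.0, 2.5]), np.array([3.5, 2.5]), np.array([1.75, 0.0])
EXIT_POINTS = np.vstack((A1, A2, A3))

def planar_jacobian(p):
    """J = -[d1 d2 d3]^T, with d_i the unit vector from the point mass P
    (position p) to the exit point A_i, as in Eq. (15)."""
    d = EXIT_POINTS - np.asarray(p, dtype=float)   # vectors from P to the A_i
    d /= np.linalg.norm(d, axis=1, keepdims=True)  # valid while P is not at an A_i
    return -d

def mrcv(p, v_max=1.0):
    """Maximum required cable velocities at position p for the box RTS
    |x_dot| <= v_max, |y_dot| <= v_max of Eqs. (16)-(17)."""
    J = planar_jacobian(p)
    corners = np.array([[sx * v_max, sy * v_max] for sx in (-1, 1) for sy in (-1, 1)])
    return np.abs(corners @ J.T).max(axis=0)       # one entry per cable

print(mrcv([1.0, 1.0]))   # each entry lies between 1 and sqrt(2) m/s
```

Because each row of J is a unit vector, no entry of the result can exceed √2 m.s^-1 for this RTS, which is consistent with the MRCV isocontours of Fig. 4.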
Conversely, the second robot configuration is not twist feasible as the RTS is partially outside the ATS, as shown in Fig. 5d. Finally, Fig. 6 shows the TFW of the planar CDPR for four maximum cable velocity limits and for the RTS shown in Fig. 3b. It is apparent that all robot poses are twist feasible as soon as the cable velocity limits of the three cables are higher than √2 m.s^-1.
Conclusion
In summary, this paper presents two methods of determining the twist-feasibility of a CDPR. The first method uses a set of required mobile platform twists to compute the corresponding required cable velocities, the latter corresponding to cable winding speeds at the winches. The second method takes the opposite route, i.e., it uses the available cable velocities to compute the corresponding set of available mobile platform twists. The second method can be applied to compute the twist-feasible workspace, i.e., to determine the set of mobile platform poses where a prescribed polyhedral required twist set is contained within the available twist set. This method can thus be used to analyze the CDPR speed capabilities over its workspace, which should prove useful in high-speed CDPR applications. The proposed method can be seen as a dual to the one used to compute the wrench-feasible workspace of a CDPR, just as the velocity equations may be seen as dual to the static equations. From a mathematical standpoint, however, the problem is much simpler in the case of the twist-feasible workspace, as the feasibility conditions can be obtained explicitly. Nevertheless, the authors believe that the present paper nicely complements the previous works on wrench feasibility. Finally, we should point out that the proposed method does not deal with the issue of guaranteeing the magnitudes of the mobile platform point-velocity or angular velocity. In such a case, the required twist set becomes a ball or an ellipsoid, and thus is no longer polyhedral. This ellipsoid could be approximated by a polytope in order to apply the method proposed in this paper. However, since the accuracy of the approximation would come at the expense of the number of conditions to be numerically verified, part of our future work will be dedicated to the problem of determining the twist-feasibility of CDPRs for ellipsoidal required twist sets.
Fig. 3: Required Twist Set (RTS) of the point-mass P and corresponding Required Cable Velocity Sets for the three cables of the CDPR in a given robot configuration
Fig. 4: Maximum Required Cable Velocity (MRCV) of each cable through the Cartesian space for the RTS shown in Fig. 3b
Fig. 5: A feasible twist pose and an infeasible twist pose of the CDPR
Fig. 6: TFW of the planar CDPR for four maximum cable velocity limits and for the RTS shown in Fig. 3b
Acknowledgements
The financial support of the ANR under grant ANR-15-CE10-0006-01 (DexterWide project) is greatly acknowledged. This research work was also part of the CAROCA project managed by IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing Technologies for Composite, Metallic and Hybrid Structures) and of the RFI ATLANSTIC 2020 CREATOR project.
23,653
[ "1030159", "170861", "10659", "903902" ]
[ "111023", "473973", "388165", "481388", "473973", "93488" ]
01757800
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757800/file/CableCon2017_Rasheed_Long_Marquez_Caro_vf.pdf
Tahir Rasheed email: tahir.rasheed@ls2n.fr Philip Long email: philip.long@northeastern.edu David Marquez-Gamez email: david.marquez-gamez@irt-jules-verne.fr Stéphane Caro email: stephane.caro@ls2n.fr Tension Distribution Algorithm for Planar Mobile Cable-Driven Parallel Robots Keywords: Cable-Driven Parallel Robot, Mobile Robot, Reconfigurability, Tension Distribution Algorithm, Equilibrium published or not. The documents may come L'archive ouverte pluridisciplinaire Introduction A Cable-Driven Parallel Robot (CDPR) is a type of parallel robot whose movingplatform is connected to the base with cables. The lightweight properties of the CDPR makes them suitable for multiple applications such as constructions [START_REF] Albus | The nist spider, a robot crane[END_REF], [START_REF] Pott | Large-scale assembly of solar power plants with parallel cable robots[END_REF], industrial operations [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF], rehabilitation [START_REF] Rosati | Performance of cable suspended robots for upper limb rehabilitation[END_REF] and haptic devices [START_REF] Gallina | 3-dof wire driven planar haptic interface[END_REF]. A general CDPR has a fixed cable layout, i.e. fixed exit points and cable configuration. This fixed geometric structure may limit the workspace size of the manipulator due to cable collisions and some extrernal wrenches that cannot be accepted due to the robot configuration. As there can be several configurations for the robot to perform the prescribed task, an optimized cable layout is required for each task considering an appropriate criterion. Cable robots with movable exit and/or anchor points are known as Reconfigurable Cable-Driven Parallel Robots (RCDPRs). By appropriately modifying the geometric architecture, the robot performance can be improved e.g. lower cable tensions, larger workspace and higher stiffness. The recent work on RCDPR [START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF][START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Nguyen | On the analysis of largedimension reconfigurable suspended cable-driven parallel robots[END_REF][START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF][START_REF] Zhou | Analysis framework for cooperating mobile cable robots[END_REF] proposed different design strategies and algorithms to compute optimized cable layout for the required task, while minimizing appropriate criteria such as the robot energy consumption, the robot workspace size and the robot stiffness. However, for most existing RCDPRs, the reconfigurability is performed either discrete and manually or continuously, but with bulky reconfigurable systems. This paper deals with the concept of Mobile Cable-Driven Parallel Robots (MCDPRs). The idea for introducing MCDPRs is to overcome the manual and discrete reconfigurability of RCDPRs such that an autonomous reconfiguration can be achieved. A MCDPR is composed of a classical CDPR with m cables and a n degreeof-freedom (DoF) moving-platform mounted on p mobile bases. Mobile bases are four-wheeled planar robots with two-DoF translational motions and one-DoF rotational motion. A concept idea of a MCDPR is illustrated in Fig. 1 with m = 8, n = 6 and p = 4. The goal of such system is to provide a low cost and versatile robotic solution for logistics using a combination of mobile bases and CDPR. 
This system addresses an industrial need for fast pick and place operations while being easy to install, keeping existing infrastructures and covering large areas. The exit points for the cable robot is associated with the position of its respective mobile bases. Each mobile base can navigate in the environment thus allowing the system to alter the geometry of the CDPR. Contrary to classical CDPR, equilibrium for both the moving-platform and the mobile bases should be considered while analyzing the behaviour of the MCDPR. A Planar Mobile Cable-Driven Parallel Robot with four cables (m = 4), a point mass (n = 2) and two mobile bases (p = 2), shown in Fig. 2, is considered throughout this paper as an illustrative example. This paper is organized as follows. Section 2 presents the static equilibrium conditions for mobile bases using the free body diagram method. Section 3 introduces a modified real time Tension Distribution Algorithm (TDA), which takes into account the dynamic equilibrium of the moving-platform and the static equilibrium of the mobile bases. Section 4 presents the comparison between the existing and modified TDA on the equilibrium of the Static Equilibrium of Mobile Bases This section aims at analyzing the static equilibrium of the mobile bases of MCD-PRs. As both the mobile bases should be in equilibrium during the motion of the end-effector, we need to compute the reaction forces generated between the ground and the wheels of the mobile bases. Figure 2 illustrates the free body diagram for the jth mobile base. u i j denotes the unit vector of the ith cable attached to the jth mobile base, i, j = 1, 2. u i j is defined from the point mass P of the MCDPR to the exit point A i j . Using classical equilibrium conditions for the jth mobile base p j , we can write: ∑ f = 0 ⇒ m j g + f 1 j + f 2 j + f r1 j + f r2 j = 0 (1) All the vectors in Eq. ( 1) are associated with the superscript x and y for respective horizontal and vertical axes. Gravity vector is denoted as g = [0g] T where g = 9.8 m.s -2 , f 1 j = [ f x 1 j f y 1 j ] T and f 2 j = [ f x 2 j f y 2 j ] T are the reaction forces due to cable tensions onto the mobile base p j , C 1 j and C 2 j are the front and rear wheels con- tact points having ground reaction forces f r1 j = [ f x r1 j f y r1 j ] T and f r2 j = [ f x r2 j f y r2 j ] T , respectively. In this paper, wheels are assumed to be simple support points and the friction between those points and the ground is supposed to be high enough to prevent the mobile bases from sliding. The moment at a point O about z-axis for the mobile base to be in equilibrium is expressed as: M z O = 0 ⇒ g T j E T m j g + a T 1 j E T f 1 j + a T 2 j E T f 2 j + c T 1 j E T f r1 j + c T 2 j E T f r2 j = 0 (2) with E = 0 -1 1 0 (3) a 1 j = [a x 1 j a y 1 j ] T and a 2 j = [a x 2 j a y 2 j ] T denote the Cartesian coordinate vectors of the exit points A 1 j and A 2 j , c 1 j = [c x 1 j c y 1 j ] T and c 2 j = [c x 2 j c y 2 j ] T denote the Cartesian coordinate vectors of the contact points C 1 j and C 2 j . g j = [g x j g y j ] T is the Cartesian coordinate vector for the center of gravity G j of the mobile base p j . The previous mentioned vector are all expressed in the base frame F B . Solving simultaneously Eqs. 
( 1) and ( 2), the vertical components of the ground reaction forces take the form: f y r1 j = m j g(c x 2 j -g x j ) + f y 1 j (a x 1 j -c x 2 j ) + f y 2 j (a x 2 j -c x 2 j ) -f x 1 j a y 1 j -f x 2 j a y 2 j c x 2 j -c x 1 j (4) f y r2 j = m j g -f y 1 j -f y 2 j -f y r1 j (5) Equations ( 4) and ( 5) illustrate the effect of increasing the external forces (cable tensions) onto the mobile base. Indeed, the external forces exerted onto the mobile base may push the latter towards frontal tipping. It is apparent that the higher the cable tensions, the higher the vertical ground reaction force f y r 1 j and the lower the ground reaction force f y r 2 j . There exists a combination of cable tensions such that f y r 2 j = 0. At this instant, the rear wheel of the jth mobile base will lose contact with the ground at point C 2 j , while generating a moment M C1 j about z-axis at point C 1 j : M z C1 j = (g j -c 1 j ) T E T m j g + (a 1 j -c 1 j ) T E T f 1 j + (a 2 j -c 1 j ) T E T f 2 j (6) Similarly for the rear tipping f y r 1 j = 0, the jth mobile base will lose the contact with the ground at C 1 j and will generate a moment M c2 j about z-axis at point C 2 j : M z C2 j = (g j -c 2 j ) T E T m j g + (a 1 j -c 2 j ) T E T f 1 j + (a 2 j -c 2 j ) T E T f 2 j (7) As a consequence, for the first mobile base p 1 to be always stable, the moments generated by the external forces should be counter clockwise at point C 11 while it should be clockwise at point C 21 . Therefore, the stability conditions for mobile base p 1 can be expressed as: M z C11 ≥ 0 (8) M z C21 ≤ 0 (9) Similarly, the stability constraint conditions for the second mobile base p 2 are expressed as: M z C12 ≤ 0 (10) M z C22 ≥ 0 ( 11 ) where M z C12 and M z C22 are the moments of the mobile base p 2 about z-axis at the contact points C 12 and C 22 , respectively. Real-time Tension Distribution Algorithm In this section an existing Tension Distribution Algorithm (TDA) defined for classical CDPRs is adopted to Mobile Cable-driven Parallel Robots (MCDPRs). The existing algorithm, known as barycenter/centroid algorithm is presented in [START_REF] Lamaury | A tension distribution method with improved computational efficiency[END_REF][START_REF] Mikelsons | A real-time capable force calculation algorithm for redundant tendon-based parallel manipulators[END_REF]. Due to its geometric nature, the algorithm is efficient and appropriate for real time applications [START_REF] Gouttefarde | A versatile tension distribution algorithm for-dof parallel robots driven by cables[END_REF]. First, the classical Feasible Cable Tension Domain (FCTD) is defined for CDPRs based on the cable tension limits. Then, the stability (static equilibrium) conditions for the mobile bases are considered in order to define a modified FCTD for MCDPRs. Finally, a new TDA aiming at obtaining the centroid/barycenter of the modified FCTD is presented. FCTD based on cable tension limits The dynamic equilibrium equation of a point mass platform is expressed as: Wt p + w e = 0 =⇒ t p = -W + w e ( 12 ) where W = [u 11 u 21 u 12 u 22 ] is n × m wrench matrix mapping the cable tension space defined in R m onto the available wrench space defined in R (m-n) . w e denotes the external wrench exerted onto the moving-platform. W + is the Moore Penrose pseudo inverse of the wrench matrix W. t p = [t p11 t p21 t p12 t p22 ] T is a particular solution (Minimum Norm Solution) of Eq. ( 12). 
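Before the tension distribution is developed further, the static-equilibrium side of Section 2 can be made concrete with the short sketch below. It is an illustration rather than the paper's code: the vertical ground reactions of Eqs. (4)-(5) are recovered from moment balances about the two wheel contacts (assuming both contacts lie at the same height) instead of transcribing the closed-form expressions, and non-negativity of both reactions is used as a test equivalent to the moment conditions (8)-(11). All function and variable names are chosen here for illustration.

```python
import numpy as np

def ground_reactions(mass, cog, a1, a2, f1, f2, c1, c2, grav=9.81):
    """Vertical ground reactions at the two wheel contacts of one mobile base
    (cf. Eqs. (4)-(5)), obtained from moment balances about each contact.
    All positions (cog, a1, a2, c1, c2) and cable forces (f1, f2) are planar
    2-vectors expressed in the base frame; both contacts are assumed to lie
    at the same height."""
    cog, a1, a2, f1, f2, c1, c2 = (np.asarray(v, dtype=float)
                                   for v in (cog, a1, a2, f1, f2, c1, c2))
    weight = np.array([0.0, -mass * grav])

    def mz(r, f):
        # z-moment of force f applied at r, i.e. r^T E^T f = r_x f_y - r_y f_x
        return r[0] * f[1] - r[1] * f[0]

    # Moments of weight and cable forces about the two wheel contacts
    m_c1 = mz(cog - c1, weight) + mz(a1 - c1, f1) + mz(a2 - c1, f2)  # cf. Eq. (6)
    m_c2 = mz(cog - c2, weight) + mz(a1 - c2, f1) + mz(a2 - c2, f2)  # cf. Eq. (7)
    dx = c2[0] - c1[0]
    fy_r2 = -m_c1 / dx   # moment balance about c1
    fy_r1 = m_c2 / dx    # moment balance about c2
    return fy_r1, fy_r2

def base_keeps_both_contacts(mass, cog, a1, a2, f1, f2, c1, c2, grav=9.81):
    """Equivalent to the moment conditions (8)-(11): no tipping about the front
    or the rear wheel as long as both vertical reactions are non-negative."""
    fy_r1, fy_r2 = ground_reactions(mass, cog, a1, a2, f1, f2, c1, c2, grav)
    return fy_r1 >= 0.0 and fy_r2 >= 0.0
```

In particular, a vanishing rear or front reaction corresponds to the tipping moments M_C1j and M_C2j of Eqs. (6)-(7) changing sign.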
Having a degree of redundancy r = m - n = 2, a homogeneous solution t_n can be added to the particular solution t_p such that:

t = t_p + t_n  =⇒  t = -W^+ w_e + N λ    (13)

where N is the m × (m - n) null space basis of the wrench matrix W and λ = [λ_1 λ_2]^T is a (m - n)-dimensional arbitrary vector that moves the particular solution into the feasible range of cable tensions. Note that the cable tension t_ij associated with the ith cable mounted onto the jth mobile base should be bounded between a minimum tension t_min and a maximum tension t_max depending on the motor capacity and the transmission system at hand. According to [START_REF] Gouttefarde | A versatile tension distribution algorithm for-dof parallel robots driven by cables[END_REF][START_REF] Lamaury | A tension distribution method with improved computational efficiency[END_REF], there exists a 2-D affine space Σ defined by the solutions of Eq. (12) and an m-dimensional hypercube Ω defined by the feasible cable tensions:

Σ = { t | Wt + w_e = 0 }    (14)
Ω = { t | t_min ≤ t ≤ t_max }    (15)

The intersection between these two spaces amounts to a 2-D convex polygon, also known as the feasible polygon. Such a polygon exists if and only if the tension distribution admits at least one solution that satisfies the cable tension limits as well as the equilibrium of the moving-platform defined by Eq. (12). Therefore, the feasible polygon is defined in the λ-space by the following linear inequalities:

t_min - t_p ≤ N λ ≤ t_max - t_p    (16)

The terms of the m × (m - n) null space matrix N are defined as follows:

N = [ n_11 ; n_21 ; n_12 ; n_22 ]    (17)

where each component n_ij of the null space N in Eq. (17) is a (1 × 2) row vector.
FCTD based on the stability of the mobile bases
This section aims at defining the FCTD while considering the cable tension limits and the stability conditions of the mobile bases. In order to consider the stability of the mobile bases, Eqs. (8)-(11) must be expressed in the λ-space. The stability constraint at point C_11 from Eq. (8) can be expressed as:

0 ≤ (g_1 - c_11)^T E^T m_1 g + (a_11 - c_11)^T E^T f_11 + (a_21 - c_11)^T E^T f_21    (18)

f_ij is the force applied by the ith cable attached onto the jth mobile base. As f_ij is opposite to u_ij (see Fig. 2), from Eq. (13) f_ij can be expressed as:

f_ij = -[t_pij + n_ij λ] u_ij    (19)

Substituting Eq. (19) into Eq. (18) yields:

(c_11 - g_1)^T E^T m_1 g ≤ (c_11 - a_11)^T E^T [t_p11 + n_11 λ] u_11 + (c_11 - a_21)^T E^T [t_p21 + n_21 λ] u_21    (20)

The term [n_ij λ] u_ij is the mapping of the homogeneous solution t_nij for the ith cable carried by the jth mobile base into the Cartesian space. M_C11 represents the lower bound for constraint (8) in the λ-space:

M_C11 ≤ (c_11 - a_11)^T E^T [n_11 λ] u_11 + (c_11 - a_21)^T E^T [n_21 λ] u_21    (21)

where

M_C11 = (c_11 - g_1)^T E^T m_1 g + (a_11 - c_11)^T E^T t_p11 u_11 + (a_21 - c_11)^T E^T t_p21 u_21    (22)

Simplifying Eq. (21) yields:

M_C11 ≤ [ (c_11 - a_11)^T E^T u_11   (c_11 - a_21)^T E^T u_21 ] [ n_11 ; n_21 ] [ λ_1 ; λ_2 ]    (23)

Equation (23) can be written as:

M_C11 ≤ n_C11 λ    (24)

where n_C11 is a 1 × 2 row vector. Similarly, the stability constraint at point C_21 from Eq. (9) can be expressed as:

n_C21 λ ≤ M_C21    (25)

where:

M_C21 = (c_21 - g_1)^T E^T m_1 g + (a_11 - c_21)^T E^T t_p11 u_11 + (a_21 - c_21)^T E^T t_p21 u_21    (26)
n_C21 = [ (c_21 - a_11)^T E^T u_11   (c_21 - a_21)^T E^T u_21 ] [ n_11 ; n_21 ]    (27)

Equations (24) and (25) define the stability constraints of the mobile base p_1 in the λ-space for the static equilibrium about the frontal and rear wheels.
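The constraint rows just derived can be assembled programmatically. The sketch below is a hedged illustration, not the authors' code: it computes the minimum-norm particular solution and a null-space basis with the SVD (any basis of ker W works, but the numerical λ-coordinates then differ from the paper's n_ij), builds the generic row n_C and bound M_C of Eqs. (22)-(27) for one contact point, and stacks the tension-limit part of the system; the caller is expected to append the stability rows with the inequality direction appropriate to each contact point, as in the complete system presented next.

```python
import numpy as np

E = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def particular_and_nullspace(W, w_e):
    """Minimum-norm particular solution of Eq. (12) and a null-space basis N of
    the wrench matrix, obtained from the SVD."""
    W = np.asarray(W, dtype=float)
    t_p = -np.linalg.pinv(W) @ np.asarray(w_e, dtype=float)
    _, s, Vt = np.linalg.svd(W)
    rank = int(np.sum(s > 1e-10))
    N = Vt[rank:].T                       # shape m x (m - rank)
    return t_p, N

def stability_row(c, cog, mass, a_pts, u_vecs, tp_vals, n_rows, grav=9.81):
    """Row n_C and bound M_C of Eqs. (22)-(27) for one wheel contact c of a
    mobile base: a_pts/u_vecs are the exit points and cable unit vectors of the
    cables carried by that base, tp_vals and n_rows the matching entries of
    t_p and rows of N."""
    c, cog = np.asarray(c, dtype=float), np.asarray(cog, dtype=float)
    weight = np.array([0.0, -mass * grav])
    n_C = np.zeros_like(np.asarray(n_rows[0], dtype=float))
    M_C = (c - cog) @ E.T @ weight
    for a, u, tp, n_ij in zip(a_pts, u_vecs, tp_vals, n_rows):
        a, u, n_ij = (np.asarray(v, dtype=float) for v in (a, u, n_ij))
        n_C += ((c - a) @ E.T @ u) * n_ij
        M_C += (a - c) @ E.T @ (tp * u)
    return n_C, M_C

def tension_limit_constraints(t_p, N, t_min, t_max):
    """Eq. (16) stacked as A lam <= b:  t_min - t_p <= N lam <= t_max - t_p."""
    A = np.vstack((N, -N))
    b = np.concatenate((np.asarray(t_max, dtype=float) - t_p,
                        t_p - np.asarray(t_min, dtype=float)))
    return A, b
```

For base p_1 the front-contact row enters as a lower bound (Eq. (24)) and the rear-contact row as an upper bound (Eq. (25)); the directions swap for base p_2, which is what the ±∞ entries of the complete system encode.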
Similarly, the above procedure can be repeated to compute the stability constraints in the λ λ λ -space for mobile base p 2 . Constraint Eqs. ( 10) and ( 11) for point C 12 and C 22 can be expressed in the λ λ λ -space as: n C12 λ λ λ ≤ M C12 (28) M C22 ≤ n C22 λ λ λ (29) Considering the stability constraints related to each contact point (Eqs. ( 24), ( 25), ( 28) and ( 29)) with the cable tension limit constraints (Eq. ( 16)), the complete system of constraints to calculate the feasible tensions for MCDPR can be expressed as: t -t p M ≤ N N c λ 1 λ 2 ≤ t -t p M (30) where: N c =     n C11 n C21 n C12 n C22     , M =     M C11 -∞ -∞ M C22     , M =     ∞ M C21 M C12 ∞     , (31) The terms -∞ and ∞ are added for the sake of algorithm [START_REF] Gouttefarde | A versatile tension distribution algorithm for-dof parallel robots driven by cables[END_REF] as the latter requires bounds from both ends. The upper part of Eq. (30) defines the tension limit constraints while the lower part represents the stability constraints for both mobile bases. Tracing FCTD into the λ λ λ -space The inequality constraints from Eq. ( 30) are used to compute the feasible tension distribution among the cables using the algorithm in [START_REF] Gouttefarde | A versatile tension distribution algorithm for-dof parallel robots driven by cables[END_REF] for tracing the feasible polygon P I . Each constraint defines a line in the λ λ λ -space where the coefficients of λ λ λ define the slope of the corresponding lines. The intersections between these lines form a feasible polygon. The algorithm aims to find the feasible combination for λ 1 and λ 2 (if it exists), that satisfies all the inequality constraints. The algorithm can start with the intersection point v i j between any two lines L i and L j where λ 1 λ 2 L 1, m ax L 1, m in L 4,min 4,max L 2, m in L L 3,m in L 3,m ax L 2, m ax v 2 v 3 v8 v 7 v 6 v v 4 v 5 f = v 1 init v = P I1 L 1, m ax L 1, m in L 4,min 4,max L2 ,m in L 2, m ax L 3,m in L 3,m ax L LC 2 1 ,m a x L C 1 2 ,m a x L C 1 1 ,m in L C 2 2 ,m in λ 1 λ 2 v = init v 1 v 2 v 5 v 4 v 3 v 6 v 7 v 8 v f = P I 2 Fig. 4: Feasible Polygon considering both tension limit and stability constraints each intersection point v corresponds to a specific value for λ λ λ . After reaching the intersection point v i j , the algorithm leaves the current line L j and follows the next line L i in order to find the next intersection point v ki between lines L k and L i . The feasible polygon P I is associated with the feasible index set I, which contains the row indices in Eq. (30). At each intersection point, the feasible index set is unchanged or modified by adding the corresponding row index of Eq. (30). It means that for each intersection point, the number of rows from Eq. (30) satisfied at current intersection point should be greater than or equal to the number of rows satisfied at previous visited points. Accordingly, the algorithm makes sure to converge toward the solution. The algorithm keeps track of the intersection points and updates the first vertex v f of the feasible polygon, which depends on the update of feasible index set I. If the feasible index set is updated at intersection point v, the first vertex of the polygon is updated as v f = v. Let's consider that the algorithm has reached a point v ki by first following line L j , then following L i intersecting with line L k . The feasible index set I ki at v ki should be such that I i j ⊆ I ki . 
If index k is not available in I i j , then I ki = I i j ∪ k as the row k is now satisfied. At each update of the feasible index set I, a new feasible polygon is achieved and the first vertex v f of the polygon is replaced by the current intersection point. This procedure is repeated until a feasible polygon (if it exists) is found, which is determined by visiting v f more than once. After computing the feasible polygon, its centroid, namely the solution furthest away from all the constraints is calculated. The λ λ λ coordinates of the centroid is used to calculate the feasible tension distribution using Eq. [START_REF] Sardain | Forces acting on a biped robot. center of pressure-zero moment point[END_REF]. For the given end-effector position in static equilibrium (see Fig. 2), the feasible polygon P I1 based only on the tension limits is illustrated in Fig. 3 while the feasible polygon P I2 based on the cable tension limits and the stability of the mobile bases is illustrated in Fig. 4. It can be observed that P I2 is smaller than P I1 and, as a consequence, their centroids are different. Case Study The stability of the mobile bases is defined by the position of their Zero Moment Point (ZMP). This index is commonly used to determine the dynamic stability of the humanoid and wheeled robots [START_REF] Lafaye | Linear model predictive control of the locomotion of pepper, a humanoid robot with omnidirectional wheels[END_REF][START_REF] Sardain | Forces acting on a biped robot. center of pressure-zero moment point[END_REF][START_REF] Vukobratović | Zero-moment pointthirty five years of its life[END_REF]. It is the point where the moment of contact forces is reduced to the pivoting moment of friction forces about an axis normal to the ground. Here the ZMP amounts to the point where the sum of the moments due to frontal and rear ground reaction forces is null. Once the feasible cable tensions are computed using the constraints of the modified TDA, the ZMP d j of the mobile base p j is expressed by the equation: M z d j = M z O -f y r j d j (32) where f y r j is the sum of all the vertical ground reaction forces computed using Eqs. ( 4) and ( 5), M d j is the moment generated at ZMP for the jth mobile base such that M z d j = 0. M O is the moment due to external forces, i.e., weight and cable tensions, except the ground reaction forces at O given by the Eq. ( 2). As a result from Eq. (32), ZMP d j will take the form: d j = M z O f y r j = g T j E T m j g + a T 1 j E T f 1 j + a T 2 j E T f 2 j f y r j (33) For the mobile base p j to be in static equilibrium, ZMP d j must lie within the contact points of the wheels, namely, Modified Algorithm for MCDPRs is validated through simulation on a rectangular test trajectory (green path in Fig. 2) where each corner of the rectangle is a zero velocity point. A 8 kg point mass is used. Total trajectory time is 10 s having 3 s for 1-2 and 3-4 paths while 2 s for 2-3 and 4-1 paths. The size of each mobile base is 0.75 m × 0.64 m × 0.7 m. The distance between the two mobile bases is 5 m with exit points A 2 j located at the height of 3 m. The evolution of ZMP for mobile base p 1 is illustrated in Fig. 5a. ZMP must lie between 0 and 0.75, which corresponds to the normalized distance between the two contact points of the wheels, for the first mobile base to be stable. By considering only cable tension limit constraints in the TDA, the first mobile base will tip over the front wheels along the path 3-4 as ZMP goes out of the limit (blue in Fig. 5a). 
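The ZMP criterion used in this case study can be sketched as follows. The snippet is illustrative: the inequalities numbered (34)-(35) are not legible in the text above, so the test implemented here — the ZMP abscissa must stay between the two wheel contact points — is the natural reading of "within the contact points of the wheels"; also, the expression is obtained from a direct moment balance about O and matches Eq. (33) up to the sign conventions chosen for E and the gravity vector.

```python
import numpy as np

def zmp_abscissa(mass, cog, a1, a2, f1, f2, grav=9.81):
    """Abscissa d of the Zero Moment Point of one mobile base: the point of the
    ground line where the resultant vertical reaction must act for the net
    moment about O to vanish (Eq. (33), up to sign conventions)."""
    cog, a1, a2, f1, f2 = (np.asarray(v, dtype=float)
                           for v in (cog, a1, a2, f1, f2))
    weight = np.array([0.0, -mass * grav])
    # z-moment about O of weight and cable forces: sum of r_x f_y - r_y f_x
    m_o = (cog[0] * weight[1] - cog[1] * weight[0]
           + a1[0] * f1[1] - a1[1] * f1[0]
           + a2[0] * f2[1] - a2[1] * f2[0])
    # Total vertical ground reaction from the force balance of Eq. (1)
    fy_r_total = mass * grav - f1[1] - f2[1]
    return -m_o / fy_r_total

def zmp_within_wheelbase(d, c1, c2):
    """Static equilibrium test: the ZMP lies between the two wheel contacts."""
    lo, hi = sorted((c1[0], c2[0]))
    return lo <= d <= hi
```

For the case study, with wheel contacts 0.75 m apart, this is exactly the 0-0.75 band plotted in Fig. 5a.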
While considering both the cable tension limits and the stability constraints, the MCDPR will complete the required trajectory with the ZMP satisfying Eqs. (34) and (35). Figure 5b depicts the positive cable tensions computed using the modified FCTD for MCDPRs. A video showing the evolution of the feasible polygon as a function of time, considering only the tension limit constraints and then both the tension limits and the stability constraints, can be downloaded at 1. This video also shows the location of the mobile base ZMP as well as some tipping configurations of the mobile cable-driven parallel robot under study.
Conclusion
This paper has introduced a new concept of Mobile Cable-Driven Parallel Robots (MCDPR). The idea is to autonomously navigate and reconfigure the geometric architecture of a CDPR without any human interaction. A new real-time Tension Distribution Algorithm is introduced for MCDPRs that takes into account the stability of the mobile bases during the computation of feasible cable tensions. The proposed algorithm ensures the stability of the mobile bases while guaranteeing a feasible cable tension distribution. Future work will deal with the extension of the algorithm to a 6-DoF MCDPR by taking into account frontal as well as sagittal tipping of the mobile bases, and with experimental validation thanks to a MCDPR prototype under construction in the framework of the European ECHORD++ "FASTKIT" project.
Fig. 1: Concept idea for Mobile Cable-Driven Parallel Robot (MCDPR) with eight cables (m = 8), a six degree-of-freedom moving-platform (n = 6) and four mobile bases (p = 4)
Fig. 2: Point mass Mobile Cable-Driven Parallel Robot with p = 2, n = 2 and m = 4
Fig. 3: Feasible Polygon considering only tension limit constraints
Fig. 5: (a) Evolution of ZMP for mobile base p_1 (b) Cable tension profile
1 https://www.youtube.com/watch?v=XmwCoH6eejw
Acknowledgements
This research work is part of the European Project ECHORD++ "FASTKIT" dealing with the development of collaborative and mobile cable-driven parallel robots for logistics.
23,122
[ "1030160", "10659" ]
[ "473973", "235335", "481388", "473973", "441569" ]
01757809
en
[ "shs", "sde" ]
2024/03/05 22:32:10
2009
https://hal.science/hal-01757809/file/Bouleau%20et%20al%20Ecological%20Indicators%202009.pdf
Gabrielle Bouleau Christine Argillier Yves Souchon Carole Barthelemy Marc Babut Carole Barthélémy How ecological indicators construction reveals social changes -the case of lakes and rivers in France come How ecological indicators construction reveals social changes -the case of lakes and rivers in France Introduction For forty years, ecological concerns have spread beyond scientific spheres. Public interest and investment for ecosystems have been growing, notably for wetlands, rivers, and lakes. As a consequence, many agencies are developing so-called 'ecological restoration' projects the efficacy of which is not always obvious [START_REF] Kondolf | Two Decades of River Restoration in California: What Can We Learn?[END_REF]. Ecological indicators (EI) are meant to support decisions in order to set restoration priorities and/or to assess whether the proposed management will improve ecological conditions or not, or to appraise completed projects. Despite increasing developments in the field of EIs during the last thirty years [START_REF] Barbour | Measuring the attainment of biological integrity in the USA: A critical element of ecological integrity[END_REF][START_REF] Kallis | Evolution of EU water policy: A critical assessment and a hopeful perspective[END_REF][START_REF] Moog | Assessing the ecological integrity of rivers: walking the line among ecological, political and administrative interests[END_REF][START_REF] Wasson | Typologie des eaux courantes pour la directive cadre européenne sur l' eau : l' approche par hydro-écorégion. Mise en place de systèmes d' information à références spatiales[END_REF], many scientists complain that such indicators are hardly used to support decisions, management plans, and programs evaluations ( [START_REF] Dale | Challenges in the development and use of ecological indicators[END_REF][START_REF] Lenz | From data to decisions: Steps to an application-oriented landscape research[END_REF]. They usually attribute the gap separating the creation and the use of EI to social factors [START_REF] Turnhout | Ecological indicators: Between the two fires of science and policy[END_REF]. Therefore, research addressing the social perspective of EI has been developed recently. Yet such work focuses on short-term analyses. It has mainly addressed what social drivers are responsible for using or not using EI, once EI have been designed by scientists. The recent literature on this topic falls in two categories: the market or the political arena as driving forces. In the first category, a market is presumed to exist for EI in which environmental scientists, the providers, must fulfil the expectations of decision makers, the buyers. In this perspective, authors recommend that EI be reliable, cheap, easy to use [START_REF] Cairns | A history of biological monitoring using benthic macroinvertebrates[END_REF][START_REF] Lenat | Using Benthic Macroinvertebrates Community Structure for Rapid, Cost-effective, Water Quality Monitoring: Rapid Bioassessment[END_REF]. They should provide users with the information they need in a form they can understand [START_REF] Shields | The role of values and objectives in communicating indicators of sustainability[END_REF][START_REF] Mcnie | Reconciling the supply of scientific information with user demands: an analysis of the problem and review of the literature[END_REF] according to their tasks, responsibilities, and values. 
Different problems in which scientists, politicians, and experts have different roles, may therefore require different indicators [START_REF] Turnhout | Ecological indicators: Between the two fires of science and policy[END_REF]. Even so, [START_REF] Ribaudo | Environmental indices and the politics of the Conservation Reserve Program[END_REF] noticed there is a need for generalised indicators which meet multiple objectives. In this market-like perspective, social demands challenge EI that were designed under purely scientific considerations. These authors suggest that defining indicators with stakeholders' participation improves their chance of success. In the second category, EI are not presumed to compete in a market but rather in a political arena. Each indicator is believed to promote a particular political point of view. Indeed, the ecological status of an ecosystem depends on the boundaries chosen for the system and on the characteristics chosen to be restored. Scholars adopting this perspective insist on the plurality of ecological objectives [START_REF] Higgs | Expanding the scope of restoration ecology[END_REF][START_REF] Jackson | Ecological restoration: a definition and comments[END_REF] and consider that defining restoration goals and objectives is a value-based activity [START_REF] Lackey | Values, policy, and ecosystem health[END_REF]. In this second perspective, economical and technical characteristics of indicators are hardly addressed. Emphasis is put on the expression of choices made within any indicator. Both categories may overlap since the frontier between science, market and policy is a fuzzy area [START_REF] Davis | The Science and Values of Restoration Ecology[END_REF][START_REF] Hobbs | Restoration ecology: the challenge of social values and expectations[END_REF][START_REF] Turnhout | Ecological indicators: Between the two fires of science and policy[END_REF]. In both perspectives, authors have addressed a small feedback loop between science and policy in which social factors act as selection forces for useful or compelling EI, which in turn are meant to supply data for adaptive management and policy changes if required (Fig. i). To date, very little attention has been paid to the manner values or expectations of stakeholders influence the development of EI prior to selection. Determining the role the social context plays in EI development requires a historical perspective to study a larger feedback loop (Fig. ii). This paper develops a method to study long-term interactions between EI development and social factors. Social factors are understood here to designate all dynamics within human society including structural constraints and human agency. We elaborate an analytical framework to account for these interactions. Then we apply this approach to five case studies in France on lakes and rivers (saprobic index, test on minnows, biotic index, fish index and rapid diagnosis). Last we conclude on the interest of this new approach which provides new elements for explaining the gap between production and use of EI. An analytical framework to study the long-term co-evolution of EI and society Managing the environment and the human population is a recent concern of states. Foucault argues that western governments and scientists became interested in indicators only in the last 150 years, as governmental legitimacy shifted from expansionism to optimizing the well-being of domestic population. 
As political definitions of well-being evolved, they reshaped the scientific agenda [START_REF] Foucault | Naissance de la biopolitique[END_REF]. Scientific facts are not simply given, but selected by actors to make sense at one point [START_REF] Fleck | Genesis and Development of a Scientific Fact[END_REF][START_REF] Latour | Science in action[END_REF]). Access to nature influences data collecting [START_REF] Forsyth | Critical Political Ecology: The Politics of Environmental Science[END_REF]. Power structures influence the way experts define crises [START_REF] Trottier | Water crises: political construction or physical reality[END_REF]. In turn, experts' discourses spur new social demand. This longterm dynamic changes relationships between technology and society [START_REF] Callon | Some elements of a sociology of translation: domestication of the scallops and the fishermen of Saint Brieuc Bay. In Power, Action and Belief: A New Sociology of Knowledge? Sociological Review Monograph[END_REF][START_REF] Latour | Science in action[END_REF][START_REF] Star | Institutional Ecology, ' translations' and Boundary Objects: Amateurs and Professionals in Berkeley' s Museum of Vertebrate Zoology, 1907-39[END_REF]Sabatier and Jenkins-Smith 1993). This applies for EI as well. Exploring their history sheds light on the manner EI evolve, are selected and kept and how in turn, they influence knowledge and social representations (Fig. ii). This research identifies such long-term influences. For this purpose, we propose an analytical framework to sketch a new approach for social studies of EI, which is (1) interdisciplinary, (2) inductive, and (3) historical. (1) A social study of EI requires a multidisciplinary dialogue between social and natural sciences. Social representations of nature are simultaneously influenced by the scientific state of the art, with which biologists are more familiar, and by the cultural and political context of the period, which social scientists know better. Both competencies are needed. They should not be separate; they should provide different assumptions for the same research questions. Once this is achieved this multidisciplinary approach has yielded an interdisciplinary framework [START_REF] Trottier | Managing Water Resources Past and Present[END_REF]. (2) An inductive and qualitative approach allows emerging concepts to be incorporated during the research [START_REF] Bryman | Qualitative Research Methodology -A review[END_REF][START_REF] Altheide | Ethnographic Content Analysis[END_REF]). In doing so, we identify causal relationships that may be significant even though not being always necessary nor sufficient. We advocate focusing on a small set of largely used indicators rather than studying many different ones. We look for factors that were significant at least in one case study. (3) Without questioning the validity of scientific methods used, we explain how social factors historically influenced the choice of species and the finality of research. For this purpose, we define constitutive elements of the historical trajectory of an EI: the historical background of places where it was developed, the way data were collected and treated, what ecological knowledge was available, what belief concerning nature and society was spread, and what were the ecological management goals. 
We elaborate on the variation-selection-retention model of socio-environmental co-evolution developed by [START_REF] Kallis | Socio-environmental co-evolution: some ideas for an analytical approach[END_REF]: why different social groups promoted different representations of nature (variation), how a mainstream representation emerged (selection) and how it was maintained despite possible criticisms (retention). The variation step received very little attention in previous academic work. A second layer of evolution was also missing in previous short-term analyses. Scholars often considered society as a homogeneous entity one could represent in one box (Fig. This analytic framework enables to address cumulative effects, historical opportunities, technological gridlocks and path-dependence of EI that are otherwise ignored. Case study background and method The recent evolution of freshwater quality and its relationship with EI development are well documented. Experts generally agree that in western countries, during the 1960s, increasingly visible environmental catastrophes spurred public support for environmental legislation. This resulted in pieces of law as the Clean Water Act in the USA [START_REF] Barbour | Measuring the attainment of biological integrity in the USA: A critical element of ecological integrity[END_REF] and the first Council Directives in the European Community addressing water quality [START_REF] Aubin | European Water Policy. A path towards an integrated resource management[END_REF]. The adopted legislation has proven to be generally efficient in addressing point-source chemical pollution but is too narrowly focused to deal with habitat degradation that became more obvious once industrial and urban discharges into streams were reduced [START_REF] Karr | Defining and assessing ecological integrity beyond water quality[END_REF]. EI development appeared as a secondary social response to this initial improvement in chemical quality of freshwater. Yet the knowledge and the data used to design EI were developed much sooner in the last century. Therefore a study stretching back to the late nineteenth century was necessary to understand what data was available and what social representations framed (or were framed by) the emergence of EI in France. Identifying consistent periods during which water management was uniform is difficult. Transitions emerge slowly from several, sometimes conflicting, influences. Yet, several-decade phases offer convenient time intervals to identify social changes affecting water management. Three main periods are presented within the scope of our study. In the first phase , the quality of surface water became a national issue for fishermen and hygienists as they developed specific sectors for drinking water and fishing. In the second phase economists and urban planners began to worry about the availability of water resources and they developed a sector to treat pollution. In the third phase (since 1980) a social demand for ecosystems management grew at the European level and promoted water body restoration. Our research project was undertaken by two sociologists and three freshwater biologists. Sociologists undertook in depth interviews with actors having played different roles in different places (managers, scientists, activists, policy makers …). They conducted thirty-two 2-3 hour semi-structured interviews with people involved in the design or the use of five indicators in France -i.e. 
saprobic index [START_REF] Kolkwitz | Okologie des pflanzlichen saprobien[END_REF], test on minnows (Phoxinus phoxinus, L.1758), biotic index [START_REF] Woodiwiss | The biological system of stream classification used by the Trent River Board[END_REF][START_REF] Tufféry | Méthode de détermination de la qualité biologique des eaux courantes -exploitation codifiée des inventaires de la faune de fond[END_REF], fish index [START_REF] Oberdorff | Development and validation of a fishbased index for the assessment of 'river health' in France[END_REF][START_REF] Pont | Assessing the biotic integrity of rivers at the continental scale: a European approach[END_REF], and rapid diagnosis for lake [START_REF] Barbe | Diagnose rapide des plans d' eau[END_REF]). These indicators are the most currently used by water agencies and wildlife services. Sociologists attend some field work to get familiar with methods. Freshwater biologists compiled bibliographic and ethnographic work on published and non-published ecological research. Together we analysed the evolution of available knowledge and social justifications in each document. The snowballing method was used to identify actors involved in this process. Being affiliated to Cemagref and/or graduated from the National School of ingénieurs du GREF sometimes helped contacting officials and state engineers involved in the story. Most interviewees were contacted several times. We asked them to review their interviews and to send related personal archives. Historical studies were used for most ancient periods. We asked the actors to tell how they have been involved in the process and how they might explain choices done before they had been involved. We triangulated the results from the interviews using scientific, legal, and technical materials published at each stage to support or confront testimonies. We completed this material with bibliographical review of social works dealing with how water-related social and scientific practices and ideas have evolved during time. We compared different trajectories in order to assess whether common characteristics could be identified. Results: the development of EI in France for lakes and rivers In this section, we focus on three historical phases, 1900-1959, 1960-1980, and since 1980 to study five EI, namely the saprobic index, the test on minnows, the biotic index, the fish index and the rapid diagnosis for lake. All EI we studied were first set up by outsiders challenging mainstream water management. Following this phase of construction where adaptation prevailed, legal changes in law induced a shift by resting the burden of proof on other stakeholders and leading to a phase of accumulation. 1900-1959: Focus on fish and germs At the end of the 19 th Century, two social groups alerted the public to the deterioration of surface water quality in France. The first group stemmed from a handful of physicians who experienced difficulty demonstrating that water could be pathogenic. Fishing clubs made up the second group as they complained about fish depletion. Unlike epidemiologists, they did not join the hygienist movement. They advocated restocking, reducing poaching, and punishing industrial discharges. These approaches led to two different tools: the saprobic index (developed in Germany at the beginning of the 20 th century) and a poorly reliable ecotoxicological test on minnows, perfected by French legal experts at the same period. 
While the hygiene act of 1902 compelled to test water quality systems systematically and secured funds for nationwide surveys, French law did not acknowledge accidental pollution as a misdemeanour before 1959 and fishermen kept struggling to obtain the recognition of its impact on fish population. This difference had a major effect on the fate of the variable promoted by each group. Sanitary actors experienced hard time in the beginning. Freshwaters were long considered very healthy in France. The miasma theory predominated, which held that odours -rather than water -were malignant. Moreover, until the mid-nineteenth century, decayed matter was considered as beneficial by most people, including a large part of physicians. Despite the initial reluctance from the medical world, some outsiders started listing organisms living in contaminated waters. They succeeded in getting other stakeholders (urban planners, engineers, politicians) interested in their hygienist ideas. These sanitary reformers made rivers suspicious as they established more relations between contaminated places and water [START_REF] Goubert | La conquête de l' eau. L' avènement de la santé à l' âge industriel[END_REF]. It took time before hygienism spread out as a political doctrine with the scientific support of German physician Robert Koch (1843Koch ( -1910) ) and French chemist Louis Pasteur (1822-1895) pointing out the existence of germs. The French hygiene act of 1902 resulted from this evolution and paved the way for systematic chemical and biological assessments of diversions for water supply. The first EI got stabilized in this context. German naturalists [START_REF] Kolkwitz | Okologie des pflanzlichen saprobien[END_REF] developed the "saprobic" index. It consisted of an array of organisms ranked according to their response to organic matter. Presence and abundance of saprobic organisms, those that resisted best to faecal contamination, meant risks and corresponding waters were not to be used for drinking purposes. The saprobic index became a common "passage point" [START_REF] Star | Institutional Ecology, ' translations' and Boundary Objects: Amateurs and Professionals in Berkeley' s Museum of Vertebrate Zoology, 1907-39[END_REF] for many technologies addressing human well-being in cities. Sanitary authorities published in 1903 the first nationwide inventory of water addressing quality and updated periodically [START_REF] Goubert | La conquête de l' eau. L' avènement de la santé à l' âge industriel[END_REF]). Ironically, the sanitary perspective did not lead to any improvement of surface water quality. Since technical solutions to convey spring water or groundwater such as long aqueducts existed, urban rivers became stale. Large sewers were developed, more and more effluents were discharged in rivers [START_REF] Barles | The Nitrogen Question: Urbanization, Industrialization, and River Quality in Paris, 1830-1939[END_REF]. The development of the saprobic index is a success story. Outside the mainstream medical world, a few physicians lobbied to change waste water management and ended with standardization of the EI once the law institutionalized hygienist ideas. Fishermen were not as successful. Fishing clubs had long complained about industrial impacts. But fish depletion in freshwaters was always attributed to poaching [START_REF] Barthélémy | Des rapports sociaux à la frontière des savoirs. 
Les pratiques populaires de pêche amateur au défi de la gestion environnementale du Rhône[END_REF] and French regulation targeted fishermen first. If ever acknowledged, pollution was considered as reversible and likely to be compensated by restocking. In case of fish mortality, claimants remained isolated outsiders. They had to provide evidence to pursue effluents dischargers. Fishing authorities would sample suspicious waters and perform tests on minnows to assess presence of toxic substances. Frequently samples were done after the toxic discharge stopped. Judges constantly considered that such cases were peculiar cases of poaching [START_REF] Corbin | L' avénement des loisirs (1850-1960[END_REF]. In the late 1880s, the state promoted fishing as an accessible supply of proteins to the poor. To prevent poaching and to maintain fish populations, regardless of water quality, fishing clubs and governmental authorities agreed to raise a tax on fishermen to increase control and develop hatcheries. Since the beginning of the twentieth century, fishermen helplessly asked for a specific legislation against pollution. The ecotoxicological test on minnows failed. Its story started with isolated actors claiming for a different water management. But fishermen hardly convinced larger coalitions of actors that pollution was a problem and the law did not change. No public fund was ever secured to apply the test at large scale. Since the burden of proving pollution rested on fishing clubs, data they accumulated remained scattered, and depending on their own activism. The legal acknowledgement of any ecological dysfunction is a crucial step for EI development. It changes the need for its monitoring. This is what happened in 1959, when pollution became punishable in France. 1960-1980: resource management Discontent fishermen suddenly found an opportunity to be listened to in 1959. As the colonial empire challenged French political influence overseas, President de Gaulle was trying to get legitimacy by paying attention to current domestic claims during emergency powers. By the edict of 1959, fishermen obtained that accidental pollution in rivers should be punished. Before the edict of 1959, only fishermen were trying to find evidence of pollution. After this edict, polluters too suddenly became concerned. This piece of law was the impetus of a major change in the French water management, although it was not discussed by parliament and adopted during a troublesome period [START_REF] Bouleau | La gestion française des rivières et ses indicateurs à l' épreuve de la directive cadre[END_REF]. In response, industrials and municipal councils, threatened by the new edict, asked for public funds to support treatment plants. The French post-war planned administration relied on the Commissariat Général au Plan for making prospects and public policies [START_REF] Colson | Revisiting a futures studies project--' Reflections on 1985[END_REF]. In this arena, many engineers were in favour of an economic and spatial approach of natural resources planning [START_REF] Rabinow | French Modern: Norms and Forms of the Social Environment[END_REF]. Influenced by [START_REF] Pigou | The Economics of Welfare[END_REF] and Kneese's ideas (1962), they promoted a "polluter-pays principle" at the scale of the watershed. It was enacted by the 1964 water law which established basin agencies and basin committee [START_REF] Nicolazo | Les agences de l' eau[END_REF]. 
To apply this principle, basin committees agreed on several conventional chemical parameters in order to make pollution commensurable from upstream to downstream, in rivers and lakes: suspended matter, dissolve salts, and a combined parameter including biological and chemical oxygen demand. Basin Agencies collected levies on water users and delivered grants to fund water supply and waste water treatment facilities. Needs were huge given the post-war demographic expansion, the repatriation after the end of colonisation, and industrial development fuelled by the reconstruction economy. The focus on chemistry helped build consensus on investment priorities, but it left habitat degradation aside. The biotic index was an attempt to fill the gap on rivers. Because this story is quite recent, we were able to interview actors who promoted this new EI [START_REF] Bouleau | La gestion française des rivières et ses indicateurs à l' épreuve de la directive cadre[END_REF]. The origin of the French biotic index dated back to the creation of a single state corps of engineers in 1965, ingénieurs du génie rural, des eaux et des forêts (IGREF), gathering "developers" of the corps of rural planners (ingénieurs du génie rural) and more "conservationist" engineers of the corps for water and forestry (Ingénieurs des eaux et des forêts). Developers promoted agricultural modernization. They allocated state grants to farmers and rural districts in order to drain wetlands, canalize streams, raise levees against frequent floods and build dams storing winter flows for irrigation. IGREF conservationists were critical of such projects they called "anti-tank ditches", and they felt marginalized. Basin agencies did not pay attention to upstream developments, focusing on downstream urbanized and industrialized rivers, more affected by pollution. As they shared the same concern for upstream rivers, conservationists built alliances with fishing clubs, getting them organized at the State level, supporting their claims against polluting effluents and sharing material and staff. They searched an EI that would perform better than the test on minnows. As sampling was often performed after the toxic discharge had stopped, they were looking for biological sentinels proving that pollution occurred. They hired freshwater biologists "who were anglers, interested in this stuff, because this was the issue". One IGREF said to a candidate: "your one and only job is to find some stuff we could present and standardize". Tuffery and Verneaux (1967) adapted the biotic index developed by the Trent River Authority [START_REF] Woodiwiss | The biological system of stream classification used by the Trent River Board[END_REF] to French rivers. [START_REF] Verneaux | Cours d' eau de Franche-Comté (Massif du Jura). Essai de biotypologie[END_REF] identified 10 river types on the Doubs basin based on a factorial correspondence analysis of the presence or abundance of Macrobenthic Invertebrates. In the foreword of his PhD dissertation Verneaux recounted: "This deals with a long story which started with my father twenty-five years ago on the banks of the Doubs River and is probably going to stop in few years as the river dies". He established reference conditions correlated to a combination of abiotic variables (slope, distance to source, width, and summer temperature). Then he extended his survey and adapted the biotic index to all French rivers and got it standardized (Indice Biologique Global Normalisé, IBGN). 
Conservationists enlisted the biotic index in the state regular monitoring (arrêté interministériel of September 2, 1969). Following the 1976 act on nature protection, IBGN became a common tool for environmental impact assessment. Because of its initial target on accidental pollution, IBGN remained poorly relevant to point out potential disturbance of trout habitat (channel simplication, increasing temperature) by hydraulic works. IBGN had little consideration to environmental variability, which would have required much more data collecting and processing. Albeit imperfect, such an option enabled rapid development of professional training for ecological monitoring. It secured funds for monitoring activities on a regular basis, and resulted in a today-available thirty-year database. Analysing documents concerning the biotic index, we found the same pattern of EI development as the one induced from the saprobic index case. Outsiders promoted another focus on river management, made up a new EI, enrolled allies, and obtained a legal obligation of monitoring. Our interviews revealed other aspects of the social trajectory of the biotic index. Experts focused on what they valued in personal ties and collective strategies or practices. In the case of the biotic index their preference for upstream reaches and their targeting accidental pollution were embedded in the EI they developed. to now: ecosystem management Whereas post-war industrial fast development notably improved French workers conditions, it did not thoroughly fulfil middle class expectations. Graduate employees from the baby-boom generation did not adhere completely to the stalwart praise for industrial achievements. They complained of environmental degradation. Better equipped with computers to quantitatively address ecosystem relationships, ecological scientists supported this claim [START_REF] Fabiani | Sciences des écosystèmes et protection de la nature. In Protection de la nature[END_REF][START_REF] Aspe | Construction sociale des normes et mode de pensée environnemental[END_REF]. Data collection remained the weak spot. Developers had much more data to promote their projects than ecologists had to sustain environmental preservation. The creation of the Ministry of Environment (1971) and the 1976 act requiring environmental impact assessments, were critical steps securing funding for research. But the French state would not have extended its regular monitoring without European binding legislation. It spurred the development of two other EI, the rapid diagnosis for lakes and the fish index. Although, lake dystrophy had long been ignored by the French state, freshwater biologists of the state field station in Thonon-les-Bains initiated a local eutrophication monitoring in the sixties, after algae proliferation was observed in fishermen's gillnets in nearby Lake Geneva, the largest natural lake in France and Switzerland. They chose chlorophyll a, a chemical surrogate of the abundance of the phytoplankton to demonstrate a causal relationship between nutrients and dystrophy. Similarly Swiss researchers started worm communities monitoring in the last 1970s [START_REF] Lang | Eutrophication of Lake Geneva indicated by the oligochaete communities of the profundal[END_REF]. This was the time when Northern American researchers supported by the "Soap and Detergent Association" published the article "We hung phosphates without a fair trial" (Legge and Dingeldein 1970 cited by [START_REF] Barroin | Phosphore, azote, carbone... 
du facteur limitant au facteur de maîtrise[END_REF]. Transatlantic controversies and French chemical lobbies hushed-up eutrophication alerts in France. Experts remained isolated. Managers of the Rhône Basin Agency in charge of financing restoration plans for alpine lakes finally agreed on supporting the development of a "rapid diagnosis" for lakes. Mouchel proposed to build on the trophic index developed in the United-States and Canada [START_REF] Dillon | The phosphorus-chlorophyll relathionship in lakes[END_REF][START_REF] Carlson | A trophic state index for lakes[END_REF][START_REF] Vollenweider | Synthesis report[END_REF][START_REF] Canfield | Prediction of chlorophyll a concentrations in Florida lakes: the importance of phosphorus and nitrogen[END_REF]. He correlated nutrient concentrations (total phosphorus, orthophosphates, total nitrogen, mineral nitrogen), transparency (Secchi depth) and chlorophyll a, and expanded results from the Alps to set up a national typology. Including then Oligochetae describing mineralization and assimilation of nutrients [START_REF] Lafont | Contribution à la gestion des eaux continentales : utilisation des oligochètes comme descripteurs de l' état biologique et du degré de pollution des eaux et des sédiments[END_REF][START_REF] Lafont | Un indice biologique lacustre basé sur l' examen des oligochètes[END_REF], and a Mollusc index responding to dissolved oxygen and organic matter content within sediment [START_REF] Mouthon | Un indice biologique lacustre basé sur l' examen des peuplements de mollusques[END_REF] the 'rapid diagnosis' was tested on about thirty lakes spread across different regions [START_REF] Mouchel | Essai de définition d' un protocole de diagnose rapide de la qualité des eaux des lacs. Utilisation d' indices trophiques[END_REF]1987;[START_REF] Barbe | Diagnose rapide des plans d' eau[END_REF]). Nevertheless this work did not lead to systematic lake monitoring in France. Resistance from lobbies defending chemical industries was strong [START_REF] Barroin | Phosphore, azote, carbone... du facteur limitant au facteur de maîtrise[END_REF]. Political engagement for ecological preservation of lakes was low. Such EI in the making were mainly used in the Alps. In rivers, the 1976 act on nature protection encouraged further research on impacts beyond IBGN to address morphological changes on streams. The Research Centre for Agriculture, Forestry and Rural Environment (CERAFER, which later became Cemagref) devised new methods to determine minimum instream flows downstream hydroelectric impoundments [START_REF] Souchon | Peut-on rendre plus objective la détermination des débits réservés par une approche scientifique ?[END_REF] based on American existing works [START_REF] Tennant | Instream flow regimes for fish, wildlife, recreation and related environmental resources[END_REF][START_REF] Milhous | The PHABSIM system for instream flow studies[END_REF][START_REF] Bovee | A Guide to Stream Habitat Analysis Using Instream Flow Incremental Methodology[END_REF]. It resulted in the adoption of the 1984 act on fishing which required minimum instream flows. Anticipating the revival of hydropower development after the oil crises, the Ministry of Environment and the National Center for Scientific Research (CNRS) decided to support environmental research on the Rhône in 1978 (PIREN Rhône). Between 1979 and 1985, scientists of the PIREN assessed the environmental impact of hydropower projects in the upper-Rhône valley. 
They were politically supported by ecologist activists and urban recreational users of the valley who were calling into question the power facilities. Sampling floodplain fauna and flora, they related such inventories to physical and chemical analyses. They showed evidence of seasonal longitudinal, lateral and vertical transfers between the river, the floodplain and the groundwater [START_REF] Roux | Cartographie polythématique appliquée à la gestion écologique des eaux. Etude d' un hydrosytème fluvial : le Haut-Rhône français[END_REF][START_REF] Castella | Macroinvertebrates as 'describers' of morphological and hydrological types of aquatic ecosystems abandoned by the Rhône River[END_REF][START_REF] Amoros | A method for applied ecological studies of fluvial hydrosystems[END_REF][START_REF] Richardot-Coulet | Classification and succession of former channels of the French upper rhone alluvial plain using mollusca[END_REF]. They elaborated a general deterministic understanding of the "hydrosystem" which advocated for the preservation of natural hydrological patterns in rivers. Scientific evidence and political support led to the 1992 water act. It required permits and public hearings for all developments significantly affecting water. Yet, it did not result in a national change in monitoring activities. State authorities had already much invested in collecting biochemical, IBGN and fish population data. If ecological concerns were high in the Rhône valley, they were not priorities in the rest of the country. The list of currently monitored indicators appeared to be locked-in at that level. This reduced the scope of possible data accumulation. Instead, the adaptation phase extended. Challenged by the European on-going research and inspired by Karr's work on biological integrity (1981;[START_REF] Karr | Biological Integrity: A Long-Neglected Aspect of Water Resource Management[END_REF], French freshwater biologists developed more theoretical work. They recomputed accumulated ecological data of the Rhône to test theories of functional ecology (Statzner, Resh et al. 1994;Statzner, Resh et al. 1994;[START_REF] Beisel | Stream community structure in relation to spatial variation: the influence of mesohabitat characteristics[END_REF][START_REF] Usseglio Polatera | Biological and ecological traits of benthic freshwater macroinvertebrates: relationships and definition of groups with similar traits[END_REF]. The functional approach spread out, notably at European level where it makes very different ecosystems comparable. The European political arena happens to be much more sensitive to ecologist concerns than the French one. In 2000, European Parliament and council enacted the Water Framework Directive (WFD) which has set an ecosystem-based regulation. It holds undisturbed conditions of freshwaters as references defined in different hydro-ecoregions and requires economic justification for any development resulting in a significant gap between the current status of a water body and its reference. Designed as a masterpiece of binding legislation for European waters, the WFD created favourable conditions to break down the previous locked-in fits of the French freshwater monitoring. Freshwater biologists re-processed the thirty-year database of faunistic inventories used for IBGN to identify eco-regional references for rivers [START_REF] Khalanski | Quelles variables biologiques pour quels objectifs de gestion ? 
In Etat de santé des écosystèmes aquatiques -les variables biologiques comme indicateurs[END_REF][START_REF] Wasson | Typologie des eaux courantes pour la directive cadre européenne sur l' eau : l' approche par hydro-écorégion. Mise en place de systèmes d' information à références spatiales[END_REF]. Others compiled records on fish population to calibrate a "fish index" [START_REF] Oberdorff | Development and validation of a fishbased index for the assessment of 'river health' in France[END_REF]. Further research is currently underway to better understand how already monitored biological assemblages respond to contamination [START_REF] Archaimbault | Biological And Ecological Responses Of Macrobenthic Assemblages To Sediment Contamination In French Streams[END_REF]). More research is required to calibrate indicators for compartments that were previously little monitored such as diatoms and macrophytes. French lake specialists were spurred to update the 'rapid diagnosis' method according to WFD standards, i.e. better considering environmental and morphological heterogeneity, including missing biological information (abundance of taxa for example). Previous ecological research on Alpine lakes was reprocessed accordingly to propose new index WFD compliant (phytoplankton and macrobenthos index) Nevertheless, this integrative approach was mainly calibrated for the quality assessment of natural lakes of the Alpine area and remained less reliable on other lakes. Moreover, because fish experts and botanists hardly explored lakes in France before the WFD was enacted, fish and macrophytes are still to be studied in a bioindication perspective. Relationships between fish assemblages and environmental factors have been studied (Argillier, Pronier et al. 2002;Argillier, Pronier et al. 2002;[START_REF] Irz | Comparison between the fish communities of lakes, reservoirs and rivers: can natural systems help define the ecological potential of reservoirs?[END_REF], but EI development are still under development at the national level. These two cases ('rapid diagnosis' and fish index) show how EI initially designed for a specific political arena can be reshaped to fit another one. Regional EI promoters may get institutional support above state level (European Community) to impose the legal framework favourable to the implementation of EI at lower level. This requires that EI promoters, who initially worked at regional scale, shall reprocess their indicators to meet international specifications. When no previous research was conducted at national level, data must be collected in the first place. To reduce this endeavour, experts promote their methods at European level. Yet few adaptations to regional features remain enshrined in EI and impair their universality. Conclusion: a cumulative and adaptive model of social trajectories of EI This study is meant to be inductive and allows us to derive a theoretical framework from observations. Bioindication tools we studied were never optimised for the present market or political arena but rather recycled. Their development was (a) adaptive, i.e. innovative to respond to new questions; (b) constrained by law; and (c) cumulative, i.e. built on what already existed. (a) We observed innovative ecological monitoring stemming from regional outsiders who wanted to address a new water quality problem raised by a social minority. Interaction between regional scientists and the social minority framed the issue (double arrow in Fig. iii). 
Actors reluctant to admit the problem asked for evidence (Fig. iii shows no direct influence from innovation to implementation). The burden of the proof rested on EI developers, who adapted previously existing data and knowledge to their specific concern. It consisted of changing the format of data and including new information, very often without immediate recognition. From adapted and new gathered data, they induced ecological causal relationships. They refined protocols in order to address more specifically the problem. At this stage, ecological data were not framed as indicators -i.e. decision support tools -they were only variables. But variables could be mapped in space and monitored in time. This phase is called "adaptation" in Fig. iii. (b) New EI promoters experienced difficulty in convincing existing institutions and did not get much funding for implementing their monitoring. Some failed to convince other actors and went on adapting their tools at regional level. Others were able to raise larger public attention or to meet other stakeholders' concerns who joined in "advocacy coalitions" [START_REF] Sabatier | The Advocacy Coalition Framework. An Assessment[END_REF]. Negotiations required quantification and standardization. Variables that were previously only used by activists became "passage points" [START_REF] Star | Institutional Ecology, ' translations' and Boundary Objects: Amateurs and Professionals in Berkeley' s Museum of Vertebrate Zoology, 1907-39[END_REF] for many different actors and got standardized. People translated their interests using such variables, and spread their use. Successful EI promoters found a suitable political arena for changing law despite institutional resistance to change and got their EI institutionalized. We illustrate this phase with the box "institutions" in Fig. iii. (c) Evolutions we studied were also cumulative processes. Once enacted, a new law imposed that a successful EI had to be regularly monitored. It transferred the burden of the proof onto projects initiators whose plans might harm freshwaters or made the state responsible for collecting data. It spread the use of new EI outside the region where it was developed. Given the cost of developing an EI, recycling prevailed. Systematic monitoring got established and data became available for further research. We call this phase "accumulation" in Fig . iii. Yet regional fits of the EI were often kept in this process. Moreover the inertia of the routine worked as a locked-in fit preventing other EI to be adopted. Adaptive management is limited (dotted line in Fig. iii). Therefore new environmental problems could hardly be seen through mainstream monitoring and required another cycle of adaptation (a). This analytical model applies in the development of all the five EI we studied, (i.e. saprobic index, test on minnows, biotic index, 'rapid diagnosis', and fish index). It reveals the evolution of French water management in two ways. The emergence of an EI corresponds to the emergence of an ecological problem mainly ignored by authorities. Secondly groups of biota used for the construction of an EI reveal a converging interest of different stakeholders for the same type of information at one historical moment. We inherit from EI that fit the social context of their emergence. New phases of adaptation may be required to adjust them to our current concerns. It enlightens weaknesses of the models illustrated in Fig. 
This analytical framework helps us addressing the initial question: "why EI seem to be hardly used to support decisions, management plans, and programs evaluations?" In the adaptation phase, outsiders raise the alarm without bending the trend of the current management. They collect new data and reprocess available information in order to convince others. This phase can be long and very frustrating. Biologists get little support and often complain about indifferent decision-makers. Institutional recognition on the contrary is a sudden shift which challenges the existing water management. Authorities seek new solutions. EI are used to assess pilot restoration techniques. Soon best practices emerge and got standardized. Managers make plans and programs based on such standards. They used EI to define targets but EI do not influence everyday decisions. Bad records may raise the attention of the public and the decision makers. But if the problem is settled enough, social mobilization declines. In such cases EI are monitored by precaution rather than decision support. Hence two kinds of EI exist, the ones addressing an issue that is not yet acknowledged and the ones monitoring a problem already tackled. The period inbetween is too short to get noticed. One should not base the righteousness of EI according to their influence on everyday decisions. On the contrary, the virtue of environmental indicators is to put into question paradigms of management. And this may not happen everyday. On a regular basis EI are collected for the sake of accumulation of data and knowledge. They will constitute a valuable resource for the next outsiders tracking regional biases of existing EI and revealing unnoticed environmental problems. Figures i and ii). To account for the social evolution, we propose to split the social component into two elements (Fig. iii), one representing what is established (institutions, law) and one representing more evolving social features (coalitions, representations). i and Fig. ii and supports the theoretical framework presented in Fig. iii. The adaptation phase accounts for innovations mainly due to outsiders who seek opportunities for institutionalization. Implementation is more an accumulation phase secured by regular funding than one of selection or adaptation. Fig. 1 . 1 Fig. 1. To date, authors have studied a small loop of feedback between social and political factors and EI implementation. They have not addressed the social and political influences on the EI development. Fig. 2 .Fig. 3 . 23 Fig.2. We propose to address a larger loop of social interactions in which EI development is also included. This approach enables to take into account influences of data availability, changes in social and scientific representations, opportunity and resistance to changes. Acknowledgements This work was financially supported by Cemagref call for research "MAITRISES". The authors are grateful to the scientists, managers and stakeholders interviewed for the purpose of this study. The paper benefited from valuable discussions with and wise advice of Tim Duane, Adina Merenlender, Stefania Barca, and Matt Kondolf from University of California at Berkeley. The authors would like to thank Delaine Sampaio, Christelle Gramaglia and Julie Trottier and two anonymous reviewers for their helpful comments and suggestions.
48,588
[ "20857", "772072", "736113" ]
[ "568889", "18344", "182258", "12804", "18361" ]
01737915
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01737915/file/CorentinPigot_JJAP2017_3ndrev.pdf
Corentin Pigot email: corentin.pigot@st.com Fabien Gilibert Marina Reyboz Marc Bocquet Paola Zuliani Jean-Michel Portal Phase-Change Memory: A Continuous Multilevel Compact Model of Subthreshold Conduction and Threshold Switching published or not. The documents may come L'archive ouverte pluridisciplinaire Introduction The rapidly growing market of the internet-of-things requires the use of embedded non-volatile memory (e-NVM) presenting ultra-small area, ultra-fast access time and ultra-low consumption. The mainstream solution, the NOR flash technology, needs high potentials hardly compatible with high-k metal gate of advanced CMOS processes (like FinFET or FD-SOI) and thus requires costly advanced process modifications. Other back-end resistive memories are then investigated, and among them, phase-change memory (PCM) is one of the most mature technologies and has reached a pre-production level. 1,2) The memory principle relies on the phase transition of an active element between two phases (amorphous or crystalline) presenting resistance levels separated by two or three orders of magnitude. 3) The state of the cell is determined by reading the resistance in low field area. The PCM has unique multilevel capabilities, because the resistance can vary continuously between the full RESET and the full SET state. [START_REF] Papandreou | IEEE Int. Conf. Electron. Circuits, Syst[END_REF] For some states including the RESET state, the I-V characteristics of the cell exhibits a threshold switching, above which the amorphous phase becomes suddenly conductive. The nature of this threshold switching has been a long-term discussion and relies classically on two main hypotheses. First of all, the mechanism has been reported to be mainly electronic, [START_REF] Adler | [END_REF][6][7][8][9] but recent studies brought evidences in favor of a thermal activation in some nanoscale cell. 10,11) Although it has nothing to do with phase transition, this behavior is central in the PCM's functioning and crucial for circuit designers to determine sense margin. Indeed, the read operation has to be done under the threshold, otherwise phase transition could happen. However, fast reading means setting the reading voltage as high as possible in order to maximize the current difference between two states and to speed up the read process. Moreover, the lower the SET programing current, and the lower the SET resistance achieved, so the better the programming window, [START_REF] Kluge | IMW[END_REF] which is interesting in terms of multilevel programing. To fully exploit this unique feature, designers need a trustworthy model to verify by simulation the validity of their design. Mainly, designers want to validate that no switching occurs when supplying the array during the read phase and that peripherals are well designed and provide proper biasing during programming phase with the continuous simulation of the state of the cell including the threshold switching process. In this context, a compact model requires to be fast, robust and accurate. The threshold switching is a difficult part of the PCM modeling due to its intrinsic non-linearity and abrupt transition regarding transient simulation. Thus, it may generate convergence problems in the electrical simulators used in the CAD tools. Lots of compact models of PCM have appeared through the years using various modeling strategies. 
SPICE macro-models have been developped, [START_REF] El-Hassan | [END_REF][14][15] other more physical models based on a crystalline fraction have been implemented in verilog-A, [16][17][18] but most of them dedicate themselves to the phase transition and attach too few importance to the DC behavior. Among those, some use a negative resistance area, 19) some use a Fermi-like smoothing function, 20,21) others use switches. 22) In this work, it has been modeled for the first time using exclusively self-heating mechanism of the cell. This original approach has been validated through I-V measurements for a large set of intermediate states. The simplicity and the continuity for all regimes (below and above the threshold voltage) of the approach is highly interesting in terms of simulation time and convergence ease required in compact modeling. This paper expands the abstract presented on the 2017 International Conference on Solid State Devices and Materials, 23) justifying deeper the validity of the proposed compact model, and exhibiting new simulation results. First, the measurement setup, followed by the modeling method are presented. The correlation between experimental and modeling results is then detailed, and the good convergence is validated with additional simulations. Finally, comments on the coherency of such modeling approach is discussed and the compliance with a new cell-metric is shown. Experimental Setup and Modeling Method Experimentation Measurements have been performed on a test structure manufactured on a 90nm CMOS node with embedded PCM option. This test structure is composed of a PCM stack serially connected to a MOS transistor, the latter being used to limit the current flowing through the cell. A TEM cross-section along with a 3-D equivalent schematic of the memory cell is shown on Electrode (TE) and a heater with a wall structure shape. 3) The size of the amorphous dome that can be seen on Fig. 1 reflects the state of the cell, so the goal of the measurements is to let this thickness ua vary in order to highlight threshold switching for all the states where it happens. Tuning the WL voltage from 1V to 2V, a resistance continuum between 125kΩ and 1.3MΩ can be achieved. The current-voltage characteristics is then obtained by reading the Bit Line current while applying a 1V/ms ramp (0 to 2V) on the top electrode. During this read phase, the WL voltage is set to 1.2V to limit the current and thus the PCM stress. In order to avoid any drift effect, 24) and to ensure similar measurement conditions whatever the resistance level, a fixed delay has been introduced between every SET pulse and read ramping. Compact Model It is widely known that the amorphous part of the subthreshold transport is a hopping conduction of Poole-Frenkel type. [25][26][START_REF] Shih | ICSSICT[END_REF][START_REF] Ielmini | [END_REF] In this work however, for compact model purpose, a limited density of traps is assumed and only a simplified form 29) is considered, given by, 𝐼 𝑃𝐹 = 𝐴 * 𝐹 * exp (- 𝛷-𝛽√𝐹 𝑘𝑇 ) with 𝐹 = 𝑉 𝑢 𝑎 and 𝛷 = 𝐸 𝑎 0 - 𝑎𝑇 2 𝑏+𝑇 ( 1 ) where k is the Boltzmann constant, β a constant of the material linked to its permittivity and 𝐴 is a fitting parameter. T is a global temperature inside the active area and F the electric field across the amorphous phase. It is calculated through this simplified equation under the assumption of a negligible voltage drop inside the crystalline GST, allowing the access to the amorphous thickness ua, straightly linked to the state of the memory (Fig. 1). 
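For illustration, Eq. (1) can be transcribed directly into a short numerical routine. The sketch below (Python/NumPy) is only indicative: beta, Ea0 and a follow Table I, but the prefactor A is rescaled here to a placeholder value because the unit bookkeeping of the fitted A*F product is ambiguous in the extracted text, and the Varshni parameter b is an assumed value (it is not fully visible in the model card). The point of the example is only the structure of the current law and the role of ua as a state parameter, not calibrated current levels.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant [eV/K]

def activation_energy(T, Ea0=0.3, a=1.2e-3, b=500.0):
    """Varshni-type barrier lowering with temperature: Phi = Ea0 - a*T^2/(b + T).
    Ea0 and a follow Table I; b is an assumed placeholder (its value is not visible here)."""
    return Ea0 - a * T**2 / (b + T)

def poole_frenkel_current(V, ua, T, A=7e-12, beta=24e-6):
    """Simplified Poole-Frenkel current of Eq. (1), with F = V/ua.
    A is an illustrative placeholder (not the fitted value);
    beta = 24 ueV.V^-0.5.m^0.5 as in Table I."""
    F = V / ua  # field across the amorphous phase [V/m]
    return A * F * np.exp(-(activation_energy(T) - beta * np.sqrt(F)) / (K_B * T))

# Low-field read: the resistance at a fixed read voltage grows with the amorphous
# thickness ua, which is what makes ua usable as a continuous state parameter.
for ua in (20e-9, 30e-9, 48e-9):
    I = poole_frenkel_current(0.36, ua, 300.0)
    print(f"ua = {ua*1e9:4.1f} nm   R(0.36 V) ~ {0.36 / I / 1e3:.0f} kOhm")
```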
V is the PCM's voltage, and Φ is the activation energy of a single coulombic potential well. It follows the Varshni's empirical law for its temperature dependence, 11,30) with 𝐸 𝑎 0 the barrier height at 0K, a and b material-related fitting parameters. The threshold switching is modeled as a thermal runaway in the Poole-Frenkel current triggered by the self-heating of the cell. Any elevation of the temperature in the material being due to the Joule Effect, the temperature is calculated, under the assumption of a short time constant, as, 𝑇 = 𝑇 𝑎𝑚𝑏 + 𝑅 𝑡ℎ * 𝑃 𝐽 where 𝑃 𝐽 = 𝑉 𝑃𝐶𝑀 * 𝐼 𝑃𝐶𝑀 (2) with Tamb the ambient temperature, Rth an effective thermal resistance, taking amongst other the geometry of the cell into account. 𝑃 𝐽 is the electrical power dissipated inside the PCM. As it depends on the current flowing through the cell, the calculation of the temperature implies a positive feedback responsible of the switching inside the amorphous phase. Once it has switched, a series resistance of 6kΩ corresponding to the heater resistance limits the current. Extending the field approximation as long as some amorphous phase exists in the active areaneglecting the voltage drop outside the areathe same Poole-Frenkel current is applied to all the intermediate states as well. ua parameter carries the state as it varies from 0nm to the maximum thickness ua,max extracted from the full RESET state. The crystalline resistance is said to be semiconducting-type, so it can be expressed as, 31) 𝑅 𝑐𝑟𝑦 = 𝑅 𝐶 0 * e -𝐸 𝑎 𝑐 ( 1 𝑘𝑇 𝑎𝑚𝑏 - 1 𝑘𝑇 ) (3) where 𝐸 𝑎 𝑐 is an activation energy and 𝑅 𝐶 0 = 𝑅 𝑐𝑟𝑦 when 𝑇 = 𝑇 𝑎𝑚𝑏 ; they are both treated as fitting parameters. Results and Discussion Subthreshold conduction and threshold switching modeling The comparison of the I-V characteristics between model and measurements for a full range of resistance values is presented on Fig. 4. It shows a very good agreement between data and simulations for two decades of current. The measured resistance is extracted at a constant voltage of 0.36V during the slow ramping procedure. The current at high applied voltage is fitted by the modeling of the serially connected MOS transistor. (b) (a) 800K The model card parameters are summarized in Table I. Rth is in good agreement with the commonly accepted value for a high thermal efficiency nanoscale PCM cell. 20) A value of relative dielectric constant εr = 10 11) implies, accordingly to Poole-Frenkel's theory, 25) β = 24μeV.V -0.5 .m 0.5 . a and b parameters have been chosen to fit the self-heating inside the GST but they were kept close to the one found in the literature. 11) Similarly, the couple of parameters (RC0, Eac) has been chosen to fit the self-heating of the material in crystalline phase for high current density so it is not surprising that it is found higher than previous values. 32) Fig. 5. Resistance of the cell as a function of the amorphous thickness The only model parameter that varies from one state to another is the amorphous thickness ua. Reversing Eq. ( 1), amorphous thicknesses have been calculated as a function of the measured resistance for each state. The resistance level as a function of the amorphous thickness is given Fig. 5 and one can verify that there is an excellent correlation between the simulated and the measured-based calculated ua. It means that ua can indeed be used as state parameter for further model-measurement correlation. This allows the computation of the threshold field (cf. Eq. ( 1)), which is plotted Fig. 
6, along with the threshold power (see Eq. ( )), as a function of the resistance of the cell. The threshold is defined as the value of voltage and current where the current in one voltage step of 10mV exceeds a given value of 1μA. Based on this definition, states that are less resistive than 0.45MΩthat have an amorphous dome smaller than 28.8nmdo not present the threshold switching. Robustness and coherency of the model The method used for the measurements and simulations presented Fig. 4 was to apply voltage steps on the top electrode and read the current, it is not possible to see a snapback this way. On the contrary, the current-driven simulations for different sizes of amorphous dome shown As the threshold switching is highly dependent on the temperature calculation, the subthreshold conduction in full amorphous state (ua = 48nm) as a function of the temperature has been plotted Fig. 8. The threshold power has been extracted based on the same criteria as before, and it is plotted against the ambient temperature in the inset of the Fig. 8. The subthreshold conduction has a strong temperature dependence, but the threshold seems here again to happen at fixed power. These simulations are fully coherent with previous experiments about threshold switching 11) and validate that the switching is indeed triggered by a thermal runaway in the model. I-V characteristics simulated for ambient temperature ranging from 0°C to 85°C. The inset plot the threshold power as a function of the ambient temperature, showing a constant trend. The main roadblock preventing a good multilevel programing of the phase-change memory is for now on the resistance drift, due to relaxation inside the amorphous phase. 2) The resistance of the cell tends to increase with time, preventing to correctly read the state of the cell after a while. To get rid of the inconvenience, Sebastian et al. purposed a new metric M that is less drift-dependent than the low-field resistance for the cell reading. 33) M is defined as the voltage needed to reach a reference current under the application of a linear ramping voltage. M as a function of ua is plotted Fig. 9 Conclusion This work presents a compact modeling of the threshold switching in phase-change memory based solely on self-heating in the Poole-Frenkel's conduction. This new approach presents the advantage of modeling the current characteristic in a fully continuous way, even the non-linearity of the threshold switching, which eases the convergence and speed-up the simulation time. It has been shown that the model presents a good correlation with measurements Fig. 1 . 1 Fig. 1.TEM cross-section (a) and 2D equivalent schematic (b) of the test structure. Fig. 1 . 1 Fig. 1. The 50nm-thick phase-change material (GST225) layer has been inserted between Top Fig. 2 2 Fig. 2 represents the chronogram of the measurement protocol for each programmed state. A reset pulse of 2V is applied during 200ns before any programming pulse. The word line (WL) bias is then tuned in order to modulate the bit line (BL) current during an 800ns pulse of 2V on the top electrode, resulting in a wide range of intermediate states (see Fig. 3). Fig. 3 .Fig. 2 . 32 Fig. 3. Cell resistance as a function of the programming gate voltage, displaying the continuously distributed states achieved Fig. 4 : 4 Fig. 4: Cell current versus applied voltage (a) in logarithmic scale and (b) in linear scale for several intermediate states with model (line) and measurements (symbol). Fig. 7 , 7 Fig. 
7, exhibit the snapback behavior without any convergence trouble, which illustrates the robustness of the model. The 19.2nm and 48nm simulated curves correspond respectively to the minimum and maximum size of amorphous dome measured. Those values are coherent with the height of the deposited GST layer of about 50nm. The snapback appears for amorphous domes larger than 28.8nm, this limit corresponding to the minimum size of dome where the threshold is observed. Fig. 6 . 6 Fig. 6. Threshold field (up) and power (down) versus the resistance of the cell. Fig. 7 . 7 Fig. 7. Current-driven simulation for cell states corresponding to the range measured at 273K. The snapback is observable for states that have an amorphous dome larger than 28.8nm. Fig. 8 . 8 Fig. 8. I-V characteristics simulated for ambient temperature ranging from 0°C to 85°C. The inset plot the threshold power as a function of the ambient temperature, showing a constant trend. for a detection current of 1µA. The simulation presents an excellent correlation with the measurement and both exhibits a linear relationship between M and ua. This proportionality is in agreement with Sebastian's et al., even though the amorphous thickness is not extracted the same way as in article because of the different expression of the subthreshold current used. It confirms the relevance of the choice of ua as a state parameter of the model and validates that the model is suitable for multilevel programing. Fig. 9 . 9 Fig. 9. New metric M as a function of the amorphous thickness of the state Table I . I Model card parameters Parameter Value Rth 2.0K.μW -1 A 1.45.10 -4 Ω -1 β 24µeV.V -0.5 .m 0.5 𝐸 𝑎 0 0.3eV Rheater 6kΩ 𝑅 𝐶 0 10kΩ 𝐸 𝑎 𝑐 0.1eV a 1.2meV.K -1 b
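To make the electro-thermal reading of the threshold switching concrete, the following sketch iterates Eq. (2) to a fixed point at each bias point and flags a thermal runaway when no stable low-temperature solution is found. It is a qualitative illustration only: Rth and the 6 kOhm heater resistance follow Table I, while the Poole-Frenkel prefactor and the Varshni parameter b are the same illustrative placeholders as in the earlier sketch, so the simulated threshold voltage should not be read as the fitted one.

```python
import numpy as np

# Threshold switching as a purely electro-thermal feedback: Eq. (2)
# T = Tamb + Rth * V * I is iterated with the Eq. (1) current until either a
# fixed point is reached (subthreshold regime) or the temperature runs away
# (read here as the threshold switching). Placeholder parameters as noted above.
K_B, T_AMB = 8.617e-5, 300.0          # [eV/K], [K]
R_TH, R_HEATER = 2.0e6, 6e3           # 2.0 K/uW -> 2e6 K/W, 6 kOhm (Table I)

def i_pf(V, ua, T, A=7e-12, beta=24e-6, Ea0=0.3, a=1.2e-3, b=500.0):
    F = V / ua
    phi = Ea0 - a * T**2 / (b + T)
    return A * F * np.exp(-(phi - beta * np.sqrt(F)) / (K_B * T))

def bias_point(V, ua, T_cap=900.0, it=500):
    """Self-consistent (I, T) under self-heating; switched=True on thermal runaway."""
    T = T_AMB
    for _ in range(it):
        T_new = T_AMB + R_TH * V * i_pf(V, ua, T)
        if T_new > T_cap:
            return i_pf(V, ua, T_cap), T_cap, True
        if abs(T_new - T) < 1e-6:
            return i_pf(V, ua, T_new), T_new, False
        T = T_new
    return i_pf(V, ua, T), T, False

# Voltage ramp on a RESET-like state: below threshold the current stays
# Poole-Frenkel; once the runaway is detected the heater resistance limits it.
ua = 48e-9
for V in np.arange(0.2, 1.45, 0.1):
    I, T, sw = bias_point(V, ua)
    if sw:
        I = V / R_HEATER              # crude post-switching current limit
    print(f"V = {V:4.2f} V   I = {I:9.3e} A   T = {T:5.0f} K   switched = {sw}")
```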
15,836
[ "1029597", "1029577", "18361", "20388" ]
[ "199957", "23639", "23639", "40214", "199957", "40214", "199957" ]
01620505
en
[ "spi", "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01620505v3/file/SSP2018.pdf
Luc Le Magoarou Stéphane Paquelet PARAMETRIC CHANNEL ESTIMATION FOR MASSIVE MIMO Keywords: Cramér-Rao bound, Channel estimation, MIMO Channel state information is crucial to achieving the capacity of multiantenna (MIMO) wireless communication systems. It requires estimating the channel matrix. This estimation task is studied, considering a sparse physical channel model, as well as a general measurement model taking into account hybrid architectures. The contribution is twofold. First, the Cramér-Rao bound in this context is derived. Second, interpretation of the Fisher Information Matrix structure allows to assess the role of system parameters, as well as to propose asymptotically optimal and computationally efficient estimation algorithms. INTRODUCTION Multiple-Input Multiple-Output (MIMO) wireless communication systems allow for a dramatic increase in channel capacity, by adding the spatial dimension to the classical time and frequency ones [START_REF] Telatar | Capacity of multi-antenna gaussian channels[END_REF][START_REF] Tse | Fundamentals of wireless communication[END_REF]. This is done by sampling space with several antenna elements, forming antenna arrays both at the transmitter (with nt antennas) and receiver (with nr antennas). Capacity gains over single antenna systems are at most proportional to min(nr,nt). Millimeter wavelengths have recently appeared as a viable solution for the fifth generation (5G) wireless communication systems [START_REF] Theodore S Rappaport | Millimeter wave mobile communications for 5g cellular: It will work![END_REF][START_REF] Swindlehurst | Millimeter-wave massive mimo: the next wireless revolution?[END_REF]. Indeed, smaller wavelengths allow to densify half-wavelength separated antennas, resulting in higher angular resolution and capacity for a given array size. This observation has given rise to the massive MIMO field, i.e. the study of systems with up to hundreds or even thousands of antennas. Massive MIMO systems are very promising in terms of capacity. However, they pose several challenges to the research community [START_REF] Rusek | Scaling up mimo: Opportunities and challenges with very large arrays[END_REF][START_REF] Erik G Larsson | Massive mimo for next generation wireless systems[END_REF], in particular for channel estimation. Indeed, maximal capacity gains are obtained in the case of perfect knowledge of the channel state by both the transmitter and the receiver. The estimation task amounts to determine a complex gain between each transmit/receive antenna pair, the narrowband (single carrier) MIMO channel as a whole being usually represented as a complex matrix H ∈ C nr ×nt of such complex gains. Without a parametric model, the number of real parameters to estimate is thus 2nrnt, which is very large for massive MIMO systems. Contributions and organization. In this work, massive MIMO channel estimation is studied, and its performance limits are sought, as well as their dependency on key system parameters. In order to answer this question, the framework of parametric estimation [START_REF] Kay | Fundamentals of Statistical Signal Processing: Estimation Theory[END_REF] is used. A physical channel model is first presented, with the general considered observation model, and the objective is precisely stated. The Cramér-Rao bound for is then derived, which bounds the variance of any unbiased estimator. 
Then, the interpretation of the bound allows to precisely assess the role of system design on estimation performance, as well as to propose new computationally efficient channel estimation algorithms showing asymptotic performance equivalent to classical ones based on sparse recovery. PROBLEM FORMULATION Notations. Matrices and vectors are denoted by bold upper-case and lower-case letters: A and a (except 3D "spatial" vectors that are denoted -→ a ); the ith column of a matrix A by: ai; its entry at the ith line and jth column by: aij or Aij. A matrix transpose, conjugate and transconjugate is denoted by: A T , A * and A H respectively. The image, rank and trace of a linear transformation represented by A are denoted: im(A), rank(A) and Tr(A) respectively. For matrices A and B, A ≥ B means that A-B is positive semidefinite. The linear span of a set of vectors A is denoted: span(A). The Kronecker product, standard vectorization and diagonalization operators are denoted by vec(•), diag(•), and ⊗ respectively. The identity matrix, the m×n matrix of zeros and ones are denoted by Id, 0m×n and 1m×n respectively. CN (µ,Σ) denotes the standard complex gaussian distribution with mean µ and covariance Σ. E(.) denotes expectation and cov(.) the covariance of its argument. Parametric physical channel model Consider a narrowband block fading channel between a transmitter and a receiver with respectively nt and nr antennas. It is represented by the matrix H ∈ C nr ×nt , in which hij corresponds to the channel between the jth transmit and ith receive antennas. Classically, for MIMO systems with few antennas, i.e. when the quantity nrnt is small (up to a few dozens), estimators such as the Least Squares (LS) or the Linear Minimum Mean Squared Error (LMMSE) are used [START_REF] Biguesh | Training-based mimo channel estimation: a study of estimator tradeoffs and optimal training signals[END_REF]. However, for massive MIMO systems, the quantity 2nr nt is large (typically several hundreds), and resorting to classical estimators may become computationally intractable. In that case, a parametric model may be used. Establishing it consists in defining a set of np parameters θ (θ1,...,θn p ) T that describe the channel as H ≈ f (θ) for a given function f , where the approximation is inherent to the model structure and neglected in the sequel (considering H = f (θ)). Channel estimation then amounts to estimate the parameters θ instead of the channel matrix H directly. The parametrization is particularly useful if np ≪ 2nrnt, without harming accuracy of the channel description. Inspired by the physics of wave propagation under the plane waves assumption, it has been proposed to express the channel matrix as a sum of rank-1 matrices, each corresponding to a single physical path between transmitter and receiver [START_REF] Akbar | Deconstructing multiantenna fading channels[END_REF]. Adopting this kind of modeling and generalizing it to take into account any three-dimensional antenna array geometry, channel matrices take the form H = P p=1 cper( --→ ur,p).et( --→ ut,p) H , (1) where P is the total number of considered paths (no more than a few dozens), cp ρpe jφp is the complex gain of the pth path, --→ ut,p is the unit vector corresponding to its Direction of Departure (DoD) and --→ ur,p the unit vector corresponding to its Direction of Arrival (DoA). Any unit vector -→ u is described in spherical coordinates by an azimuth angle η and an elevation angle ψ. 
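A minimal numerical sketch of this sum-of-rank-one construction is given below (Python/NumPy). The square planar array geometry, the spherical-coordinate convention and the path parameters are illustrative assumptions, and the response/steering vectors follow the plane-wave expression recalled in the next paragraph; none of this is tied to the experiments reported later.

```python
import numpy as np

def upa_positions(n_side, lam):
    """Half-wavelength square UPA in the xy-plane, centred on its centroid (3 x n)."""
    g = (np.arange(n_side) - (n_side - 1) / 2) * lam / 2
    xx, yy = np.meshgrid(g, g, indexing="ij")
    return np.vstack([xx.ravel(), yy.ravel(), np.zeros(n_side**2)])

def unit_dir(az, el):
    """Unit vector from azimuth/elevation (one common convention, assumed here)."""
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def steering(positions, u, lam):
    """Unit-norm plane-wave response/steering vector exp(-j 2*pi/lambda a_i.u)/sqrt(n)."""
    n = positions.shape[1]
    return np.exp(-1j * 2 * np.pi / lam * positions.T @ u) / np.sqrt(n)

def physical_channel(paths, pos_r, pos_t, lam):
    """Sum-of-rank-one channel of Eq. (1): H = sum_p c_p e_r(u_r,p) e_t(u_t,p)^H."""
    H = np.zeros((pos_r.shape[1], pos_t.shape[1]), dtype=complex)
    for c, (az_r, el_r), (az_t, el_t) in paths:
        H += c * np.outer(steering(pos_r, unit_dir(az_r, el_r), lam),
                          steering(pos_t, unit_dir(az_t, el_t), lam).conj())
    return H

# Example: nr = 16 (4x4 UPA), nt = 64 (8x8 UPA), P = 3 paths, each described by
# (complex gain, DoA (azimuth, elevation), DoD (azimuth, elevation)): 6 real parameters.
lam = 1.0
pos_r, pos_t = upa_positions(4, lam), upa_positions(8, lam)
paths = [(1.0,              (0.3, 0.15), (-0.5, 0.10)),
         (0.4 * np.exp(0.7j), (1.1, -0.20), (0.8, 0.25)),
         (0.1j,              (-0.9, 0.30), (1.4, -0.15))]
H = physical_channel(paths, pos_r, pos_t, lam)
print(H.shape, np.linalg.matrix_rank(H))   # (16, 64), rank <= P
```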
The complex response and steering vectors er( -→ u ) ∈ C nr and et( -→ u ) ∈ C nt are defined as (ex( -→ u ))i = 1 √ nx e -j 2π λ --→ a x,i . -→ u for x ∈ {r,t}. The set { --→ ax,1,..., ---→ ax,n x } gathers the positions of the antennas with respect to the centroid of the considered array (transmit if x = t, receive if x = r). In order to lighten notations, the matrix Ax 2π λ ( --→ ax,1,... , ---→ ax,n x ) ∈ R 3×nx is introduced. It simplifies the steering/response vector expression to ex( -→ u ) = 1 √ nx e -jA T x -→ u , where the exponential function is applied component-wise. In order to further lighten notations, the pth atomic channel is defined as Hp cper( --→ ur,p).et( --→ ut,p) H , and its vectorized version hp vec(Hp) ∈ C nr nt . Therefore, defining the vectorized channel h vec(H), yields h = P p=1 hp. Note that the channel description used here is very general, as it handles any three-dimensional antenna array geometry, not only Uniform Linear Arrays (ULA) or Uniform Planar Arrays (UPA) as is sometimes proposed. In short, the physical channel model can be seen as a parametric model with θ = {θ (p) (ρp,φp,ηr,p,ψr,p,ηt,p,ψt,p), p = 1,...,P }. There are thus 6P real parameters in this model (the complex gain, DoD and DoA of every path are described with two parameters each). Of course, the model is most useful for estimation in the case where 6P ≪ 2nrnt, since the number of parameters is thus greatly reduced. Note that most classical massive MIMO channel estimation methods assume a similar physical model, but discretize a priori the DoDs and DoAs, so that the problem fits the framework of sparse recovery [START_REF] Mallat | Matching pursuits with timefrequency dictionaries[END_REF][START_REF] Tropp | Signal recovery from random measurements via orthogonal matching pursuit[END_REF][START_REF] Bajwa | Compressed channel sensing: A new approach to estimating sparse multipath channels[END_REF]. The approach used here is different, in the sense that no discretization is assumed for the analysis. Observation model In order to carry out channel estimation, ns known pilot symbols are sent through the channel by each transmit antenna. The corresponding training matrix is denoted X ∈ C nt×ns . The signal at the receive antennas is thus expressed as HX + N, where N is a noise matrix with vec(N) ∼ CN (0,σ 2 Id). Due to the high cost and power consumption of millimeter wave Radio Frequency (RF) chains, it has been proposed to have less RF chains than antennas in both the transmitter and receiver [START_REF] Ayach | Spatially sparse precoding in millimeter wave mimo systems[END_REF][START_REF] Alkhateeb | Channel estimation and hybrid precoding for millimeter wave cellular systems[END_REF][START_REF] Heath | An overview of signal processing techniques for millimeter wave mimo systems[END_REF][START_REF] Akbar | Millimeter-Wave MIMO Transceivers: Theory, Design and Implementation[END_REF]. Such systems are often referred to as hybrid architectures. Mathematically speaking, this translates into specific constraints on the training matrix X (which has to "sense" the channel through analog precoders vi ∈ C nt , i = 1,...,nRF, nRF being the number of RF chains on the transmit side), as well as observing the signal at the receiver through analog combiners. Let us denote wj ∈ C nr , j = 1,...,nc the used analog combiners, the observed data is thus expressed in all generality as Y = W H HX+W H N, (2) where W (w1,... 
,wn c ) and the training matrix is constrained to be of the form X = VZ, where Z ∈ C n RF ×ns is the digital training matrix. Objective: bounding the variance of unbiased estimators In order to assess the fundamental performance limits of channel estimation, the considered performance measure is the relative Mean Squared Error (rMSE). Denoting indifferently H(θ) f (θ) or H the true channel (h(θ) or h in vectorized form) and H( θ) f ( θ) or Ĥ its estimate (h( θ) or ĥ in vectorized form) in order to lighten notations, rMSE is expressed rMSE = E H-Ĥ 2 F . H -2 F = Tr cov ĥ Variance + E( Ĥ)-H 2 F Bias . H -2 F , (3) where the bias/variance decomposition can be done independently of the considered model [START_REF] Kay | Fundamentals of Statistical Signal Processing: Estimation Theory[END_REF]. The goal here is to lower-bound the variance term, considering the physical model introduced in the previous subsection. The bias term is not studied in details here, but its role is evoked in section 3.3. CRAM ÉR-RAO LOWER BOUND In this section, the variance term of eq. ( 3) is bounded using the Cramér-Rao Bound (CRB) [START_REF] Calyampudi | Information and the accuracy attainable in the estimation of statistical parameters[END_REF]18], which is valid for any unbiased estimator θ of the true parameter θ. The complex CRB [START_REF] Van Den Bos | A cramér-rao lower bound for complex parameters[END_REF] states, cov g( θ) ≥ ∂g(θ) ∂θ I(θ) -1 ∂g(θ) ∂θ H , with I(θ) E ∂logL ∂θ ∂logL ∂θ H the Fisher Information Matrix (FIM), where L denotes the model likelihood, and g is any complex differentiable vector function. In particular, regarding the variance term of eq. ( 3), Tr cov h( θ) ≥ Tr ∂h(θ) ∂θ I(θ) -1 ∂h(θ) ∂θ H , (4) with ∂h(θ) ∂θ = ∂h(θ) ∂θ 1 ,..., ∂h(θ) ∂θn p . A model independent expression for the FIM is provided in section 3.1, and particularized in section 3.2 to the model of section: 2.1. Finally, the bound is derived from eq. ( 4) in section 3.3. General derivation First, notice that vectorizing eq. ( 2), the observation matrix Y follows a complex gaussian distribution, vec(Y) ∼ CN (X T ⊗W H )h(θ) µ(θ) ,σ 2 (Idn s ⊗W H W) Σ . In that particular case, the Slepian-Bangs formula [START_REF] Slepian | Estimation of signal parameters in the presence of noise[END_REF][START_REF] Bangs | Array Processing With Generalized Beamformers[END_REF] yields: I(θ) = 2Re ∂µ(θ) ∂θ H Σ -1 ∂µ(θ) ∂θ = 2α 2 σ 2 Re ∂h(θ) ∂θ H P ∂h(θ) ∂θ , (5) with P σ 2 α 2 (X * ⊗W)Σ -1 (X T ⊗W H ) where α 2 1 ns Tr(X H X) is the average transmit power per time step. Note that the expression can be simplified to P = 1 α 2 (X * X T ) ⊗ (W(W H W) -1 W H ) using elementary properties of the Kronecker product. The matrix W(W H W) -1 W H is a projection matrix onto the range of W. In order to ease further interpretation, assume that X H X = α 2 Idn s . This assumption means that the transmit power is constant during training time ( xi 2 2 = α 2 , ∀i) and that pilots sent at different time instants are mutually orthogonal (x H i xj = 0, ∀i = j). This way, 1 α 2 X * X T is a projection matrix onto the range of X * , and P can itself be interpreted as a projection, being the Kronecker product of two projection matrices [22, p.112] (it is an orthogonal projection since P H = P). Fisher information matrix for a sparse channel model Consider now the parametric channel model of section 2.1, where h = P p=1 hp, with hp = cpet( --→ ut,p) * ⊗er( --→ ur,p). Intra-path couplings. 
The derivatives of h with respect to parameters of the pth path θ (p) can be determined using matrix differentiation rules [START_REF] Brandt Petersen | The matrix cookbook[END_REF]: ∂φp , ∂h(θ) ∂ηr,p , ∂h(θ) ∂ψr,p , ∂h(θ) ∂ηt,p , ∂h(θ) ∂ψt,p , the part of the FIM corresponding to couplings between the parameters θ (p) (intra-path couplings) is expressed as • I (p,p) 2α 2 σ 2 Re ∂h H ∂θ (p) P ∂h ∂θ (p) . (6) Let us now particularize this expression. First of all, in order to ease interpretations carried out in section 4, consider the case of optimal observation conditions (when the range of P contains the range of ∂h(θ) ∂θ ). This allows indeed to interpret separately the role of the observation matrices and the antenna arrays geometries. Second, consider for example the entry corresponding to the coupling between the departure azimuth angle ηt,p and the arrival azimuth angle ηr,p of the pth path. It is expressed under the optimal observation assumption as since Ar1n r = 0 and At1n t = 0 by construction (because the antennas positions are taken with respect to the array centroid). This means that the parameters ηr,p and ηt,p are statistically uncoupled, i.e. orthogonal parameters [START_REF] David | Parameter orthogonality and approximate conditional inference[END_REF]. Computing all couplings for θ (p) yields I (p,p) = 2ρ 2 p α 2 σ 2      1 ρ 2 p 0 01×2 01×2 0 1 01×2 01×2 02×1 02×1 Br 02×2 02×1 02×1 02×2 Bt      , (7) where Bx = 1 nx   A T x --→ vη x,p 2 2 --→ vη x,p T AxA T x ---→ v ψx,p ---→ v ψx,p T AxA T x --→ vη x,p A T x ---→ v ψx,p 2 2   , (8) with x ∈ {r, t}. These expressions are thoroughly interpreted in section 4. Global FIM. Taking into account couplings between all paths, The global FIM is easily deduced from the previous calculations and block structured, 2) ... I (1,P ) I (2,1) I (2,2) . . . . . . I (P,1) I (P,P )   , where I (p,q) ∈ R 6×6 contains the couplings between parameters of the pth and qth paths and is expressed I (p,q) 2α 2 σ 2 Re ∂h ∂θ (p) H P ∂h ∂θ (q) . The off-diagonal blocks I (p,q) of I(θ), corresponding to couplings between parameters of distinct paths, or inter-path couplings, can be expressed explicitly (as in eq. ( 7) for intra-path couplings). However, the obtained expressions are less prone to interesting interpretations, and inter-paths couplings have been observed to be negligible in most cases. They are thus not displayed in the present paper, for brevity reasons. Note that a similar FIM computation was recently carried out in the particular case of linear arrays [START_REF] Garcia | Optimal robust precoders for tracking the aod and aoa of a mm-wave path[END_REF]. However, the form of the FIM (in particular parameter orthogonality) was not exploited in [START_REF] Garcia | Optimal robust precoders for tracking the aod and aoa of a mm-wave path[END_REF], as is done here in sections 4 and 5. I(θ) =   I (1,1) I (1, Bound on the variance The variance of channel estimators remains to be bounded, using eq. ( 4). From eq. ( 5), the FIM can be expressed more conveniently only with real matrices as I(θ) = 2α 2 σ 2 DT P D, with D Re{ ∂h(θ) ∂θ } Im{ ∂h(θ) ∂θ } , P Re{P} -Im{P} Im{P} Re{P} , where P is also a projection matrix. Finally, injecting eq. ( 5) into eq. ( 4) assuming the FIM is invertible, gives for the relative variance (this is actually an optimal SNR, only attained with perfect precoding and combining). Optimal bound. The first inequality in eq. 
( 9) becomes an equality if an efficient estimator is used [START_REF] Kay | Fundamentals of Statistical Signal Processing: Estimation Theory[END_REF]. Moreover, the second inequality is an equality if the condition im ∂h(θ) ∂θ ⊂ im (P) is fulfilled (this corresponds to optimal observations, further discussed in section 4). Remarkably, under optimal observations, the lower bound on the relative variance is directly proportional to the considered number of paths P and inversely proportional to the SNR, and does not depend on the specific model structure, since the influence of the derivative matrix D cancels out in the derivation. Sparse recovery CRB. It is interesting to notice that the bound obtained here is similar to the CRB for sparse recovery [START_REF] Ben | The cramér-rao bound for estimating a sparse parameter vector[END_REF] (corresponding to an intrinsically discrete model), that is proportional to the sparsity of the estimated vector, analogous here to the number of paths. Tr cov h( θ) . h -2 2 ≥ σ 2 2α 2 Tr D( DT P D) -1 DT . h -2 2 ≥ σ 2 2α 2 Tr D( DT D) -1 DT . h -2 2 = σ 2 2α 2 h 2 2 np = 3P SNR , (9) INTERPRETATIONS The main results of sections 3.2 and 3.3 are interpreted in this section, ultimately guiding the design of efficient estimation algorithms. Parameterization choice. The particular expression of the FIM allows to assess precisely the chosen parameterization. First of all, I(θ) has to be invertible and well-conditioned, for the model to be theoretically and practically identifiable [START_REF] Thomas | Identification in parametric models[END_REF][START_REF] Kravaris | Advances and selected recent developments in state and parameter estimation[END_REF], respectively. As a counterexample, imagine two paths indexed by p and q share the same DoD and DoA, then proportional columns appear in ∂h(θ) ∂θ , which implies non-invertibility of the FIM. However, it is possible to summarize the effect of these two paths with a single virtual path of complex gain cp +cq without any accuracy loss in channel description, yielding an invertible FIM. Similarly, two paths with very close DoD and DoA yield an ill-conditioned FIM (since the corresponding steering vectors are close to colinear), but can be merged into a single virtual path with a limited accuracy loss, improving the conditioning. Interestingly, in most channel models, paths are assumed to be grouped into clusters, in which all DoDs and DoAs are close to a principal direction [START_REF] Adel | A statistical model for indoor multipath propagation[END_REF][START_REF] Jensen | Modeling the indoor mimo wireless channel[END_REF][START_REF] Michael | A review of antennas and propagation for mimo wireless communications[END_REF]. Considering the MSE, merging close paths indeed decreases the variance term (lowering the total number of parameters), without increasing significantly the bias term (because their effects on the channel matrix are very correlated). These considerations suggest dissociating the number of paths considered in the model P from the number of physical paths, denoted P φ , taking P < P φ by merging paths. This is one motivation behind the famous virtual channel representation [START_REF] Akbar | Deconstructing multiantenna fading channels[END_REF], where the resolution at which paths are merged is fixed and given by the number of antennas. The theoretical framework of this paper suggests to set P (and thus the merging resolution) so as to minimize the MSE. 
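The following self-contained sketch illustrates the Slepian-Bangs computation and the bound of eq. (9) numerically: the Jacobian of h with respect to the 6P real parameters is obtained by finite differences for a small synthetic channel, the FIM is formed for orthogonal pilots and orthonormal combiners (the optimal-observation case), and the resulting relative CRB is compared with 3P/SNR. Array sizes, path parameters and the finite-difference step are illustrative assumptions; with fewer combiners than receive antennas (hybrid case) the computed bound would exceed 3P/SNR.

```python
import numpy as np

lam = 1.0

def upa(n_side):
    g = (np.arange(n_side) - (n_side - 1) / 2) * lam / 2
    xx, yy = np.meshgrid(g, g, indexing="ij")
    return np.vstack([xx.ravel(), yy.ravel(), np.zeros(n_side**2)])

pos_r, pos_t = upa(4), upa(8)                  # nr = 16, nt = 64
nr, nt, P = 16, 64, 3

def u3(az, el):
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def h_of(theta):
    """Vectorized channel; theta stacks (rho, phi, az_r, el_r, az_t, el_t) per path."""
    h = np.zeros(nr * nt, dtype=complex)
    for p in range(P):
        rho, phi, az_r, el_r, az_t, el_t = theta[6*p:6*(p+1)]
        e_r = np.exp(-1j * 2*np.pi/lam * pos_r.T @ u3(az_r, el_r)) / np.sqrt(nr)
        e_t = np.exp(-1j * 2*np.pi/lam * pos_t.T @ u3(az_t, el_t)) / np.sqrt(nt)
        h += rho * np.exp(1j*phi) * np.kron(e_t.conj(), e_r)   # h_p = c_p e_t^* kron e_r
    return h

theta0 = np.array([1.0, 0.0,  0.3, 0.15, -0.5, 0.10,
                   0.5, 0.7,  1.1, -0.20,  0.8, 0.25,
                   0.2, 1.6, -0.9, 0.30,  1.4, -0.15])
h0 = h_of(theta0)

# Finite-difference Jacobian dh/dtheta (nr*nt x 6P)
eps = 1e-6
J = np.column_stack([(h_of(theta0 + eps*e) - h_of(theta0 - eps*e)) / (2*eps)
                     for e in np.eye(theta0.size)])

# Optimal observations: orthogonal pilots (X^H X = alpha^2 Id), orthonormal combiners
alpha, sigma, ns, nc = 1.0, 0.1, nt, nr
X = alpha * np.eye(nt)[:, :ns]
W = np.eye(nr)[:, :nc]
M = np.kron(X.T, W.conj().T)                   # vec(W^H H X) = M h
FIM = (2 / sigma**2) * np.real(J.conj().T @ (M.conj().T @ (M @ J)))

rel_crb = np.real(np.trace(J @ np.linalg.solve(FIM, J.conj().T))) / np.linalg.norm(h0)**2
SNR = alpha**2 * np.linalg.norm(h0)**2 / sigma**2
print("relative CRB :", rel_crb)
print("3P / SNR     :", 3 * P / SNR)
```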
A theoretical study of the bias term of the MSE (which should decrease when P increases) could thus allow to calibrate models, choosing an optimal number of paths P* for estimation. Such a quest for P* is carried out empirically in section 5.
Optimal observations. The matrices X and W (pilot symbols and analog combiners) determine the quality of channel observation. Indeed, it was shown in section 3.3 that the lowest CRB is obtained when im(∂h(θ)/∂θ) ⊂ im(P), with
$$\mathbf{P} = \tfrac{1}{\alpha^2}\,(\mathbf{X}^{*}\mathbf{X}^{T}) \otimes \big(\mathbf{W}(\mathbf{W}^{H}\mathbf{W})^{-1}\mathbf{W}^{H}\big).$$
In the case of a sparse channel model, using the expressions for ∂h(θ)/∂θ derived above, this is equivalent to two distinct conditions: the steering vectors e_x(u_x,p) and their derivatives diag(−j A_x^T v_ξx,p) e_x(u_x,p), with x ∈ {r, t} and ξ ∈ {η, ψ}, must lie in the range of the corresponding observation matrix (the training matrix X on the transmit side, the combining matrix W on the receive side). These conditions are fairly intuitive: to estimate accurately parameters corresponding to a given DoD (respectively DoA), the sent pilot sequence (respectively the analog combiners) should span the corresponding steering vector and its derivatives (to "sense" small changes). To accurately estimate all the channel parameters, this should be met for each atomic channel.
Array geometry. Under optimal observation conditions, performance limits on DoD/DoA estimation are given by eq. (8). The lower the diagonal entries of B_x^{-1}, the better the bound. This implies the bound is better if the diagonal entries of B_x are large and the off-diagonal entries are small (in absolute value). Since the unit vectors v_ηx,p and v_ψx,p are by definition orthogonal, having A_x A_x^T = β² Id with maximal β² is optimal, and yields uniform performance limits for any DoD/DoA. Moreover, in this situation, β² is proportional to (1/n_x) Σ_{i=1}^{n_x} ‖a_x,i‖₂², the mean squared norm of the antenna positions with respect to the array centroid. Having a larger antenna array is thus beneficial (as expected), because the further the antennas are from the array centroid, the larger β² is.
Orthogonality of DoA and DoD. Section 3.2 shows that the matrix corresponding to intra-path couplings (eq. (7)) is block diagonal, meaning that for a given path, the parameters corresponding to gain, phase, DoD and DoA are mutually orthogonal. Maximum Likelihood (ML) estimators of orthogonal parameters are asymptotically independent [START_REF] David | Parameter orthogonality and approximate conditional inference[END_REF] (when the number of observations, or equivalently the SNR, goes to infinity). Classically, channel estimation in massive MIMO systems is done using greedy sparse recovery algorithms [START_REF] Mallat | Matching pursuits with timefrequency dictionaries[END_REF][START_REF] Tropp | Signal recovery from random measurements via orthogonal matching pursuit[END_REF][START_REF] Bajwa | Compressed channel sensing: A new approach to estimating sparse multipath channels[END_REF]. Such algorithms can be cast into ML estimation with discretized directions, in which the DoD and DoA (coefficient support) are estimated jointly first (which is costly), and then the gain and phase are deduced (coefficient value), iteratively for each path. Orthogonality between the DoD and DoA parameters is thus not exploited by classical channel estimation methods. We propose here to exploit it via a sequential decoupled DoD/DoA estimation, that can be inserted in any sparse recovery algorithm in place of the support estimation step, without loss of optimality in the ML sense. In the proposed method, one direction (DoD or DoA) is estimated first using an ML criterion considering the other direction as a nuisance parameter, and the other one is deduced using the joint ML criterion. Such a strategy is presented in Algorithm 1. [Algorithm 1 — Sequential direction estimation (DoA first): build K_t = ( X^H e_t(v_1)/‖X^H e_t(v_1)‖₂ | … | X^H e_t(v_n)/‖X^H e_t(v_n)‖₂ ); estimate the arrival direction u_î first; line 6: find the index ĵ of the maximal entry of e_r(u_î)^H Y K_t and set u_t ← v_ĵ (O(n) complexity).] It can be verified that lines 3 and 6 of the algorithm actually correspond to ML estimation of the DoA and joint ML estimation, respectively. The overall complexity of the sequential direction estimation is thus O(m+n), compared to O(mn) for the joint estimation with the same test directions. Note that a similar approach, in which the DoAs of all paths are estimated at once first, was recently proposed [START_REF] Noureddine | A two-step compressed sensing based channel estimation solution for millimeter wave mimo systems[END_REF] (without theoretical justification).
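As an illustration of the sequential strategy, the sketch below contrasts joint and sequential (DoA-first) support estimation for a single path, in the simplified setting of section 5 where X and W are identities, so that the observation is essentially Y ≈ H plus noise. The ULA steering-vector convention, the grid sizes and the noise level are assumptions made for this example, and the correlation criteria used here only approximate the exact ML criteria of Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n_r, n_t = 16, 64                      # receive / transmit antennas
m = n = 200                            # numbers of test directions (grids)

def ula(n_ant, angles):
    """ULA steering vectors (half-wavelength spacing), one column per angle."""
    k = np.arange(n_ant)[:, None]
    return np.exp(1j * np.pi * k * np.sin(angles)[None, :]) / np.sqrt(n_ant)

# One synthetic path plus noise (angles, gain and noise level are arbitrary choices).
theta_true, phi_true, gain = 0.4, -0.7, 1.0 * np.exp(1j * 0.3)
H = gain * ula(n_r, np.array([theta_true])) @ ula(n_t, np.array([phi_true])).conj().T
Y = H + 0.05 * (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t)))

grid_r, grid_t = np.linspace(-1.2, 1.2, m), np.linspace(-1.2, 1.2, n)
E_r, E_t = ula(n_r, grid_r), ula(n_t, grid_t)

# Joint estimation: scan all m*n direction pairs.
C = np.abs(E_r.conj().T @ Y @ E_t)          # m x n correlation table
i_joint, j_joint = np.unravel_index(np.argmax(C), C.shape)

# Sequential estimation (DoA first): m criteria, then n criteria.
i_seq = np.argmax(np.linalg.norm(E_r.conj().T @ Y, axis=1))   # DoD treated as nuisance
j_seq = np.argmax(np.abs(E_r[:, i_seq].conj() @ Y @ E_t))     # joint criterion, DoA fixed

print(grid_r[i_joint], grid_t[j_joint])   # joint estimate
print(grid_r[i_seq], grid_t[j_seq])       # sequential estimate (typically identical)
```

The sequential variant evaluates m receive criteria followed by n transmit criteria instead of all m·n pairs, which is the source of the O(m+n) versus O(mn) complexity gap discussed above.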
PRELIMINARY EXPERIMENT
Let us compare the proposed sequential direction estimation to the classical joint estimation. This experiment must be seen as an example illustrating the potential of the approach, and not as an extensive experimental validation.
Experimental settings. Consider synthetic channels generated using the NYUSIM channel simulator [START_REF] Mathew | 3-d millimeterwave statistical channel model for 5g wireless system design[END_REF] (setting f = 28 GHz and the distance between transmitter and receiver to d = 30 m) to obtain the DoDs, DoAs, gains and phases of each path. The channel matrix is then obtained from eq. (1), considering square Uniform Planar Arrays (UPAs) with half-wavelength separated antennas, with nt = 64 and nr = 16. Optimal observations are considered, taking both W and X as the identity. Moreover, the noise variance σ² is set so as to get an SNR of 10 dB. Finally, the two aforementioned direction estimation strategies are inserted in the Matching Pursuit (MP) algorithm [START_REF] Mallat | Matching pursuits with timefrequency dictionaries[END_REF], discretizing the directions taking m = n = 2,500, and varying the total number P of estimated paths.
Results. Table 1 shows the obtained relative MSE and estimation times (Python implementation on a laptop with an Intel(R) Core(TM) i7-3740QM CPU @ 2.70 GHz). First of all, for P = 5, 10, 20, the estimation error decreases and the estimation time increases with P, exhibiting a trade-off between accuracy and time. However, increasing P beyond a certain point seems useless, since the error re-increases, as shown by the MSE for P = 40, echoing the trade-off evoked in section 3.3, and indicating that P* certainly lies between 20 and 40 for both methods in this setting. Finally, for any value of P, while the relative errors of the sequential and joint estimation methods are very similar, the estimation time is much lower (between ten and twenty times) for sequential estimation. This observation validates experimentally the theoretical claims made in the previous section.
CONCLUSIONS AND PERSPECTIVES
In this paper, the performance limits of massive MIMO channel estimation were studied. To this end, training-based estimation with a physical channel model and a hybrid architecture was considered. The Fisher Information Matrix and the Cramér-Rao bound were derived, yielding several results.
The CRB ended up being proportional to the number of parameters in the model and independent of the precise model structure. The FIM allowed to draw several conclusions regarding the observation matrices and the array geometries. Moreover, it suggested a computationally efficient algorithm which is asymptotically as accurate as classical ones. This paper is obviously only a first step toward a deep theoretical understanding of massive MIMO channel estimation. Apart from more extensive experimental evaluations and optimized algorithms, a theoretical study of the bias term of the MSE would be needed to calibrate models, and the interpretations of section 4 could be leveraged to guide system design.
Acknowledgments. The authors wish to thank Matthieu Crussière for the fruitful discussions that greatly helped improving this work. This work has been performed in the framework of the Horizon 2020 project ONE5G (ICT-760809) receiving funds from the European Union. The authors would like to acknowledge the contributions of their colleagues in the project, although the views expressed in this contribution are those of the authors and do not necessarily represent the project.
30,130
[ "4463" ]
[ "471268", "471268" ]
01716111
en
[ "info" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01716111v2/file/Stability%20and%20Security%20of%20the%20Tangle.pdf
Quentin Bramas email: bramas@unistra.fr The Stability and the Security of the Tangle In this paper we study the stability and the security of the distributed data structure at the base of the IOTA protocol, called the Tangle. The contribution of this paper is twofold. First, we present a simple model to analyze the Tangle and give the first discrete time formal analyzes of the average number of unconfirmed transactions and the average confirmation time of a transaction. Then, we define the notion of assiduous honest majority that captures the fact that the honest nodes have more hashing power than the adversarial nodes and that all this hashing power is constantly used to create transactions. This notion is important because we prove that it is a necessary assumption to protect the Tangle against double-spending attacks, and this is true for any tip selection algorithm (which is a fundamental building blocks of the protocol) that verifies some reasonable assumptions. In particular, the same is true with the Markov Chain Monte Carlo selection tip algorithm currently used in the IOTA protocol. Our work shows that either all the honest nodes must constantly use all their hashing power to validate the main chain (similarly to the bitcoin protocol) or some kind of authority must be provided to avoid this kind of attack (like in the current version of the IOTA where a coordinator is used). The work presented here constitute a theoretical analysis and cannot be used to attack the current IOTA implementation. The goal of this paper is to present a formalization of the protocol and, as a starting point, to prove that some assumptions are necessary in order to defend the system again double-spending attacks. We hope that it will be used to improve the current protocol with a more formal approach. Introduction Since the day Satoshi Nakamoto presented the Bitcoin protocol in 2008 [START_REF] Nakamoto | Bitcoin: A peer-to-peer electronic cash system[END_REF], the interest in Blockchain technology has grown continuously. More generally, the interest concerns Distributed Ledger Technology, which refers to a distributed data storage protocol. Usually it involves a number of nodes (or processes, or agents) in a network that are known to each other or not. Those nodes may not trust each-other so the protocol should ensure that they reach a consensus on the order of the operations they perform, in addition to other mechanism like data replication for instance. The consensus problem has been studied for a long time [START_REF] Pease | Reaching agreement in the presence of faults[END_REF][START_REF] Fischer | Impossibility of distributed consensus with one faulty process[END_REF] providing a number of fundamental results. But the solvability of the problem was given in term of proportion of faulty agents over honest agents and in a trustless network, where anyone can participate, an adversary can simulate an arbitrary number of nodes in the network. To avoid that, proof systems like Proof of Work (PoW) or Proof of Stake (PoS) are used to link the importance of an entity with some external properties (processing power in PoW) or internal properties (the number of owned tokens 1 in PoS) instead of simply the number of nodes it controls. The solvability of the consensus is now possible only if the importance of the adversary (given in terms of hashing power or in stake) is smaller than the honest one (the proportion is reduced to 1/3 if the network is asynchronous). 
In Bitcoin and in the other blockchain technologies, transactions are stored in a chain of blocks, and the PoW or PoS is used to elect one node that is responsible for writing data in the next block. The "random" selection and the incentive a node may have to execute honestly the protocol make the whole system secure, as it was shown by several formal analysis [START_REF] Nakamoto | Bitcoin: A peer-to-peer electronic cash system[END_REF][START_REF] Garay | The bitcoin backbone protocol: Analysis and applications[END_REF]. Usually, there are properties that hold with high probability i.e., with a probability that tends to one quickly as the time increases. For instance, the order between two transactions do not change with probability that tends to 1 exponentially fast over the time in the Bitcoin protocol, if the nodes executing honestly (or rationally) the protocol have more than a third of the hashing total power. In this paper we study another distributed ledger protocol called the Tangle, presented by Serguei Popov [START_REF] Popov | The tangle. white paper[END_REF] that is used in the IOTA cryptocurrency to store transactions. The Tangle is a Directed Acyclic Graph (DAG) where a vertex, representing a transaction, has two parents, representing the transactions it confirms. According to the protocol a PoW has to be done when adding a transaction to the Tangle. This PoW prevents an adversary from spamming the network. However, it is not clear in the definition of the Tangle how this PoW impacts its security. When a new transaction is appended to the Tangle, it references two previous unconfirmed transactions, called tips. The algorithm selecting the two tips is called a Tip Selection Algorithm (TSA). It is a fundamental parts of the protocol as it is used by the participants to decide, among two conflicting transactions, which one is the valid. It is the most important part in order for the participants to reach a consensus. The TSA currently used in the IOTA implementation uses the PoW contained in each transaction to select the two tips. Related Work Very few academic papers exist on this protocol, and there is no previous work that formally analyzes its security. The white paper behind the Tangle [START_REF] Popov | The tangle. white paper[END_REF] presents a quick analysis of the average number of transactions in the continuous time setting. This analysis is done after assuming that the evolution converge toward a stationary distribution. The same paper presents a TSA using Monte Carlo Markov Chain (MCMC) random walks in the DAG from old transactions toward new ones, to select two unconfirmed transactions. The random walk is weighted to favor transactions that are confirmed by more transactions. There is no analysis on how the assigned weight, based on the PoW of each transaction, affects the security of the protocol. This MCMC TSA is currently used by the IOTA cryptocurrency. It is shown in [START_REF] Popov | Equilibria in the tangle[END_REF] that using the default TSA correspond to a Nash equilibrium. Participants are incite to use the MCMC TSA, because using another TSA (e.g. a lazy one that confirms only already confirmed transactions) may increase the chances of seeing their transactions unconfirmed. Finally, the tangle has also been analyzed by simulation [START_REF] Kuśmierz | The first glance at the simulation of the tangle: discrete model[END_REF] using a discrete time model, where transactions are issued every round following a Poisson distribution of parameter λ. 
Like in the continuous time model, the average number of unconfirmed transactions (called tips) seems to grow linearly with the value of λ, but a little bit slower (≈ 1.26λ compared to 2λ in the continuous time setting) Contributions The contribution of our paper is twofold. First, we analyze formally the number of tips in the discrete time setting, depending on the value of λ by seeing it as a Markov chain where at each round, their is a given probability to obtain a given number of tips. Unlike previous work, we here prove the convergence of the system toward a stationary distribution. This allows use to prove the previous results found by simulations [START_REF] Kuśmierz | The first glance at the simulation of the tangle: discrete model[END_REF] that the average number of tips is stationary and converge towards a fixed value. Second, we prove that if the TSA depends only on the PoW, then the weight of the honest transactions should exceed the hashing power of the adversary to prevent a double-spending attack. This means that honest nodes should constantly use their hashing power and issue new transactions, otherwise an adversary can attack the protocol even with a small fraction of the total hashing power. Our results is interesting because it is true for any tip selection algorithm i.e., the protocol cannot be more secure by simply using a more complex TSA. The remaining of the paper is organized as follow. Section 2 presents our model and the Tangle. In Section 3 we analyze the average confirmation time and the average number of unconfirmed transactions. In Section 4 we prove our main theorem by presenting a simple double-spending attack. Model The Network We consider a set N of processes, called nodes, that are fully connected. Each node can send a message to all the other nodes (the network topology is a complete graph). We will discuss later the case where the topology is sparse. We assume nodes are activated synchronously. The time is discrete and at each time instant, called round, a node reads the messages sent by the other nodes in the previous round, executes the protocol and, if needed, broadcast a message to all the other nodes. When a node broadcast a message, all the other nodes receive it in the next round. The size of messages are in O(B), where B is an upper bound on the size of the messages when all the nodes follow the protocol honestly i.e., the size of the messages is sufficiently high not to impact the protocol when it is executed honestly. Those assumptions are deliberately strong as they make an attack more difficult to perform. The DAG In this paper, we consider a particular kind of distributed ledger called the Tangle, which is a Direct Acyclic Graph (DAG). Each node u stores at a given round r a local DAG G u r (or simply G r or G if the node or the round are clear from the context), where each vertex, called site, represents a transaction. Each site has two parents (possibly the same) in the DAG. We say a site directly confirm its two parents. All sites that are confirmed by the parents of a site are also said to be confirmed (or indirectly confirmed) by it i.e., there is a path from a site to all the sites it confirms in the DAG (see Figure 1). A site that is not yet confirmed is called a tip. There is a unique site called genesis that does not have parents and is confirmed by all the other sites. For simplicity we model a DAG simply by a set of sites G = (s i ) i∈I Two sites may be conflicting. 
This definition is application-dependent, so we assume that there exists a function areConflicting(a, b) that answers whether two sites are conflicting or not. If the Tangle is used to store the balance of a given currency (like the IOTA cryptocurrency), then a site represents a transaction moving funds from a sender address to a receiver address, and two sites are conflicting if they try to move the same funds to two different receivers i.e., if executing both transactions results in a negative balance for the sender. The details of this example are outside the scope of this paper, but we may use this terminology in the remainder of the paper. In this case, signing a transaction means performing the PoW to create the site that will be included in the Tangle, and broadcasting a transaction means sending it to the other nodes so that they can include it in their local Tangle. At each round, each node may sign one or more transactions. For each transaction, the node selects two parents. The signed transaction becomes a site in the DAG. Then, the node broadcasts the site to all the other nodes. DAG extension Definition 1. Let G be a DAG and A a set of sites. If each site of A has its parents in A or in the tips of G, then we say that A is an extension of G and G ∪ A denotes the DAG composed by the union of sites from G and A. We also say that A extends G. One can observe that if A extends G, then the tips of G form a cut of G ∪ A. Definition 2. Let A be a set of sites extending a DAG G. We say A completely extends G (or A is a complete extension of G) if all the tips of G ∪ A are in A. In other words, the sites of A confirm all the tips of G. The local DAG of a node may contain conflicting sites. If so, the DAG is said to be forked (or conflicting). A conflict-free sub-DAG is a sub-DAG that contains no conflicting sites. Weight and Hashing Power When a transaction is signed and becomes a site in the DAG, a small proof of work (PoW) is done. The difficulty of this PoW is called the weight of the site. Initially, this PoW has been added to the protocol to prevent a node from spamming a huge number of transactions. In order to issue a site of weight w, a processing power (or hashing power) proportional to w needs to be consumed. With the PoW, spamming requires a large amount of processing power, which increases its cost and reduces its utility. It was shown [START_REF] Popov | The tangle. white paper[END_REF] that sites should have bounded weights and, for simplicity, one can assume that the weight of each site is 1. Then, this notion is also used to compute the cumulative weight of a site, which is the amount of work that has been done to deploy this site and all sites that confirm it. Similarly, the score of a site is the sum of the weights of all sites confirmed by it i.e., the amount of work that has been done to generate the sub-DAG confirmed by it, see Figure 1 for an illustration. Tip Selection Algorithm When signing a transaction s, a node u has to select two parents i.e., two previous sites in its own version of the DAG. According to the protocol, this is done by executing an algorithm called the tip selection algorithm (TSA). The protocol says that the choice of the parents must be done among the sites that have not been confirmed yet i.e., among tips.
Also, the two selected parents must not confirm, either directly or indirectly, conflicting sites. We denote by T the TSA, which can depend on the implementation of the protocol. For simplicity, we assume all the nodes use the same algorithm T . As pointed by previous work [START_REF] Popov | The tangle. white paper[END_REF], the TSA is a fundamental factor of the security and the stability of the Tangle. For our analysis, we assume T depends only on the topology of the DAG, on the weight of each site in the DAG and on a random source. It is said to be statefull if it also depends on previous output of the TSA by this node, otherwise we say it is stateless. The output of T depends on the current version of the DAG and on a random source (that is assumed distinct for two different nodes). The random source is used to prevent different nodes that has the same view from selecting the same parents when adding a site to the DAG at the same time. However, this is not deterministic and it is possible that two distinct nodes issue two sites with the same parents. Local Main DAG The local DAG of a node u may contain conflicting sites. For consistency, a node u can keep track of a conflict-free sub-DAG main u (G) that it considers to be its local main DAG. If there are two conflicting sites a and ā in the DAG G, the local main DAG contains at most one of them. The main DAG of a node is used as a reference for its own view of the world, for instance to calculate the balance associated with each address. Of course this view may change over the time. When new transactions are issued, a node can change its main DAG, updating its view accordingly (exactly like in the bitcoin protocol, when a fork is resolved due to new blocks being mined). When changing its main DAG, a local node may discard a sub-DAG in favor of another sub-DAG. In this case, several sites may be discarded. This is something we want to avoid or at least ensure that the probability for a site to be discarded tends quickly to zero with time. The tip selection algorithm decides what are the tips to confirm when adding a new site. Implicitly, this means that the TSA decides what sub-DAG is the main DAG. In more detail, the main DAG of a node at round r is the sub-DAG confirmed by the two sites output by the TSA. Thus, a node can run the TSA just to know what is its main DAG and even if no site has to be included to the DAG. One can observe that, to reach consensus, the TSA should ensure that the main DAG of all the nodes contain a common prefix of increasing size that represents the transactions everyone agree on. Adversary Model Among the nodes, some are honest i.e., they follow the protocol, and some are byzantine and behave arbitrarily. For simplicity, we can assume that only one node is byzantine and we call this node the adversary. The adversary is connected to the network and receive all the transactions like any other honest node. He can behave according to the protocol but he can also create (and sign) transactions without broadcasting them, called hidden transaction (or hidden sites). To make the results stronger, we can assume that the adversary cannot sign a message using another node's identity. Here, even if we use the term "signing", sites may not have identity attached to them so it is actually not relevant to decide whether or not adversary can steal the identity of honest nodes. When an honest node issues a new site, the two sites output by T must be two tips, at least in the local DAG of the node. 
Thus, one parent p 1 cannot confirm indirectly the other p 2 , because in this case the node is aware of p 2 having a child and is not a tip. Also, a node cannot select the same site as parent for two different site, thus the number of honest children cannot exceed the number of nodes in the network. This implies the following property. Property 1. In a DAG constructed by n honest nodes using a TSA, a site cannot have one parent that confirms the other one. Moreover, the number of children of each site is bounded by n. The first property should be preserved by an adversary as it is easy for the honest nodes to check and discard a site that does not verify it. However the adversary can issue multiple sites that directly confirm the same site and the honest nodes have no way to know which sites are honest. Assiduous Honest Majority Assumption The cumulative weight and the score can be used by a node to select its main DAG. However, even if it is true that a heavy sub-DAG is harder to generate than a light one, there is no relation yet in the protocol between the weight of sites and the hashing power capacity of honest nodes. We define the assiduous honest majority assumption as the fact that the hashing power of honest nodes is constantly used to generate sites and that it is strictly greater than the hashing power of the adversary. In fact, without this assumption, it is not relevant to look at the hashing power of the honest nodes if they do not constantly use it to generates new sites. Thus, under this assumption, the cumulative weight of the honest DAG grows according to the hashing power of the honest nodes, and the probability that an adversary generates more sites than the honest nodes in a given period of time tends to 0 as the duration of the period tends to infinity. Conversely, without this assumption, an adversary may be able to generates more sites than the honest nodes, even with less available hashing power. Average Number of Tips and Confirmation Time In this section we study the average number of tips depending on the rate of arrival of new sites. In this section, like in previous analysis [START_REF] Popov | The tangle. white paper[END_REF], we assume that tip selection algorithm is the simple uniform random tip selection that select two tips uniformly at random. We denote by N (t) the number of tips at time t and λ(t) the number of sites issued at time t. Like previously, we assume λ(t) follows a Poisson distribution of parameter λ. Each new site confirms two tips and we denote by C(t) the number of sites confirmed at time t, among those confirmed sites, C tips (t) represents the number of tips i.e., the number of sites that are confirmed for the first time at time t. Due to the latency, if h > 1, a site can be selected again by the TSA. We have: N (t) = N (t -1) + λ(t) -C tips (t) We say we are in state N ≤ 1 if there are N tips at time t. Then, the number of tips at each round is a Markov chain (N (t)) t≥0 with an infinite number of states [1, ∞). To find the probability of transition between two states (given in Lemma 2) we first calculate the probability of transition when the number of new site is known. Proof. If k new sites are issued, then there are up to 2k sites that are confirmed. This can be seen as a "balls into bins" problem [START_REF] Mitzenmacher | Probability and computing: Randomized algorithms and probabilistic analysis[END_REF] with 2k balls thrown into N bins, and the goal is to see how many bins are not empty i.e. how many unique sites are confirmed. 
First, there are N^{2k} possible outcomes for this experiment, so the probability of a particular configuration is 1/N^{2k}. The number of ways we can obtain exactly C = N − N' + k non-empty bins, or confirmed tips (so that there are exactly N' tips afterward), is the number of ways we can partition a set of 2k elements into C parts, times the number of ways we can select C bins to receive those C parts:
$$P^{k}_{N\to N'} = \frac{1}{N^{2k}}\left\{ {2k \atop N-N'+k} \right\}\frac{N!}{(N'-k)!}.$$
The first number is called the Stirling number of the second kind and is denoted by {2k over N−N'+k}. The second number is N!/(N'−k)!.
Then, the probability of transition is a direct consequence of the previous lemma.
Lemma 2. The probability of transition from N to N' is
$$P_{N\to N'} = \sum_{k=|N-N'|}^{N'} \mathbb{P}(\Lambda = k)\, P^{k}_{N\to N'} = \sum_{k=|N-N'|}^{N'} \frac{N!\,\lambda^{k} e^{-\lambda}}{N^{2k}\,(N'-k)!\,k!}\left\{ {2k \atop N-N'+k} \right\}.$$
Proof. We just have to observe that the probability of transition from N to N' is null if the number of new sites is smaller than N − N' (because each new site can decrease the number of tips by at most one), smaller than N' − N (because each site can increase the number of tips by at most one), or greater than N' (because each new site is a tip).
Lemma 3. The Markov chain (N(t))_{t≥0} has a positive stationary distribution π.
Proof. First, it is clear that (N(t))_{t≥0} is aperiodic and irreducible because for any state N > 0, resp. N > 1, there is a non-null probability to move to state N + 1, resp. to state N − 1. Since it is irreducible, we only have to find one state that is positive recurrent (i.e., such that the expectation of the hitting time is finite) to prove that there is a unique positive stationary distribution. For that, we can observe that the probability of transitioning from state N to a state N' > N tends to 0 when N tends to infinity. Indeed, for a fixed k, we even have that the probability to decrease the number of tips by k tends to 1:
$$P^{k}_{N\to N-k} = \frac{N!}{N^{2k}\,(N-2k)!} = \Big(1-\frac{1}{N}\Big)\Big(1-\frac{2}{N}\Big)\dots\Big(1-\frac{2k-1}{N}\Big), \quad (1)$$
$$\lim_{N\to\infty} P^{k}_{N\to N-k} = 1. \quad (2)$$
So that for any ε > 0 there exists a k_ε such that P(Λ ≥ k_ε) < ε/2 and, from (2), an N_ε such that ∀k < k_ε, 1 − P^{k}_{N→N−k} < ε/(2k_ε), so we obtain, for every N ≥ N_ε:
$$A_{N} \;\triangleq\; \sum_{N' > N} P_{N\to N'} \;=\; \mathbb{P}\big(N(i+1) > N(i) \mid N(i) = N\big) \quad (3)$$
$$\;<\; \mathbb{P}(\Lambda \geq k_\varepsilon) \;+\; \sum_{k < k_\varepsilon} \big(1 - P^{k}_{N\to N-k}\big) \quad (4)$$
$$\;<\; \varepsilon.$$
So the probability A_N to "move away" from the states [1, N] tends to 0 when N tends to infinity. In fact, it is sufficient to observe that there is a number N_{1/2} such that the probability to "move away" from the states [1, N_{1/2}] is strictly smaller than 1/2. Indeed, this is a sufficient condition to have a state in [1, N_{1/2}] that is positive recurrent (one can see this by looking at a simple random walk in one dimension with a mirror at 0 and a probability p < 1/2 to move away from 0 by one and (1 − p) to move closer to 0 by 1). From the irreducibility of (N(t))_{t≥0}, all the states are positive recurrent and the Markov chain admits a unique stationary distribution π.
The stationary distribution π verifies the formula π_{N'} = Σ_{i≥1} π_i P_{i→N'}, which we can use to approximate it with arbitrary precision. When the stationary distribution is known, the average number of tips can be calculated as N_avg = Σ_{i>0} i π_i, and with it the average confirmation time Conf of a tip is simply given by the fact that, at each round, a proportion λ/N_avg of the tips is confirmed on average. So Conf = N_avg/λ rounds are expected before a given tip is confirmed. The value of Conf depending on λ is shown in Figure 3. With this, we show that Conf converges toward a constant when λ tends to infinity.
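The fixed-point relation above can be turned directly into a numerical procedure. The sketch below (an illustration written for this text, not code from the paper) truncates the state space at an assumed N_max, builds the transition matrix of Lemma 2 with a small Stirling-number recurrence, and iterates π ← πP to approximate the stationary distribution, N_avg and Conf for a chosen λ.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def transition(N, N2, lam, k_max=60):
    """P_{N -> N'} from Lemma 2 (N' written as N2)."""
    total = 0.0
    for k in range(abs(N - N2), min(N2, k_max) + 1):
        c = N - N2 + k                       # number of newly confirmed tips
        if c > N:
            continue
        ways = stirling2(2 * k, c) * math.factorial(N) // math.factorial(N - c)
        p_conf = ways / N ** (2 * k)         # P^k_{N -> N'}
        total += math.exp(-lam) * lam ** k / math.factorial(k) * p_conf
    return total

def stationary(lam, n_max=80, iters=500):
    P = [[transition(N, N2, lam) for N2 in range(1, n_max + 1)]
         for N in range(1, n_max + 1)]
    pi = [1.0 / n_max] * n_max
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n_max)) for j in range(n_max)]
        s = sum(pi)
        pi = [x / s for x in pi]             # renormalize (truncation loses some mass)
    return pi

lam = 10
pi = stationary(lam)
n_avg = sum((i + 1) * p for i, p in enumerate(pi))
print(n_avg, n_avg / lam)                    # average number of tips, confirmation time
```

If the reconstruction of Lemma 2 above is correct, the ratio N_avg/λ computed this way should approach the constant discussed next.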
In fact, for a large λ, the average confirmation time is approximately 1.26, equivalently, the average number of tips N avg is 1.26λ. For smaller values of λ, intuitively the time before first confirmation diverges to infinity and N avg converges to 1. A Necessary Condition for the Security of the Tangle A simple attack in any distributed ledger technology is the double spending attack. The adversary signs and broadcast a transaction to transfer some funds to a seller to buy a good or a service, and when the seller gives the good (he consider that the transaction is finalized), the adversary broadcast a transaction that conflicts the first one and broadcast other new transactions in order to discard the first transaction. When the first transaction is discarded, the seller is not paid anymore and the funds can be reused by the adversary. The original motivation behind our attack of the Tangle is as follow: after the initial transaction to the seller, the adversary generates the same sites as the honest nodes, forming a sub-DAG with the same topology as the honest sub-DAG (but including the conflicting transaction). Having no way to tell the difference between the honest sub-DAG and the adversarial sub-DAG, the latter will be selected by the honest nodes at some point. This approach may not work with latency in the network, because the sub-DAG of the adversary is always shorter than the honest sub-DAG, which is potentially detected by the honest nodes. To counter this, the adversary can generate a sub-DAG that has not exactly the same topology, but that has the best possible topology for the tip selection algorithm. The adversary can then use all its available hashing power to generate this conflicting sub-DAG that will at some point be selected by the honest nodes. For this attack we use the fact that a TSA selects two tips that are likely to be selected by the same algorithm thereafter. For simplicity we captured this with a stronger property: the existence of a maximal deterministic TSA. Definition 3 (maximal deterministic tip selection algorithm). A given TSA T has a maximal deterministic TSA T det if T det is a deterministic TSA and for any DAG G, there exists N G ∈ N such that for all n ∈ N the following property holds: Let A det be the extension of G obtained with N G + n executions of T det . Let A be an arbitrary extension of G generated with T of size at most n, conflicting with A det , and let G = G∪A∪A det . We have: P(T (G ) ∈ A det ) ≥ 1/2 Intuitively this means that executing the maximal deterministic TSA generates an extension this is more likely to be selected by the honest nodes, provided that it contains more sites than the other extensions. When the assiduous honest majority assumption is not verified, the adversary can use this maximal deterministic TSA at his advantage. Theorem 1. Without the assiduous honest majority assumption, and if the TSA has a maximal deterministic tip selection, the adversary can discard one of its transaction that has an arbitrary cumulative weight. Proof. Without the assiduous honest majority assumption, we assume that the adversary can generate strictly more sites than the honest nodes. Let W be an arbitrary weight. One can see W has the necessary cumulative weigh a given site should have in order to be considered final. Let G 0 be the common local main DAG of all node at a given round r 0 . At this round our adversary can generate two conflicting sites confirming the same pair of parents. 
One site a is sent to the honest nodes and the other ā is kept hidden. The adversary can use T_det, the maximal deterministic TSA of T, to generate continuously (using all its hashing power) sites extending G ∪ {ā}. While doing so, the honest nodes extend G ∪ {a} using the standard TSA T. After r_W rounds, it can broadcast all the generated sites to the honest nodes. The adversary can choose r_W so that (i) the probability that it has generated N_G more sites than the honest nodes is sufficiently high, and (ii) transaction a has the target cumulative weight W. After receiving the adversarial extension, by Definition 3, honest nodes will extend the adversarial sub-DAG with probability greater than 1/2. In expectation, half of the honest nodes start to consider the adversarial sub-DAG as their main DAG, thus the honest nodes will naturally converge until they all choose the adversarial sub-DAG as their main DAG, which discards the transaction a. If the bandwidth of each channel is limited, then the adversary can start broadcasting the sites of its conflicting sub-DAG at round r_W, at a rate two times greater than the honest nodes. This avoids congestion, and at round r_W + r_W/2 the whole adversarial sub-DAG is successfully received by the honest nodes. Due to this additional latency, the number of sites in the honest sub-DAG might still be greater than the number of sites in the adversarial sub-DAG, so the adversary continues to generate and to broadcast sites extending its conflicting sub-DAG and, at round at most 2r_W, the adversarial extension of G received by the honest nodes has a higher number of sites than the honest extension. So the same property is true while avoiding the congestion problem. Now that we have our main theorem, we show that any TSA defined in previous work (especially in the Tangle white paper [START_REF] Popov | The tangle. white paper[END_REF]) has a corresponding maximal deterministic TSA. To do so we can see that, to increase the probability for the adversarial sub-DAG to be selected, the extension of a DAG G obtained by the maximal deterministic TSA should either increase the weight or the number of direct children of the tips of G, as shown in Figure 4. We now prove that the three TSAs presented in the Tangle white paper [START_REF] Popov | The tangle. white paper[END_REF], (i) the random tip selection, (ii) the MCMC algorithm and (iii) the Logarithmic MCMC algorithm, all have a maximal deterministic TSA, which implies that the assiduous honest majority assumption is necessary when using them (recall that we do not study the sufficiency of this assumption). The Uniform Random Tip Selection Algorithm The uniform random tip selection algorithm is the simplest to implement and the easiest to attack. Since it chooses the two tips uniformly at random, an attacker just has to generate more tips than the honest nodes in order to increase the probability to have one of its tips selected. Lemma 4. The Random TSA has a maximal deterministic TSA. Proof. For a given DAG G the maximal deterministic T_det always chooses as parents one of the l tips of G.
So that, after n + l newly added sites A_det, the tips of G ∪ A_det are exactly A_det, and no other extension of G of size n can produce more than n + l tips, so that the probability that the random TSA selects a tip from A_det is at least 1/2.
Corollary 1. Without the assiduous honest majority assumption, the Tangle with the Random TSA is susceptible to a double-spending attack.
The MCMC Algorithm
The MCMC algorithm is more complex than the random TSA. It starts by initially putting a fixed number of walkers on the local DAG. Each walker performs a random walk towards the tips of the DAG with a probabilistic transition function that depends on the cumulative weight of the site it is located at and of its children. In more detail, a walker at a site v has a probability p_{v,u} to move to a child u with
$$p_{v,u} = \frac{\exp\big(-\alpha\,(w(v) - w(u))\big)}{\sum_{c\in C_v} \exp\big(-\alpha\,(w(v) - w(c))\big)}, \quad (6)$$
where the set C_v is the children of v, and α > 0 is a parameter of the algorithm.
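As a small illustration of eq. (6) (and of the logarithmic variant discussed further below), the following sketch computes the walker transition probabilities at a site given the cumulative weights of its children; the example weights are made up for illustration.

```python
import math

def transition_probabilities(w_v, child_weights, alpha=0.5, kernel="exp"):
    """Probabilities that an MCMC walker located at a site of cumulative weight w_v
    moves to each of its children, following eq. (6) ('exp') or the logarithmic
    variant ('pow') used later in the paper."""
    if kernel == "exp":
        scores = [math.exp(-alpha * (w_v - w_u)) for w_u in child_weights]
    else:  # 'pow': p proportional to (w(v) - w(u))^(-alpha)
        scores = [(w_v - w_u) ** (-alpha) for w_u in child_weights]
    total = sum(scores)
    return [s / total for s in scores]

# A site of cumulative weight 40 with three children: two well-confirmed ones
# and one 'lazy' child that confirms little work (weights are illustrative).
print(transition_probabilities(40, [35, 33, 5], alpha=0.5, kernel="exp"))
print(transition_probabilities(40, [35, 33, 5], alpha=3.0, kernel="pow"))
```

The heavier a child (i.e. the more work confirms it), the likelier the walker is to move to it, which is exactly what the adversary's maximal deterministic extension tries to exploit in the lemmas that follow.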
The question to answer in order to find the maximal deterministic TSA of the MCMC algorithm is: what is the best way to extend a site v to maximize the probability that the MCMC walker chooses our sites instead of other sites? The following lemma shows that the number of children is an important factor. This number depends on the value of α.
Lemma 5. Suppose an MCMC walker is at a site v. There exists a constant C_α such that, if v has C_α children of weight n, then, when extending v with an arbitrary set of sites H of size n, the probability that the walker moves to H is at most 1/2.
Proof. When extending v with n sites, one can choose the number h of direct children, and then how the other sites extend those children. There are several ways to extend those children, which changes their weights w_1, w_2, …, w_h. Writing W = w(v), the probability p_H for an MCMC walker to move to H is calculated in the following way:
$$S_H = \sum_{i=1}^{h} \exp(-\alpha(W - w_i)), \qquad S_{\bar H} = C_\alpha \exp(-\alpha(W - n)), \qquad S = S_H + S_{\bar H}, \qquad p_H = S_H/S.$$
The greater the weights, the greater the probability p_H. Adding more children might reduce their weights (since H contains only n sites). For a given number of children h, there are several ways to extend those children, but we can arrange them so that each weight is at least n − h + 1, by forming a chain of length n − h and by connecting the children to the chain with a perfect binary tree. The height l_i of a child i gives it more weight, so that we have w_i = n − h + l_i. A property of a perfect binary tree is that Σ_{i=1}^{h} 2^{−l_i} = 1. We will show there is a constant C_α such that, for any h and any l_1, …, l_h with Σ_{i=1}^{h} 2^{−l_i} = 1, we have S_{\bar H} ≥ S_H, i.e.
$$C_\alpha \exp(-\alpha(W - n)) \;\geq\; \sum_{i=1}^{h} \exp(-\alpha(W - w_i)) \;\Longleftrightarrow\; C_\alpha \;\geq\; \sum_{i=1}^{h} \exp(-\alpha(h - l_i)). \quad (7)$$
Surprisingly, one can observe that this inequality does not depend on n, so that the same holds however we arrange the sites extending a site v in order to increase the probability for the walker to select our sites. Let f_h : (l_1, …, l_h) → e^{−αh} Σ_{i=1}^{h} exp(α l_i). So the goal is to find an upper bound for the function f_h that depends only on α. The function f_h is convex (as a sum of convex functions), so the maximum is on the boundary of the domain, which is either (l_1, …, l_h) = (1, 2, …, h) or (l_1, …, l_h) = (⌈log(h)⌉, …, ⌈log(h)⌉, ⌊log(h)⌋, …, ⌊log(h)⌋). For simplicity, let us assume that h is a power of two so that the second case is just ∀i, l_i = log(h). In the first case we have
$$f_h(1, \dots, h) = e^{-\alpha h}\sum_{i=1}^{h} e^{\alpha i} = \frac{e^{\alpha} - e^{-\alpha(h-1)}}{e^{\alpha} - 1},$$
which remains bounded when h tends to infinity, so it admits a maximum C¹_α. In the second case, we have
$$f_h(\log(h), \dots, \log(h)) = e^{-\alpha h}\, h\, e^{\alpha \log(h)},$$
which tends to 0 when h tends to infinity, so it admits a maximum C²_α. By choosing C_α = max(C¹_α, C²_α) we have the inequality (7) for any value of h.
Lemma 6. The MCMC tip selection has a maximal deterministic TSA.
Proof. Let G be a conflict-free DAG with tips G_tips. Let T be the number of tips times the number C_α defined in Lemma 5. The T first executions of T_det select a site from G_tips (as both parents) until all sites from G_tips have exactly C_α children. The next executions of T_det select two arbitrary tips (different if possible). After T such executions, only one tip remains and the newly added sites form a chain. Let N_G = 2T. N_G is a constant that depends only on α and on G. After N_G + n added sites, each site in G_tips has C_α children with weight at least n. Thus, by Lemma 5, an MCMC walker located at a site v ∈ G_tips moves to our extension with probability at least 1/2. Since this is true for all sites in G_tips and G_tips is a cut, all MCMC walkers will end up in A_det with probability at least 1/2.
One can argue that this is not optimal and we could have improved the construction of the extension to reduce the number of sites, but we are mostly interested here in the existence of such a construction. Indeed, in practice, the probability for a walker to move to our extension would be higher, as the honest sub-DAG A is not arbitrary but generated with the TSA. Our analysis shows that even in the worst configuration, the adversary can still generate an extension with a good probability of being selected.
Corollary 2. Without the assiduous honest majority assumption, the Tangle with the MCMC TSA is susceptible to a double-spending attack.
The Logarithmic MCMC Algorithm
In the Tangle white paper, it is suggested that the MCMC probabilistic transition function can be defined with the function h → h^{−α} = exp(−α ln(h)). In more detail, a walker at a site v has a probability p_{v,u} to move to a child u with
$$p_{v,u} = \frac{(w(v) - w(u))^{-\alpha}}{\sum_{c\in C_v} (w(v) - w(c))^{-\alpha}}, \quad (8)$$
where the set C_v is the children of v, and α > 0 is a parameter of the algorithm. The IOTA implementation currently uses this function with α = 3. With this transition function, the number of children is more important than their weight.
Lemma 7. The logarithmic MCMC tip selection has a maximal deterministic TSA.
Proof. Let G be a conflict-free DAG with tips G_tips. Let T be the number of tips. T_det always selects two sites from G_tips in a round-robin manner. After kT executions (k ∈ N), each site from G_tips has 2k children. Let n be the number of sites generated with T_det and A an arbitrary extension of G. Let v ∈ G_tips, let C_v be the number of children of v that are in A, and let C_det = 2n/T be the number of children of v generated by T_det. With w(v) the weight of v, we have that w(v) ≤ 2n and C_det ≤ w(v) − n ≤ n. Let p be the probability that a walker located at v chooses a site generated by T_det. We have
$$\frac{p}{1-p} \;\geq\; \frac{C_{det}\,(w(v) - 1)^{-\alpha}}{C_v\,(w(v) - n)^{-\alpha}} \;\geq\; \frac{C_{det}\,(2n)^{-\alpha}}{C_v\,(C_{det})^{-\alpha}} \;=\; \frac{C_{det}}{C_v\, T^{\alpha}} \;=\; \frac{2n}{T^{1+\alpha}\, C_v}.$$
With T a constant and C_v bounded, we have that this quantity tends to infinity as n tends to infinity.
This is true for each site of G tips , so after a given number of generated site N G , the probability that a LMCMC walker located at any site of G tips moves to a site generated by by T det is greater than 1/2. Corollary 3. Without the assiduous honest majority assumption, the Tangle with the Logarithmic MCMC TSA is susceptible to double-spending attack. Discussion Sparse Network The case of sparse networks intuitively gives more power to the adversary as it is more difficult for the honest nodes to coordinate. Let assume that the communication graph is arbitrary. It can be a geometric graph (eg. a grid), a small-world graph with a near constant diameter, or anything else. In order for the protocol to work properly, we assume that for each link between two nodes, there is enough bandwidth to send all the sites generated by the honest nodes and that the usage of each link is a constant fraction of its available capacity. For simplicity we can assume that no conflict occurs from the honest nodes so that, without adversary, the local DAG of each node is its main DAG. Due to multi-hop communications, the local DAGs of the honest nodes may differ but only with respect to the sites generated during the last D rounds, where D is the diameter of the communication graph. Take an arbitrary node u in this graph. We connect our adversary to node u, so that every sites received by u at round r is received by our adversary at round r + 1. In this arbitrary network, the attack is exactly the same, except for the number of rounds r W that the adversary waits before revealing the conflicting sub-DAG. Indeed, r W should be greater to take into account the propagation time and ensure that that the first adversarial site a has a cumulative weight greater than W in all the honest nodes (typically we should wait D more rounds compared to the previous case). As in the previous case, the adversary can broadcast its conflicting sub-DAG while ensuring not to induce congestion, for instance, at a rate two times greater than the honest. The topology of the network does not change the fact that after at most 2r W + D rounds, all the honest nodes have a greater probability to choose the adversarial sub-DAG for their next transactions. Trusted Nodes The use of trusted nodes is what makes IOTA currently safe against this kind of attacks. Indeed, a coordinator node regularly add a site to the DAG, confirming an entire conflict-free sub-DAG. Trusted sites act like milestones and any site confirmed by a trusted site is considered irreversible. However if the trusted node is compromised or is offline for too long, the other nodes are left alone. The current implementation of IOTA uses a trusted node called the coordinator and plans to either remove it, or replace it by a set of distributed trusted nodes. One can observe that the crypto-currency Byteball [START_REF] Churyumov | Byteball: a decentralized system for transfer of value[END_REF] uses a special kind of trusted nodes called witnesses. They are also used to resolve conflicts in the DAG. An important question could be: is the use of trusted node necessary to secure a distributed ledger based on a DAG? Avoiding Forks In the current protocol, conflicting sites cannot be confirmed by the same site. Part of the community already mentioned that this can cause some problem if the latency is high (i.e., if diameter of the communication graph is large). 
Indeed, by sending two conflicting sites to two different honest nodes, half of the network can start confirming one transaction, and the other half the other transaction. When a node finally receives the two conflicting transactions (assuming that convergence is instantaneous), a single site will be assumed correct and become part of the main DAG of all the honest nodes. However, all the sites confirming the other conflicting site are discarded, resulting in wasted hashing power and an increase in confirmation time for those sites, as a reattachment must occur. The cost for the adversary is small and constant, and the wasted hashing power depends on the maximum latency between any two nodes. One way to avoid this is to include conflicting sites in the DAG (like in the Byteball protocol [START_REF] Churyumov | Byteball: a decentralized system for transfer of value[END_REF] for instance), by issuing a site confirming the two conflicting sites and containing the information of which site is considered valid and which site is considered invalid. This special site, called a decider site, would be the only site allowed to confirm directly or indirectly two conflicting sites. This has the advantage that all the sites confirming the invalid one remain valid and do not need to be reattached. However, the same thing can happen if the adversary sends two conflicting decider sites to two ends of the network. But again, a decider site could be used to resolve any kind of conflict, including this one. Indeed, this may seem like a circular problem that potentially never ends, but every time a decider is issued, a conflict is resolved, and the same conflict could have happened even without the decider site. So having decider sites should not change the stability of the Tangle and only helps avoid reattaching sites. Conclusion We presented a model to analyze the Tangle and we used it to study the average confirmation time and the average number of unconfirmed transactions over time. Then, we defined the notion of assiduous honest majority that captures the fact that the honest nodes have more hashing power than the adversarial nodes and that all this hashing power is constantly used to create transactions. We proved that for any tip selection algorithm that has a maximal deterministic tip selection (which is the case for all currently known TSAs), the assiduous honest majority assumption is necessary to prevent a double-spending attack on the Tangle. Our analysis shows that honest nodes cannot stay at rest, and should be continuously signing transactions (even empty ones) to increase the weight of their local main sub-DAG. If not, their available hashing power cannot be used to measure the security of the protocol, as it is for the Bitcoin protocol. Indeed, having a huge number of honest nodes with a very large amount of hashing power cannot prevent an adversary from attacking the Tangle if the honest nodes are not using this hashing power. This conclusion may seem intuitive, but the fact that it is true for all tip selection algorithms (that have a maximal deterministic TSA) is something new that has not been proved before.
Figure 1: An example of a Tangle where each site has a weight of 1. In each site, the first number is its score and the second is its cumulative weight. The two tips (with dashed border) are not confirmed yet and have a cumulative weight of 1.
Lemma 1. If the number of tips is N and k new sites are issued, then the probability P^k_{N→N'} of having N' tips in the next round is (1/N^{2k}) {2k over N−N'+k} N!/(N'−k)!, where {a over b} denotes the Stirling number of the second kind S(a, b).
Figure 2: Stationary distribution of the number of tips, for different values of λ. For each value of λ, one can see that the number of tips is well centered around the average.
Figure 3: Expected number of rounds before the first confirmation, depending on the arrival rate of transactions. We see that it tends to 1.26 with λ. Recall that Conf = N_avg/λ, where N_avg refers to the average number of tips in the stationary state.
Figure 4: A and A_det are two possible extensions of G. The rectangle site conflicts with all sites in A_det, so that when executing the TSA on G ∪ A ∪ A_det, tips either from A or from A_det are selected. The strategy to construct A_det can be either to increase the number of children of G_tips or to increase their weight; both ways are presented here.
46,437
[ "2043" ]
[ "217648", "199013", "1003405" ]
01757907
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757907/file/AIM2017_Klimchik_Pashkevich_Caro_Furet_HAL.pdf
Alexandr Klimchik email: a.klimchik@innopolis.ru Anatol Pashkevich email: anatol.pashkevich@imt-atlantique.fr Stéphane Caro email: stephane.caro@ls2n.fr Benoît Furet email: benoit.furet@univ-nantes.fr Calibration of Industrial Robots with Pneumatic Gravity Compensators Keywords: Industrial robot, stiffness modeling, elastostatic calibration, pneumatic gravity compensator, design of calibration experiments I. INTRODUCTION Advancements in the shipbuilding and aeronautic industries demand high-precision and high-speed machining of huge hulls and fuselage components. For these tasks, industrial robots are more attractive compared to conventional CNC machines because of their large and easily extendable workspace, their capability to process complex-shape parts and their high-speed motion capability. However, processing of modern and contemporary materials, which are widely used in these industries, requires high processing forces affecting robot positioning accuracy [START_REF] Zhu | An off-line programming system for robotic drilling in aerospace manufacturing[END_REF][START_REF] Guillo | Impact & improvement of tool deviation in friction stir welding: Weld quality & real-time compensation on an industrial robot[END_REF][START_REF] Denkena | Enabling an Industrial Robot for Metal Cutting Operations[END_REF]. To reduce the external force impact on the positioning accuracy, robotic experts usually apply a technique that is based on compliance error estimation via manipulator stiffness modelling [START_REF] Dumas | Joint stiffness identification of industrial serial robots[END_REF][START_REF] Nubiola | Absolute calibration of an ABB IRB 1600 robot using a laser tracker[END_REF][START_REF] Alici | Enhanced stiffness modeling, identification and characterization for robot manipulators[END_REF][START_REF] Klimchik | Stiffness modeling for perfect and non-perfect parallel manipulators under internal and external loadings[END_REF] and relevant error compensation in the online or offline mode [START_REF] Schneider | Stiffness modeling of industrial robots for deformation compensation in machining[END_REF][START_REF] Klimchik | Compliance Error Compensation in Robotic-Based Milling[END_REF][START_REF] Klimchik | Compliance error compensation technique for parallel robots composed of non-perfect serial chains[END_REF][START_REF] Slavkovic | A method for off-line compensation of cutting force-induced errors in robotic machining by tool path modification[END_REF]. This approach is very efficient if the stiffness and geometric model parameters of the manipulator as well as the external forces are known. To estimate them, additional experimental studies are usually carried out [START_REF] Dumas | Joint stiffness identification of industrial serial robots[END_REF][START_REF] Klimchik | Identification of the manipulator stiffness model parameters in industrial environment[END_REF][START_REF] Wu | Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments[END_REF][START_REF] Hollerbach | Model Identification[END_REF], which allows the user to obtain an extended geometric model suitable for compliance error compensation. Another practical solution is based on enhancing the robot stiffness by means of increasing the link cross-sections.
However, this increases the mass of the robot components and causes additional end-effector deflections, which are usually reduced by means of different types of gravity compensators. Integrating mechanical compensators into the manipulator kinematics, however, essentially complicates the stiffness modelling, because the conventional serial architecture is transformed into a quasi-serial one that contains a kinematic closed loop. The stiffness modelling of industrial manipulators with mechanical gravity compensators is quite a new problem. There is a rather limited number of works dealing with the impact of gravity compensators on the manipulator force-deflection relation [START_REF] Wu | Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments[END_REF][START_REF] Klimchik | Identification of geometrical and elastostatic parameters of heavy industrial robots[END_REF][START_REF] Klimchik | Stiffness Modeling of Robotic Manipulator with Gravity Compensator[END_REF], while some works are devoted to compensator design [START_REF] Arakelian | Gravity compensation in robotics[END_REF][START_REF] Cho | Design of a Static Balancing Mechanism for a Serial Manipulator With an Unconstrained Joint Space Using One-DOF Gravity Compensators[END_REF] and to software-based balancing solutions [START_REF] De Luca | A PD-type regulator with exact gravity cancellation for robots with flexible joints[END_REF][START_REF] De Luca | PD control with on-line gravity compensation for robots with elastic joints: Theory and experiments[END_REF]. In contrast, a number of methods for stiffness analysis have been developed for conventional strictly serial manipulators [START_REF] Alici | Enhanced stiffness modeling, identification and characterization for robot manipulators[END_REF][START_REF] Chen | Conservative congruence transformation for joint and Cartesian stiffness matrices of robotic hands and fingers[END_REF][START_REF] Salisbury | Active stiffness control of a manipulator in Cartesian coordinates[END_REF] and strictly parallel mechanisms [START_REF] Klimchik | Stiffness modeling for perfect and non-perfect parallel manipulators under internal and external loadings[END_REF][START_REF] Yan | Stiffness analysis of parallelogram-type parallel manipulators using a strain energy method[END_REF][START_REF] Li | Stiffness analysis for a 3-PUU parallel kinematic machine[END_REF][START_REF] Deblaise | A systematic analytical method for PKM stiffness matrix calculation[END_REF][START_REF] Gosselin | Stiffness analysis of parallel mechanisms using a lumped model[END_REF][START_REF] Merlet | Parallel Mechanisms and Robots[END_REF]. At the same time, only a limited number of works deals with the stiffness modelling of so-called quasi-serial architectures incorporating internal closed loops [START_REF] Klimchik | Identification of geometrical and elastostatic parameters of heavy industrial robots[END_REF][START_REF] Klimchik | Serial vs. quasi-serial manipulators: Comparison analysis of elasto-static behaviors[END_REF][START_REF] Subrin | New Redundant Architectures in Machining: Serial and Parallel Robots[END_REF][START_REF] Guo | A multilevel calibration technique for an industrial robot with parallelogram mechanism[END_REF][START_REF] Vemula | Stiffness Based Global Indices for Structural Evaluation of Anthropomorphic Manipulators[END_REF].
To our knowledge, the simplest and efficient way to take into account the influence of gravity compensator is utilization of non-linear virtual springs in the frame of the conventional VJM technique [START_REF] Klimchik | Stiffness modeling for perfect and non-perfect parallel manipulators under internal and external loadings[END_REF][START_REF] Pashkevich | Enhanced stiffness modeling of manipulators with passive joints[END_REF][START_REF] Pashkevich | Stiffness analysis of overconstrained parallel manipulators[END_REF]. This approach was originally proposed in our previous works [START_REF] Klimchik | Identification of geometrical and elastostatic parameters of heavy industrial robots[END_REF][START_REF] Klimchik | Stiffness Modeling of Robotic Manipulator with Gravity Compensator[END_REF] and successfully applied to the manipulators the springbased gravity compensators. However, some additional efforts are required to adapt it to the case of robots with pneumatic compensators, which progressively replace their counterparts in new models of heavy industrial robots available on the market. This paper proposes a new modification of the VJMbased stiffness modelling technique for the quasi-serial industrial manipulators with a pneumatic gravity compensator that creates a kinematic closed-loop violating the stiffness modelling principles, which are used for pure serial robot architecture. The main attention is paid to the identification of the model parameters and calibration experiment planning. The developed approach is confirmed by the experimental results that deal with compliance error compensation for robotic cell employed in manufacturing of large dimensional components. To address these problems the remainder of the paper is organized as follows. Section 2 presents the stiffness modelling for pneumatic gravity compensator. Section 3 is devoted to the elastostatic calibration and design of calibration experiments. Section 4 deals with experimental study. Section 5 summarizes the main contributions of the paper. The mechanical structure and principal components of pneumatic gravity compensator considered here is presented in Fig. 1a; its equivalent model is shown in Fig. 1b. The mechanical compensator is a passive mechanism incorporating a constant cross-section cylinder and a constant volume gas reservoir. The volume occupied by the gas linearly depends on the piston position that defines the internal pressure of the cylinder. It is clear that this mechanism can be treated as a non-linear virtual spring influencing on the manipulator stiffness behavior. It is worth mentioning that in general case the gas temperature has impact on the pressure inside the tank, which defines the compensating force. Nevertheless, one can assume that in the case of continuous or periodical manipulator movements the gas temperature remains almost constant, i.e. the process of the gas compression-decompression can be assumed to be the isothermal one. In the frame of the manipulator model, the compensator is attached to the first and second links that creates a closedloop acting on the second actuated joint. This particularity allows us to adapt the conventional stiffness model of the serial manipulator (with constant joint stiffness matrix θ K ) by introducing the configuration dependent joint stiffness matrix θ () Kq that takes into account the compensator impact and depends on the vector of actuated coordinates q . 
In this case, the Cartesian stiffness matrix C K of the robotic ma- nipulator can be presented in the following form Calibration of industrial robots with pneumatic gravity compensators    1 1T C θ θ θ ()      K J K q J   where θ J is the Jacobian with respect to the virtual joint coordinates θ (in the case of industrial robots it is usually equivalent to the kinematic Jacobian computed with respect to actuated coordinates q ). Thus, to obtain the stiffness model of the industrial robot with the pneumatic gravity compensator it is required to determine the non-linear joint stiffness matrix θ () Kq describing elasticity of both actuators and the gravity compensation mechanism. It should be mentioned that in the majority of works devoted to the stiffness analysis of the serial manipulators the matrix θ K is assumed to be a constant and strictly diagonal one [START_REF] Salisbury | Active stiffness control of a manipulator in Cartesian coordinates[END_REF][START_REF] Gosselin | Stiffness analysis of parallel mechanisms using a lumped model[END_REF][START_REF] Klimchik | Stiffness matrix of manipulators with passive joints: computational aspects[END_REF]. To find the desired matrix θ () PP varies with the robot motions and non-linearly depends on the angle 2 q . Below, this distances are denoted as follows: Kq 12 , L P P  , 12 , a P P  , 01 , s PP  . In addition, let us introduce parameters  ,  , x a and y a defining relevant locations of points P0, P1, P2 (see Fig. 1b). This allows us to compute the compensator length s using the following expression  2 2 2 2 • • •co 2 s( ) aL s a L q        which defines the non-linear function 2 () s q . For this geometry, the impact of the gravity compensator can be taken into account by replacing the considered quasiserial architecture by the serial one, where the second joint stiffness coefficient is modified in order to include elasticity of both the actuator and compensator. To find relevant nonlinear expression for this coefficient, let us present the static torque in the second joint 2 M as a geometric sum of two components. The first of them is caused by the deflection 2 q  in the mechanical transmission of the second actuated joint and can be expressed in a usual way as  2 2 2 2 sin( •• ) qS L a M K q F q s       where both the force S F and the compensator length s de- pend on the joint variable 0 2 2 2 qq q   . To find the compensating force S F , let us use the iso- thermal process assumption that yields the relation P V const  , where P is the tank pressure, 00 () V A s s V     is the corresponding internal volume, A is the piston area, 0 s is compensator link length correspond- ing to zero compensating force and 0 V is the tank volume corresponding to the atmospheric pressure () •• ( sin( ) ) q A s s L A s s V a M K q P q s          and present in a more compact form  2 2 0 2 2 0 s n( ) • i • q V q M ss a L q s KP s s           where a constant 00 / V s V s A   is the equivalent distance. Further, after computing the partial derivative ( ) ( ) ) () V V q V P L a s s a L q ss s s s ss q a L q K q K s s                               which is obviously highly non-linear with respect to manipulator configuration (here, s is also a non-linear function of 2 q ). Nevertheless, it allows us to compute an relevant stiff- ness coefficient 2 K for the equivalent serial chain and direct- ly apply eq. 
( 1) to evaluate stiffness of the quasi-serial manipulator with pneumatic gravity compensator. It should be mentioned that in practice the compensator parameters 0 , V s s and actuator stiffness coefficients 1 2 3 6 , , ,..., K K K K are usually not given in the robot datasheets, so they should be identified via dedicated experimental study. For this reason, the following Section focuses on the identification of this extended set of the manipulator elastostatic parameters. III. ELASTOSTATIC PARAMETERS IDENTIFICATION A. Methodology In the frame of the VJM-based modelling approach developed for serial kinematic chains [START_REF] Salisbury | Active stiffness control of a manipulator in Cartesian coordinates[END_REF][START_REF] Pashkevich | Enhanced stiffness modeling of manipulators with passive joints[END_REF] and adapted here for the case of quasi-serial manipulators with pneumatic gravity compensators, the desired stiffness model parameters describe elasticity of the virtual springs located in the actuated joints of the manipulator, and also compensator parameters 0 , V s s of defining preloading of the compensator spring and the equivalent distance for the tank volume , , ,..., K K K K used in previous sec- tion) and the compensator elastic parameters as 0 , V s s . To find the desired set of elastic parameters, the robotic manipulator sequentially passes through several measurement configurations where the external loading is applied to the specially designed end-effector presented in Fig. 2 (it allows us to generate both forces and torques applied to the manipulator). Using the absolute measurement system (the laser tracker Leica AT900, the Cartesian coordinates of the reference points are measured twice, before and after loading. To increase identification accuracy, it is reasonable to have several markers on the end effector (reference points) and to apply the loading of the maximum allowed magnitude. It should be mentioned that to avoid singularities caused by numerical routines, the external force/torque directions should not be the same for all calibration experiments (while from the practical view point the mass-based gravity loading is the most attractive). Thus, the calibration experiments yield the dataset that includes values of the manipulator joint coordinates   i q , applied forces/torques   B. Identification algorithm To take into account the compensator influence while using classical approach developed for strictly serial manipulators without compensators [START_REF] Alici | Enhanced stiffness modeling, identification and characterization for robot manipulators[END_REF], it was proposed below to use in the second joint an equivalent virtual spring with nonlinear stiffness, which depends on the joint coordinate 2 q (see eq. ( 1)). Using this idea, it is convenient to consider several aggregated compliances 2 i k  corresponding to each different value of angle 2 q . This idea allows us to linearize the identification equations with respect to extended set of model parameters and that can be easily solved using standard least-square technique. Let us denote this extended set of desired parameters as TT i i i i ni ni i im    F A F J J J J   that is usually used in stiffness analysis of serial manipulators. Here, ni J denotes the manipulator Jacobian column, superscript '(p)' stands for the Cartesian coordinates (position without orientation). 
Transformation from i A to () p i B is ra-ther trivial and is based on the extraction from i A the first three lines and inserting in it several zero columns. In this case, the elastostatic parameters identification can be reduced to the following optimization problem  0 ,, ( ) ( ) 1 ( ) ( ) min jc m p T p i i i i k k i F          B k p B k p   which yields to the following solution  1 ( ) ( ) ( ) 11 • TT mm p p p i i i i ii                 k B B B p   where the parameters 1 3 6 , ,..., k k k describe the compliance of the virtual joints #1,#3,...#6, while the rest of them 21 22 , ... kk present an auxiliary dataset allowing to separate the compliance of the joint #2 and the compensator parameters 0 , V s s . Using eq. ( 7), the desired optimization problem can be written as   2 0 2 2 00 2 2 22 1 2 , 0 2 , () ) ( ) m sin ( cos( in ) sin qV q m i iV i i V i ii s V q K s i P L a s s a L q ss s s s ss q aL q s K s                                     where q m is the number of different angles 2 q in the exper- imental data. It is obvious that eq. ( 12) is highly non-linear and can be solved numerically only. Thus, the proposed modification of the previously developed calibration technique allows us to find the manipulator and compensator parameters. An open question, however, is how to find the set of measurement configurations that ensure the lowest impact of the measurement noise. C. Design of calibration experiments The goal of the calibration experiment design is to select a set of robot configurations/external loadings   , ii qF that ensure the best identification accuracy. The key issue is to rang plans of experiments in accordance with selected performance measure. This problem is well known in the classical regression analysis; however, the results are not suitable for non-linear case of the elastostatic calibration and require additional efforts. Here, an industry oriented performance measure is used, which evaluates the calibration plan quality [START_REF] Wu | Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments[END_REF][START_REF] Klimchik | Identification of geometrical and elastostatic parameters of heavy industrial robots[END_REF]. Its physical meaning is the robot positioning accuracy (under the loading), which is achieved after compliance error compensation based on the identified elastostatic parameters. Assuming that experiments include measurement errors i ε , covariance matrix for the parameters k can be expressed as       1 ( ) ( ) 1 1 ( ) ( ) ( ) ( ) 11 cov( ) E T TT m pp ii i mm p T p p p i i i i i i ii         k B B B ε ε B B B   Following independent identically distributed assumption with zero expectation and standard deviation 2  for the measurement errors, expression (13) can be simplified to    1 2 ( ) ( ) 1 cov( ) T m pp ii i      k B B   Hence, the impact of the measurement errors on the accuracy of the identified parameters k is defined by the matrix ( ) ( ) 1 T m pp ii i  BB (in regression analysis it is known as the in- formation matrix). It is evident that in industrial practice the most important issue is not the parameters identification accuracy, but their impact on the robot positioning accuracy. 
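As a purely illustrative sketch of this linear identification step, the normal equations above can be assembled directly from the experimental data. The container names B_list and p_list and the noise level sigma below are placeholders for the observation matrices, measured displacements and measurement-error estimate obtained from the calibration experiments, not software used in the paper.

import numpy as np

def identify_parameters(B_list, p_list, sigma=None):
    # B_list: observation matrices B_i^(p), one per loaded configuration
    # p_list: corresponding end-effector displacement vectors under loading
    # Least-squares estimate: k = (sum_i B_i^T B_i)^(-1) sum_i B_i^T p_i
    info = sum(B.T @ B for B in B_list)                      # information matrix
    rhs = sum(B.T @ p for B, p in zip(B_list, p_list))
    k = np.linalg.solve(info, rhs)
    # cov(k) = sigma^2 * (sum_i B_i^T B_i)^(-1) under i.i.d. measurement noise
    cov = None if sigma is None else sigma**2 * np.linalg.inv(info)
    return k, cov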
Considering that the end-effector accuracy varies throughout the workspace and highly depends on the manipulator configuration, it is proposed to evaluate the calibration accuracy in a typical manipulator configuration ("test-pose") provided by the user. For the most of applications of heavy industrial robots, the test pose is usually related to the typical machining configuration 0 q and corresponding external loading 0 F related to the corresponding technological process. For the so-called test-pose the mean square value of the positioning error will be denoted as 2 0  and the matrix () p i A corresponding to it as () 0 p A . It should be noted that that the proposed approach operates with a specific structure of the parameters included in the vector k , where the second joint is presented by several components 21 22 , ... kk while the other joints are described by a single parameter , ... k k k . This motivates further re- arrangement of the vector k and replacing it by several vectors ) cov( ) T j j j   k k k , the performance measure 2 0  can be presented as    1 ( ) ( ) 2 00 2 ) ( 0 1 1 () trace T q T m j p j p m pp ii i j            A A A A   Based on this performance measure, the calibration experiment design can be reduced to the following optimization problem    1 ( ) ( ) 00 1 { , } ( ) ( ) 1 trace min T q T ii m j p j p i pp j i m i          qF A A A A   subject to max , 1.. i F i m  F whose solution gives a set of the desired manipulator configurations and corresponding external loadings. It is evident that its analytical solution can hardly be obtained and a numerical approach is the only reasonable one. IV. EXPERIMENTAL STUDY The developed technique was applied to the elastostatic calibration of robot Kuka KR-120. The parameters to be identified were the compliances j k of the actuated joints and the gravity compensator parameters 0 , V s s . To generate de- flections in the actuated joints, the gravity forces 120 kg were applied to the robot end-effector (see Fig 3). The Cartesian coordinates of three markers located on the tool (see Fig, 2) have been measured before and after the loading. To find optimal measurement configurations for calibration, the design of experiments was used for six different angles 2 q that are distributed between the joint limits. For each 2 q from three to seven optimal measurement configurations were found, which satisfy joint limits and physical constraints related to the possibility carry out experiments. In total 31 different measurement configurations and 186 measurements were considered for the identification, from which 7 physical parameters were obtained. The obtained experimental data have been processed using the identification algorithm presented in Section 3. Identified values for the extended set of joint compliances (for 6 different angles q2) and their confidence intervals are presented in Table 1. As follows from this results, wrist compliances were identified with lower accuracy. The reason for it is smaller shoulder from the applied external forces comparing with manipulator joints. Relatedly small accuracy of first joint due to a smaller number of measurements in the experiments in which the deflections were generated in the first joint. Further, obtained compliances k21… k26 were used to estimate pneumonic compensator parameters by solving optimization problem [START_REF] Klimchik | Identification of the manipulator stiffness model parameters in industrial environment[END_REF]. 
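Since the expression linking the aggregated compliances k21...k26 to the compensator parameters is highly non-linear, this last identification step is conveniently performed numerically. The sketch below only illustrates the idea: the configuration-dependent compliance of joint 2 is modelled from the law-of-cosines geometry of Section 2 with an assumed isothermal-type force law, and the unknowns are fitted with SciPy. All symbol names, the simplified force expression and the initial guesses are assumptions introduced for this example, not the exact formulas or data of the paper.

import numpy as np
from scipy.optimize import least_squares

def k2_model(q2, K2_act, s0, sV, P0A, a, L, offset, h=1e-6):
    # Equivalent compliance of joint 2: 1 / (K2_act + dM_comp/dq2), where the
    # compensator torque is M_comp(q2) = F(s) * a*L*sin(q2 + offset) / s,
    # s(q2) follows the law of cosines and F(s) is an assumed isothermal-type law.
    def M_comp(q):
        s = np.sqrt(a**2 + L**2 - 2.0*a*L*np.cos(q + offset))
        F = P0A * (sV / (s0 + sV - s) - 1.0)
        return F * a * L * np.sin(q + offset) / s
    dM = (M_comp(q2 + h) - M_comp(q2 - h)) / (2.0*h)
    return 1.0 / (K2_act + dM)

def fit_compensator(q2_vals, k2_meas, a, L, offset, x0=(3.0e6, 0.25, 0.8, 1.0e3)):
    # q2_vals: joint-2 angles used in the experiments; k2_meas: identified k21..k26
    # x0: initial guess for (K2_act, s0, sV, P0A)
    res = lambda p: np.array([k2_model(q, *p, a, L, offset) - k
                              for q, k in zip(q2_vals, k2_meas)])
    return least_squares(res, np.asarray(x0)).x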
The identified joint compliances can be used to predict robot deformations under the external loading.

V. CONCLUSIONS

The paper presents a new approach for the modelling and identification of the elastostatic parameters of heavy industrial robots with a pneumatic gravity compensator. It proposes a methodology and data-processing algorithms for the identification of the elastostatic parameters of both the gravity compensator and the manipulator. To increase the identification accuracy, the design of experiments has been used, aimed at a proper selection of the measurement configurations. In contrast to other works, it is based on an industry-oriented performance measure that is related to the robot accuracy under loading. The advantages of the developed techniques are illustrated by an experimental study of the industrial robot Kuka KR-120, for which the joint compliances and the parameters of the gravity compensator have been identified.

Figure 1. Pneumatic gravity compensator and its model.
Let us consider the compensator geometry in detail. As follows from Fig. 1b, the compensator geometrical model contains three principal node points P0, P1 and P2, where P0 and P1 define the passive joint rotation axes and P2 defines the second actuated joint axis. The second component of the joint torque is produced by the force generated by the gravity compensator; the corresponding sine term can be computed from the triangle P0P1P2, taking into account that the compensating force F_S depends on the difference between the internal and external pressures. In the frame of this model, the manipulator joint compliances are collected in the vector k, and the linearized force-deflection relation with respect to this vector takes the form Δp_i = B_i^(p) k, where Δp_i is the vector of the end-effector displacements under the external loading F_i.

Figure 2. End-effector used for the elastostatic calibration experiments and its model.
Figure 3. Experimental setup for the identification of the elastostatic parameters.

TABLE I. ELASTO-STATIC PARAMETERS OF ROBOT KUKA KR-120
Parameter            Value   CI
k1,  [rad×μm/N]      1.13    ±0.15  (13.3%)
k21, [rad×μm/N]      0.34    ±0.004 (1.1%)
k22, [rad×μm/N]      0.36    ±0.005 (1.4%)
k23, [rad×μm/N]      0.35    ±0.005 (1.4%)
k24, [rad×μm/N]      0.28    ±0.007 (2.6%)
k25, [rad×μm/N]      0.32    ±0.011 (3.6%)
k26, [rad×μm/N]      0.26    ±0.007 (2.8%)
k3,  [rad×μm/N]      0.43    ±0.007 (1.8%)
k4,  [rad×μm/N]      0.95    ±0.31  (31.8%)
k5,  [rad×μm/N]      3.82    ±0.27  (7.0%)
k6,  [rad×μm/N]      4.01    ±0.35  (8.7%)

ACKNOWLEDGMENTS
The work presented in this paper was partially funded by Innopolis University and the project Partenariat Hubert Curien SIAM 2016 France-Thailand.
27,334
[ "125", "8778", "10659" ]
[ "460206", "473973", "481387", "481388", "473973", "441569", "481388", "473973" ]
01757920
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757920/file/ICMIT2017_Gao_Pashkevich_Caro_HAL.pdf
Jiuchun Gao email: jiuchun.gao@ls2n.fr Anatol Pashkevich Sté Phane Caro Optimal Trajectories Generation in Robotic Fiber Placement Systems The paper proposes a methodology for optimal trajectories generation in robotic fiber placement systems. A strategy to tune the parameters of the optimization algorithm at hand is also introduced. The presented technique transforms the original continuous problem into a discrete one where the time-optimal motions are generated by using dynamic programming. The developed strategy for the optimization algorithm tuning allows essentially reducing the computing time and obtaining trajectories satisfying industrial constraints. Feasibilities and advantages of the proposed methodology are confirmed by an application example. Introduction Robotic fiber placement technology has been increasingly implemented recently in aerospace and automotive industries for fabricating complex composite parts [START_REF] Gay | Composite materials: design and applications[END_REF][START_REF] Gallet-Hamlyn | Multiple-use robots for composite part manufacturing[END_REF]. It is a specific technique that uses robotic workcell to place the heated fiber tows on the workpiece surface [START_REF] Peters | Handbook of composites[END_REF]. Corresponding robotic systems usually include a 6-axis industrial robot and a one-axis positioner (see Figure 1), which are kinematically redundant and provides the user with some freedom in terms of optimization of robot and positioner motions. To deal with the robotic system redundancy, a common technique based on the pseudo-inverse of kinematic Jacobian is usually applied. However, as follows from relevant studies, this standard approach does not satisfy the real-life industrial requirements of the fiber placement [START_REF] Kazerounian | Redundancy resolution of serial manipulators based on robot dynamics[END_REF][START_REF] Buss | Introduction to inverse kinematics with jacobian transpose, pseudoinverse and damped least squares methods[END_REF]. In literature, there is also an alternative technique (that deals with multi-goal tasks) that is based on conversion of the original continuous problem to the combinatorial one [START_REF] Gueta | Coordinated motion control of a robot arm and a positioning table with arrangement of multiple goals[END_REF][START_REF] Gueta | Hybrid design for multiple-goal task realization of robot arm with rotating table[END_REF], but it only generates trajectories for point-to-point motions, e.g. for spot welding applications. A slightly different method was introduced in [START_REF] Dolgui | Manipulator motion planning for high-speed robotic laser cutting[END_REF][START_REF] Pashkevich | Multiobjective optimization of robot motion for laser cutting applications[END_REF][START_REF] Zhou | Off-line programming system of industrial robot for spraying manufacturing optimization[END_REF], and it was successfully applied to laser-cutting and arc-welding processes where the tool speed was assumed to be constant (which is not valid in the considered problem). Another approach has been proposed in [START_REF] Debout | Tool path smoothing of a redundant machine: Application to Automated Fiber Placement[END_REF], where the authors concentrated on the tool path smoothing in Cartesian space in order to decrease the manufacturing time in fiber placement applications. 
For the considered process, where the tool speed variations are allowed (in certain degree), a discrete optimization based methodology was proposed in our previous work [START_REF] Gao | Manipulator Motion Planning in Redundant Robotic System for Fiber Placement Process[END_REF]. It allows the user to convert the original continuous problem to the combinatorial one taking into account particularities of the fiber placement technology and to generate time-optimal trajectories for both the robot and the positioner. Nevertheless, there are still a number of open questions related to selection of the optimization algorithm parameters (i.e., its "tuning") that are addressed in this paper and targeted to the improvement of the algorithm efficiency and the reduction of the computing time. Robotic system model In practice, the procedure of off-line programming for robotic fiber placement is implemented in the following way. The fiber placement path is firstly generated and discretized in CAM system. Further, the obtained set of task points is transformed into the task graph that describes all probable configurations of the robot and the positioner joints. The motion generator module finds the optimal trajectories that are presented as the "best" path on the graph. Finally, the obtained motions are converted into the robotic system program by the post processor. The core for the programming of this task is a set of optimization routines addressed in this paper. To describe the fiber placement task, let us present it as a set of discrete task frames n i F i task ,... 2 , 1 , ) (  , in such a way that the X-axis is directed along the path direction and Z-axis is normal to the workpiece surface pointing outside of it (see Figure 1). Using these notations, the task locations can be described by 4 4 homogenous transformation matrices and the considered task is formalized as follows: where all vectors of positions and orientations are expressed with respect to the workpiece frame (see superscript "w"). To execute the given fiber placement task, the robot tool must visit the frames defined by (1) as fast as possible. ) ( ) ( ) 2 The considered robotic system, shown in Figure 1, is composed of an industrial robot and an actuated positioner. Their spatial configurations can be described by the joint coordinates R q and P q respectively. The task frames can be presented in two ways using the robot and positioner kinematics that are expressed as ) ( R R g q and ) ( P P q g , respectively. To obtain the kinematic model of the whole system that is expressed as a closed loop containing the robot, the workpiece and the positioner, a global frame 0 F is selected. Then, the tool frame , tool F and task frame ) (i task F can be aligned in such a way that: (i) the origins of the two frames coincide; (ii) Z-axes are opposite; (iii) X-axes have the same direction. Due to the foregoing closed-loop, two paths can be followed to express the transformation matrices from the global frame to the task frames, namely, n i q g g i task w i P P Pbase task Tool i R R Rbase ,... 2 , 1 ; ) ( ) ( ) ( ) ( 0 ) ( 0       T T T q T (2) Equation ( 2) does not lead to a unique solution for R q and P q as the robotic system, i.e., robot and positioner, is kinematically redundant. Therefore, the optimum robot and positioner configurations can be searched based on specific criteria. 
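To make the use of this redundancy concrete, Eq. (2) can be rearranged so that, for any candidate positioner angle, the tool pose required from the robot is known and a standard inverse kinematics routine can then be applied. The short Python sketch below assumes 4x4 homogeneous transforms stored as NumPy arrays and a user-supplied robot_ik function; these names are illustrative placeholders rather than software from the paper.

import numpy as np

def robot_goal_pose(T0_Rbase, T0_Pbase, g_P, q_P, T_task_w, T_tool):
    # Closed loop: T0_Rbase * g_R(q_R) * T_tool = T0_Pbase * g_P(q_P) * T_task_w
    # => g_R(q_R) = inv(T0_Rbase) @ T0_Pbase @ g_P(q_P) @ T_task_w @ inv(T_tool)
    return (np.linalg.inv(T0_Rbase) @ T0_Pbase @ g_P(q_P)
            @ T_task_w @ np.linalg.inv(T_tool))

# For each sampled positioner angle, candidate robot configurations then follow from
# inverse kinematics, e.g.:
#   for q_P in np.arange(qP_min, qP_max, dqP):
#       for q_R in robot_ik(robot_goal_pose(T0_Rbase, T0_Pbase, g_P, q_P, T_task_w, T_tool)):
#           ...  # keep the configuration as a candidate state of the search space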
Algorithm for trajectories generation To take advantage of the kinematic redundancy, it is reasonable to partition the desired motion between the robot and the positioner ensuring that the technology tool executes the given task with smooth motion as fast as possible. To present the problem in a formal way, let us define the functions ) (t R q and ) (t q P describing the robot and positioner motion as a function of time ] , 0 [ T t  . Additionally, a sequence of time instants } ,... , { 2 1 n t t t corresponds to the cases where the tool visits the locations defined by (1), and T t t n   , 0 1 . As a result, the problem at hand is formulated as an optimization problem aiming at minimizing the robot processing time ) ( ), ( min t q t P R T q  (3) This problem is subjected to the equality constraints [START_REF] Gallet-Hamlyn | Multiple-use robots for composite part manufacturing[END_REF] and some inequality constraints associated to the capacities of the robot/positioner actuators that are defined by upper bounds of the joint velocities and accelerations. Besides, the collision constraints verifying the intersections between the system components are also taken into account. ) ( 0 0 )) ( ( )) ( ( i task w i P P Pbase task Tool i R R Rbase t q g t g T T T q T      defined in For this considered problem aiming at finding desired continuous function of ) (t R q and ) (t q P , there is no standard approach that can be applied to straightforwardly. The main difficulty here is that the equality constraints are written for the unknown time instants } ,... , { 2 1 n t t t . Besides, this problem is nonlinear and includes a redundant variable. For these reasons, this paper presents a combinatorial optimization based methodology to generate the desired trajectories. For the considered robotic system, there is one redundant variable with respect to the given task. It is convenient here to treat P q as the redundant one since it allows us to use the kinematic models of the robot and the positioner independently and to consider the previous equality constraints. To present the problem in a discrete way, the allowable domain of ] , [ max min P P P q q q  is sampled with the step P q  as m k k q q q P P k P ,... 1 , 0 ; min ) (      , where P P P q q q m    ) ( min max . Then, applying sequentially the positioner direct kinematics and the robot inverse kinematics, a set of possible configuration states for the robotic system can be obtained as n i t q g g t tool task i task w i k P P Pbase Rbase R i k R ,... 2 , 1 ); , )) ( ( ( ) ( ) ( ) ( 0 0 1 ) (        T T T T q , where μ is a configuration index vector corresponding to the robot posture. Therefore, for i th task location, a set of candidate configuration states can be obtained, i.e., T in joint space as above, the original task can be converted into the directed graph shown in Figure 2. It should be noted that some of the configuration cells should be excluded because of violation of the collision constraints or the actuator joint limits. These cases are denoted as "inadmissible" in Figure 2, and are not connected to any neighbor. Here, the allowable connection between the graph nodes are limited to the subsequent configuration states ) , ( ) , ( i k task i k task     L L , and the edge weights correspond to the minimum robot processing time restricted by the maximum velocities and accelerations of the robot and the positioner. 
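A minimal sketch of the edge-weight computation is given below: the travel time between two admissible configuration states of consecutive columns is bounded by the slowest joint, i.e. the maximum over all robot and positioner joints of the displacement divided by the corresponding velocity limit. Treating the positioner angle as a seventh coordinate and the variable names are assumptions made for illustration; the acceleration limits are handled separately, as discussed below.

import numpy as np

def edge_weight(state_a, state_b, qdot_max):
    # state_a, state_b: concatenated joint vectors [q_R (6 values), q_P] of two
    # nodes located in consecutive columns of the task graph
    # qdot_max: per-joint velocity limits of the robot and the positioner
    dq = np.abs(np.asarray(state_b) - np.asarray(state_a))
    return float(np.max(dq / np.asarray(qdot_max)))   # time imposed by the slowest joint

# Nodes violating joint limits or collision checks simply receive no incident edges,
# which reproduces the "inadmissible" cells of the task graph.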
Using the discrete search space above, the considered problem is transformed to the classic shortest path searching and the desired solution can be represented as the sequence , where } ...{ } { } { n) , (k ,2) (k ,1) (k 6 ,... 1 , 0 ; ) max( ) , ( max ) ( 1 , ) ( , ) 1 , ( ) , ( 1 1        j q q q dist j k i j k i j i k task i k task i i i i  L L . It should be mentioned that the above expression takes into account the velocity constraints automatically and the acceleration constraints should be considered by means of the following formula: max 1 1 ) ( 1 , ) ( 1 ) ( ) ( 1 , ) ( ) ( ) ( 2 1 1 j i i i i k i j k i k k i j i q t t t t q t q t i i i i                     i j, i j, q q (5) where ) , ( ) 1 ( ) ( 1 1      ,i k task ,i k task i i i dist t L L and ) , ( ) 1 ( ) ( 1     ,i k task ,i k task i i i dist t L L . By discretizing the search space, the original problem is converted to a combinatorial one, which can be solved by using conventional way, e.g. However, this straightforward approach is extremely time-consuming and can be hardly accepted for industrial applications. For example, it takes over 20 hours to find a desired solution in a relatively simple case (two-axis robot and one-axis positioner), where the search space is built for 100 task points and the discretization step 1° (processor Intel® i5 2.67 GHz) [START_REF] Gao | Manipulator Motion Planning in Redundant Robotic System for Fiber Placement Process[END_REF]. Besides, known methods are not able to take into account the acceleration constraints that are necessary here. For these reasons, a problem-oriented algorithm taking into account the particularities of the graph based search space is proposed in this paper. The developed algorithm is based on the dynamic programming principle, aiming at finding the shortest path from } , { 1 ) 1 ( 1 k , k task  L to the current } , { ) ( i ,i k task k i  L . The length of this shortest path is denoted as i k d , . Then, the shortest path for the locations corresponding to the next } , { ) 1 ( k k,i task   L can be obtained by combining the optimal solutions for the previous column } , { ) ( k ,i k task    L and the distances between the task locations with the indices i and i+1, ; } ) , ( { ) ( ) 1 , ( , 1 , ,i k task i k task i k k i k dist d d        L L min (4) This formula is applied sequentially from the second column of the task graph to the last one, and the desired optimal path can be obtained after selection of the minimum length . This proposed algorithm is rather time-efficient since it takes about 30 seconds [START_REF] Gao | Manipulator Motion Planning in Redundant Robotic System for Fiber Placement Process[END_REF] to find the optimal solution for the above mentioned example. Tuning of trajectories generation algorithm For the proposed methodology, the discretization step for the redundant variable is a key parameter, which has a big influence on the algorithm efficiency. An unsuitable discretization step may lead either to a bad solution or high computational time. For this reason, a new strategy for the determination of the discretization step is proposed thereafter to tune the optimization algorithm. Influence of the discretization step Let us consider a simple case study that deals with a three-axis planar robotic system executing a straight-line-task (see Figure 3). For this problem, the fiber placement path is uniformly discretized into 40 segments. Relevant optimization results are presented in Table 1. 
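Before examining the influence of the discretization step on these results, the compactness of the recursion above can be illustrated with the following sketch: it scans the task graph column by column, stores backpointers, and recovers the optimal sequence of configuration states by backtracking. The heuristic handling of the acceleration constraints is omitted, and the edge_weight helper from the previous sketch is assumed; this is an illustration, not the authors' implementation.

import numpy as np

def shortest_path(columns, qdot_max, edge_weight):
    # columns[i]: list of admissible configuration states for task point i
    d = [np.zeros(len(columns[0]))]            # d[i][k]: best time to reach node (k, i)
    back = []
    for i in range(1, len(columns)):
        di = np.full(len(columns[i]), np.inf)
        bi = np.full(len(columns[i]), -1, dtype=int)
        for k, sk in enumerate(columns[i]):
            for lam, sl in enumerate(columns[i - 1]):
                c = d[i - 1][lam] + edge_weight(sl, sk, qdot_max)
                if c < di[k]:
                    di[k], bi[k] = c, lam
            # (acceleration checks would be inserted here before accepting an edge)
        d.append(di)
        back.append(bi)
    path = [int(np.argmin(d[-1]))]             # best node of the last column
    for bi in reversed(back):                  # backtrack to the first column
        path.append(int(bi[path[-1]]))
    return list(reversed(path)), float(np.min(d[-1]))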
It is clear that here (as well as in other cases) a smaller discretization step should provide better results, but there exists a reasonable lower bound related to an acceptable computing time. To estimate a reasonable discretization step for the considered fiber placement problem, let us analyze Table 1 in more detail. From Table 1, the discretization steps Δq_P ∈ {2°, 1°, 0.75°} are not acceptable because they lead to a robot processing time 20-50% higher than the optimal one. Moreover, in the case of Δq_P = 2°, the optimization algorithm generates a bad solution that does not take advantage of the positioner motion capabilities (the positioner is locked: q_P = const, q_R = var). The reason for this phenomenon is that the discretization step is so large that the positioner step-time is always higher than the robot moving time between subsequent task points. Another interesting phenomenon can be observed for slightly smaller discretization steps, where the algorithm may produce non-smooth intermittent rotation of the positioner (start-stop motion): without acceleration constraints, the optimization algorithm then generates a solution that includes only several steps where the positioner is not locked. In addition, it is noteworthy that in the case with acceleration constraints, reducing the discretization step from 2° to 1° leads to an even worse solution, where the robot processing time is about 10% higher. This phenomenon can be explained by the heuristic integration of the acceleration constraints into the optimization algorithm, which may slightly violate the dynamic programming principle. Nevertheless, further reduction of Δq_P restores the expected algorithm behavior. Hence, to apply the developed technique in practice, users need a simple "rule of thumb" for setting an initial value of Δq_P. The optimization algorithm can then be applied several times (sequentially decreasing Δq_P) until the objective function converges. To reduce the computing time in the case of small Δq_P, some local optimization techniques have also been developed by the authors.

Initial tuning of the optimization algorithm

To find a reasonable initial value of the discretization step, let us investigate in detail the robot and positioner motions between two sequential task locations. For smooth positioner motions, the corresponding increments of the coordinate q_P should include at least one discretization step Δq_P. To find the maximum admissible value of Δq_P, let us denote by Δφ the increment of q_P for the movement between two adjacent task locations (P_i, P_i+1) and by Δs the length of the path segment, which can also be treated as the arc length between P_i and P_i+1 around the positioner joint axis. Let us assume that the distance from a path point to the rotational axis is r, and that r_max corresponds to the furthest task location with respect to the positioner axis. To avoid undesired intermittent positioner rotations, the constraint Δs ≥ r_max·Δφ should be verified, since the positioner velocity is usually smaller than the velocity of the robot. This inequality can be rewritten in terms of the robot and positioner motion times (Δs/v_R^max for the robot and Δφ/q̇_P^max for the positioner) and combined with the requirement Δφ ≥ Δq_P, i.e. the number of positioner steps between adjacent task points is no less than one. Hence, the initial value of the discretization step should be set to Δq_P = Δs·q̇_P^max/(v_R^max·r_max) in order to provide acceptable motions of the robot and positioner.
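As a small worked example of this rule of thumb, the initial step can be computed directly from the path discretization and the actuator limits; the numerical values below are arbitrary and serve only to illustrate the order of magnitude obtained.

def initial_step_deg(ds, qdotP_max_deg, vR_max, r_max):
    # ds: path segment length [m]; qdotP_max_deg: positioner speed limit [deg/s]
    # vR_max: robot tool speed limit [m/s]; r_max: radius of the furthest task point [m]
    return ds * qdotP_max_deg / (vR_max * r_max)

print(initial_step_deg(ds=0.01, qdotP_max_deg=15.0, vR_max=0.6, r_max=0.7))  # about 0.36 deg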
For instance, for the previous case study, this expression gives a discretization step of about 0.5°, which allows generating trajectories that are very close to the optimal ones: the robot processing time is only 1% higher than the minimum value.

Conclusions

This paper contributes to the optimization of robot/positioner motions in redundant robotic systems for the fiber placement process. It proposes a new strategy for tuning the optimization algorithm. The developed technique converts the continuous optimization into a combinatorial one, where dynamic programming is applied to find time-optimal motions. The proposed tuning strategy allows essentially decreasing the computing time and generating the desired motions satisfying industrial constraints. The feasibility and advantages of the presented technique are confirmed by a case study. Future research will focus on the application of these results in real-life industrial environments.

Figure 1. Typical robotic fiber placement system (6-axis robot and one-axis positioner).
Figure 2. Graph-based representation of the discrete search space.
Figure 3. Three-axis planar robotic system and straight-line task.

Table 1. Robot processing time and computing time for different discretization steps Δq_P (computing times in parentheses)
Δq_P     without acceleration constraints    with acceleration constraints
2°       1.90 s (38 s)                       1.90 s (67 s)
1°       1.84 s (2 min)                      2.11 s (4 min)
0.75°    1.54 s (4 min)                      1.60 s (8 min)
0.5°     1.30 s (9 min)                      1.30 s (17 min)
0.25°    1.29 s (47 min)                     1.29 s (1.2 h)
0.1°     1.29 s (5 h)                        1.29 s (9 h)

Acknowledgments
This work has been supported by the China Scholarship Council (Grant N° 201404490018). The authors also acknowledge CETIM for the motivation of this research work.
19,487
[ "8778", "10659" ]
[ "473973", "473973", "481388", "473973", "441569" ]
01757864
en
[ "sdv" ]
2024/03/05 22:32:10
2018
https://amu.hal.science/hal-01757864/file/Desmarchelier%20et%20al%20final%20version.pdf
Charles Desmarchelier Véronique Rosilio email: veronique.rosilio@u-psud.fr David Chapron Ali Makky Damien Preveraud Estelle Devillard Véronique Legrand-Defretin Patrick Borel Damien P Prévéraud Molecular interactions governing the incorporation of cholecalciferol and retinyl-palmitate in mixed taurocholate-lipid micelles Keywords: bioaccessibility, surface pressure, bile salt, compression isotherm, lipid monolayer, vitamin A, vitamin D, phospholipid ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Introduction Retinyl esters and cholecalciferol (D 3 ) (Figure 1) are the two main fat-soluble vitamins found in foods of animal origin. There is a renewed interest in deciphering their absorption mechanisms because vitamin A and D deficiency is a public health concern in numerous countries, and it is thus of relevance to identify factors limiting their absorption to tackle this global issue. The fate of these vitamins in the human upper gastrointestinal tract during digestion is assumed to follow that of dietary lipids [START_REF] Borel | Vitamin D bioavailability: state of the art[END_REF]. This includes emulsification, solubilization in mixed micelles, diffusion across the unstirred water layer and uptake by the enterocyte via passive diffusion or apical membrane proteins [START_REF] Reboul | Proteins involved in uptake, intracellular transport and basolateral secretion of fat-soluble vitamins and carotenoids by mammalian enterocytes[END_REF]. Briefly, following consumption of vitamin-rich food sources, the food matrix starts to undergo degradation in the acidic environment of the stomach, which contains several enzymes, leading to a partial release of these lipophilic molecules and to their transfer to the lipid phase of the meal. Upon reaching the duodenum, the food matrix is further degraded by pancreatic secretions, promoting additional release from the food matrix, and both vitamins then transfer from oil-in-water emulsions to mixed micelles (and possibly other structures, such as vesicles, although not demonstrated yet). As it is assumed that only free retinol can be taken up by enterocytes, retinyl esters are hydrolyzed by pancreatic enzymes, namely pancreatic lipase, pancreatic lipase-related protein 2 and cholesterol ester hydrolase [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]. Bioaccessible vitamins are then taken up by enterocytes via simple passive diffusion or facilitated diffusion mediated by apical membrane proteins (Desmarchelier et al. 2017). The apical membrane protein(s) involved in retinol uptake by enterocytes is(are) yet to be identified but in the case of D 3 , three proteins have been shown to facilitate its uptake: NPC1L1 (NPC1 like intracellular cholesterol transporter 1), SR-BI (scavenger receptor class B member 1) and CD36 (Cluster of differentiation 36) [START_REF] Reboul | Proteins involved in uptake, intracellular transport and basolateral secretion of fat-soluble vitamins and carotenoids by mammalian enterocytes[END_REF]. Both vitamins then transfer across the enterocyte towards the basolateral side. The transfer of vitamin A is mediated, at least partly, by the cellular retinol-binding protein, type II (CRBPII), while that of vitamin D is carried out by unknown mechanisms. 
Additionally, a fraction of retinol is re-esterified by several enzymes (Borel & Desmarchelier 2017). Vitamin A and D are then incorporated in chylomicrons in the Golgi apparatus before secretion in the lymph. The solubilization of vitamins A and D in mixed micelles, also called micellarization or micellization, is considered as a key step for their bioavailability because it is assumed that the non-negligible fraction of fat-soluble vitamin that is not micellarized is not absorbed [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]. Mixed micelles are mainly made of a mixture of bile salts, phospholipids and lysophospholipids, cholesterol, fatty acids and monoglycerides [START_REF] Hernell | Physical-chemical behavior of dietary and biliary lipids during intestinal digestion and absorption. 2. Phase analysis and aggregation states of luminal lipids during duodenal fat digestion in healthy adult human beings[END_REF]). These compounds may form various self-assembled structures, e.g., spherical, cylindrical or disk-shaped micelles [START_REF] Walter | Intermediate structures in the cholate-phosphatidylcholine vesicle-micelle transition[END_REF][START_REF] Leng | Kinetics of the micelle-to-vesicle transition ; aquous lecithin-bile salt mixtures[END_REF] or vesicles, depending on their concentration, the bile salt/phospholipid ratio [START_REF] Walter | Intermediate structures in the cholate-phosphatidylcholine vesicle-micelle transition[END_REF], the phospholipid concentration, but also the ionic strength, pH and temperature of the aqueous medium (Madency & Egelhaaf 2010;[START_REF] Salentinig | Self-assembled structures and pKa value of oleic acid in systems of biological relevance[END_REF][START_REF] Cheng | Mixtures of lecithin and bile salt can form highly viscous wormlike micellar solutions in water[END_REF]. Fat-soluble micronutrients display large variations with regards to their solubility in mixed micelles [START_REF] Sy | Effects of physicochemical properties of carotenoids on their bioaccessibility, intestinal cell uptake, and blood and tissue concentrations[END_REF][START_REF] Gleize | Form of phytosterols and food matrix in which they are incorporated modulate their incorporation into mixed micelles and impact cholesterol micellarization[END_REF] and several factors are assumed to account for these differences (Desmarchelier & Borel 2017, for review). The mixed micelle lipid composition has been shown to significantly affect vitamin absorption. For example, the substitution of lysophospholipids by phospholipids diminished the lymphatic absorption of vitamin E in rats [START_REF] Koo | Phosphatidylcholine inhibits and lysophosphatidylcholine enhances the lymphatic absorption of alpha-tocopherol in adult rats[END_REF]. In rat perfused intestine, the addition of fatty acids of varying chain length and saturation degree, i.e. butyric, octanoic, oleic and linoleic acid, resulted in a decrease in the rate of D 3 absorption [START_REF] Hollander | Vitamin D-3 intestinal absorption in vivo: influence of fatty acids, bile salts, and perfusate pH on absorption[END_REF]. The effect was more pronounced in the ileal part of the small intestine following the addition of oleic and linoleic acid. 
It was suggested that unlike short-and medium-chain fatty acids, which are not incorporated into micelles, long-chain fatty acids hinder vitamin D absorption by causing enlargement of micelle size, thereby slowing their diffusion towards the enterocyte. Moreover, the possibility that D 3 could form self-aggregates in water [START_REF] Meredith | The Supramolecular Structure of Vitamin-D3 in Water[END_REF], although not clearly demonstrated, has led to question the need of mixed micelles for its solubilization in the aqueous environment of the intestinal tract lumen [START_REF] Rautureau | Aqueous solubilisation of vitamin D3 in normal man[END_REF][START_REF] Maislos | Bile salt deficiency and the absorption of vitamin D metabolites. In vivo study in the rat[END_REF]. This study was designed to compare the relative solubility of D 3 and RP in the aqueous phase rich in mixed micelles that exists in the upper intestinal lumen during digestion, and to dissect, by surface tension and surface pressure measurements, the molecular interactions existing between these vitamins and the mixed micelle components that explain the different solubility of D 3 and RP in mixed micelles. Materials and methods Chemicals 2-oleoyl-1-palmitoyl-sn-glycero-3-phosphocholine (POPC) (phosphatidylcholine, ≥99%; Mw 760.08 g/mol), 1-palmitoyl-sn-glycero-3-phosphocholine (Lyso-PC) (lysophosphatidylcholine, ≥99%; Mw 495.63 g/mol), free cholesterol (≥99%; Mw 386.65 g/mol), oleic acid (reagent grade, ≥99%; Mw 282.46 g/mol), 1-monooleoyl-rac-glycerol (monoolein, C18:1,-cis-9, Mw 356.54 g/mol), taurocholic acid sodium salt hydrate (NaTC) (≥95%; Mw 537.68 g/mol) ), cholecalciferol (>98%; Mw 384.64 g/mol; melting point 84.5°C; solubility in water: 10 -4 -10 -5 mg/mL; logP 7.5) and retinyl palmitate (>93.5%; Mw 524.86 g/mol; melting point 28.5°C; logP 13.6) were purchased from Sigma-Aldrich (Saint-Quentin-Fallavier, France). Chloroform and methanol (99% pure) were analytical grade reagents from Merck (Germany). Ethanol (99.9%), n-hexane, chloroform, acetonitrile, dichloromethane and methanol were HPLC grade reagents from Carlo Erba Reagent (Peypin, France). Ultrapure water was produced by a Milli-Q ® Direct 8 Water Purification System (Millipore, Molsheim, France). Prior to all surface tension, and surface pressure experiments, all glassware was soaked for an hour in a freshly prepared hot TFD4 (Franklab, Guyancourt, France) detergent solution (15% v/v), and then thoroughly rinsed with ultrapure water. Physico-chemical properties of D 3 and RP were retrieved from PubChem (https://pubchem.ncbi.nlm.nih.gov/). Micelle formation The micellar mixture contained 0.3 mM monoolein, 0.5 mM oleic acid, 0.04 mM POPC, 0.1 mM cholesterol, 0.16 mM Lyso-PC, and 5 mM NaTC [START_REF] Reboul | Lutein transport by Caco-2 TC-7 cells occurs partly by a facilitated process involving the scavenger receptor class B type I (SR-BI)[END_REF]. Total component concentration was thus 6.1 mM, with NaTC amounting to 82 mol%. Two vitamins were studied: crystalline D 3 and RP. Mixed micelles were formed according to the protocol described by [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]. 
Lipid digestion products (LDP) (monoolein, oleic acid, POPC, cholesterol and Lyso-PC, total concentration 1.1 mM) dissolved in chloroform/methanol (2:1, v/v), and D 3 or RP dissolved in ethanol were transferred to a glass tube and the solvent mixture was carefully evaporated under nitrogen. The dried residue was dispersed in Tris buffer (Tris-HCl 1mM, CaCl 2 5mM, NaCl 100 mM, pH 6.0) containing 5 mM taurocholate, and incubated at 37 °C for 30 min. The solution was then vigorously mixed by sonication at 25 W (Branson 250W sonifier; Danbury, CT, U.S.A.) for 2 min, and incubated at 37 °C for 1 hour. To determine the amount of vitamin solubilized in structures allowing their subsequent absorption by enterocytes (bioaccessible fraction), i.e. micelles and possibly small lipid vesicles, whose size is smaller than that of mucus pores [START_REF] Cone | Barrier properties of mucus[END_REF], the solutions were filtered through cellulose ester membranes (0.22 µm) (Millipore), according to [START_REF] Tyssandier | Processing of vegetable-borne carotenoids in the human stomach and duodenum[END_REF]. The resulting optically clear solution was stored at -20 °C until vitamin extraction and HPLC analysis. D 3 and RP concentrations were measured by HPLC before and after filtration. For surface tension measurements and cryoTEM experiments, the mixed micelle systems were not filtered. Self-micellarization of D 3 Molecular assemblies of D 3 were prepared in Tris buffer using the same protocol as for mixed micelles. D 3 was dissolved into the solvent mixture and after evaporation, the dry film was hydrated for 30 min at 37°C with taurocholate-free buffer. The suspension was then sonicated. All D 3 concentrations reported in the surface tension measurements were obtained from independent micellarization experiments -not from the dilution of one concentrated D 3 solution. Surface tension measurements Mixed micelle solutions were prepared as described above, at concentrations ranging from 5.5 nM to 55 mM, with the same proportion of components as previously mentioned. The surface tension of LDP mixtures hydrated with a taurocholate-free buffer, and that of pure taurocholate solutions were also measured at various concentrations. The solutions were poured into glass cuvettes. The aqueous surface was cleaned by suction, and the solutions were left at rest under saturated vapor pressure for 24 hours before measurements. For penetration studies, glass cuvettes with a side arm were used, allowing injection of NaTC beneath a spread LDP or vitamin monolayer. Surface tension measurements were performed by the Wilhelmy plate method, using a thermostated automatic digital tensiometer (K10 Krüss, Germany). The surface tension g was recorded continuously as a function of time until equilibrium was reached. All experiments were performed at 25 ±1°C under saturated vapor pressure to maintain a constant level of liquid. The reported values are mean of three measurements. The experimental uncertainty was estimated to be 0.2 mN/m. Surface pressure (π) values were deduced from the relationship π = γ 0 -γ, with γ 0 the surface tension of the subphase and γ the surface tension in the presence of a film. Surface pressure measurements Surface pressure-area π-A isotherms of the LDP and LDP-vitamin mixtures were obtained using a thermostated Langmuir film trough (775.75 cm 2 , Biolin Scientific, Finland) enclosed into a Plexiglas box (Essaid et al. 2016). 
Solutions of lipids in a chloroform/methanol (9:1, v/v) mixture were spread onto a clean buffer subphase. Monolayers were left at rest for 20 minutes to allow complete evaporation of the solvents. They were then compressed at low speed (6.5 Å²·molecule⁻¹·min⁻¹) to minimize the occurrence of metastable phases. The experimental uncertainty was estimated to be 0.1 mN/m. All experiments were run at 25 ±1°C. Mean isotherms were deduced from at least three compression isotherms. The surface compressional moduli K of monolayers were calculated using Eq. 1:

K = -A (dπ/dA)_T    (Eq. 1)

Excess free energies of mixing were calculated according to Eq. 2:

ΔG_EXC = ∫₀^π [A₁₂ - (X_L·A_L + X_VIT·A_VIT)] dπ    (Eq. 2)

with A₁₂ the mean molecular area in the mixed monolayer, X_L and A_L the molar fraction and molecular area of lipid molecules, and X_VIT and A_VIT the molar fraction and molecular area of vitamin molecules, respectively [START_REF] Ambike | Interaction of self-assembled squalenoyl gemcitabine nanoparticles with phospholipid-cholesterol monolayers mimicking a biomembrane[END_REF]. Cryo-TEM analysis A drop (5 µL) of LDP-NaTC micellar solution (15 mM), LDP-NaTC-D 3 (3:1 molar ratio) or pure D 3 "micellar suspension" (5 mM, theoretical concentration) was deposited onto a perforated carbon-coated, copper grid (TedPella, Inc); the excess of liquid was blotted with a filter paper. The grid was immediately plunged into a liquid ethane bath cooled with liquid nitrogen (-180 °C) and then mounted on a cryo holder [START_REF] Da Cunha | Overview of chemical imaging methods to address biological questions[END_REF]. Transmission electron microscopy (TEM) measurements were performed just after grid preparation using a JEOL 2200FS (JEOL USA, Inc., Peabody, MA, U.S.A.) working under an acceleration voltage of 200 kV (Institut Curie). Electron micrographs were recorded by a CCD camera (Gatan, Evry, France). 2.7. Vitamin analysis 2.7.1. Vitamin extraction. D 3 and RP were extracted from 500 µL aqueous samples using the following method [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]: retinyl acetate was used as an internal standard and was added to the samples in 500 µL ethanol. The mixture was extracted twice with two volumes of hexane. The hexane phases obtained after centrifugation (1200 × g, 10 min, 10°C) were evaporated to dryness under nitrogen, and the dried extract was dissolved in 200 µL of acetonitrile/dichloromethane/methanol (70:20:10, v/v/v). A volume of 150 µL was used for HPLC analysis. Extraction efficiency was between 75 and 100%. Samples whose extraction efficiency was below 75% were re-extracted or excluded from the analysis. 2.7.2. Vitamin HPLC analysis. D 3 , RP and retinyl acetate were separated using a 250 × 4.6-mm RP C18, 5-µm Zorbax Eclipse XDB column (Agilent Technologies, Les Ulis, France) and a guard column. The mobile phase was a mixture of acetonitrile/dichloromethane/methanol (70:20:10, v/v/v). Flow rate was 1.8 mL/min and the column was kept at a constant temperature (35 °C). The HPLC system comprised a Dionex separation module (P680 HPLC pump and ASI-100 automated sample injector, Dionex, Aix-en-Provence, France). D 3 was detected at 265 nm while retinyl esters were detected at 325 nm and were identified by retention time compared with pure (>95%) standards.
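As a concrete illustration of Eqs. 1 and 2 above, the following minimal sketch shows how the compressibility modulus K and the excess free energy of mixing can be computed from digitized π-A isotherm data. The isotherm arrays, the 7:3 molar ratio default and the numbers below are placeholders, not the measured LDP or vitamin data.

```python
import numpy as np

def compressibility_modulus(area, pi):
    """Eq. 1: K = -A * (dpi/dA)_T, evaluated point-wise along a pi-A isotherm.
    `area` is in A^2/molecule, `pi` in mN/m; both follow the compression order."""
    dpi_dA = np.gradient(pi, area)   # numerical derivative dpi/dA
    return -area * dpi_dA            # K in mN/m

def excess_free_energy(pi_grid, a_mix, a_lip, a_vit, x_lip=0.7, x_vit=0.3):
    """Eq. 2: Delta G_EXC = integral from 0 to pi of [A12 - (XL*AL + XVIT*AVIT)] dpi,
    with all molecular areas sampled on the same increasing surface-pressure grid."""
    excess_area = a_mix - (x_lip * a_lip + x_vit * a_vit)
    # trapezoidal integration up to the largest pressure in pi_grid
    return float(np.sum(np.diff(pi_grid) * (excess_area[1:] + excess_area[:-1]) / 2))

# toy isotherm (placeholder values only)
area = np.linspace(120.0, 30.0, 200)        # compression: large to small areas
pi = 45.0 * np.exp(-(area - 30.0) / 25.0)   # arbitrary smooth isotherm, mN/m
print("max K =", compressibility_modulus(area, pi).max(), "mN/m")
```

A negative integrated value from the second function corresponds to favorable mixing, which is how the LDP-D 3 and LDP-RP monolayers are compared later in the text.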
Quantification was performed using Chromeleon software (version 6.50, SP4 Build 1000) comparing the peak area with standard reference curves. All solvents used were HPLC grade. Statistical analysis Results are expressed as means ± standard deviation. Statistical analyses were performed using Statview software version 5.0 (SAS Institute, Cary, NC, U.S.A.). Means were compared by the non-parametric Kruskal-Wallis test, followed by the Mann-Whitney U test as a post hoc test for pairwise comparisons, when the mean difference using the Kruskal-Wallis test was found to be significant (P<0.05). For all tests, the bilateral alpha risk was α = 0.05. Results Solubilization of D 3 and RP in aqueous solutions rich in mixed micelles D 3 and RP at various concentrations were mixed with micelle components (LDP-NaTC). D 3 and RP concentrations were measured by HPLC before and after filtration of aggregates with a diameter smaller than 0.22 µm (Figure 2). D 3 and RP solubilization in the solution that contained mixed micelles followed different curves: D 3 solubilization was linear (R²=0.98, regression slope = 0.71) and significantly higher than that of RP, which reached a plateau with a maximum concentration around 125 µM. The morphology of the LDP-NaTC and LDP-NaTC-D 3 samples before filtration was analyzed by cryoTEM. In Figure 3, micelles are too small to be distinguished from ice. At high LDP-NaTC concentration (15 mM), small and large unilamellar vesicles (a), nano-fibers (b) and aggregates (c) are observed (Figure 3A). Both nano-fibers and aggregates seem to emerge from the vesicles. In the presence of D 3 at low micelle and D 3 concentration (5 mM LDP-NaTC + 1.7 mM D 3 ) (Figures 3B and 3C), the morphology of the nano-assemblies is greatly modified. Vesicles are smaller and deformed, with irregular and more angular shapes (a'). They are also more abundant. A difference in contrast in the bilayers is observed, which would account for leaflets with asymmetric composition. Some of them coalesce into larger structures, extending along the walls of the grid (d). Fragments and sheets are also observed (Figure 3B). They exhibit irregular contours and unidentified membrane organization. The bilayer structure is not clearly observable. New organized assemblies appear, such as disk-like nano-assemblies (e) and emulsion-like droplets (f). At higher concentration (15 mM LDP-NaTC + 5 mM D 3 in Figure 3D), the emulsion-like droplets and vesicles with unidentified membrane structure (g) are enlarged. They coexist with small deformed vesicles. Compression properties of LDP components, the LDP mixture and the vitamins To better understand the mechanism of D 3 and RP interaction with LDP-NaTC micelles, we focused on the interfacial behavior of the various components of the system. We first determined the interfacial behavior of the LDP components and their mixture in proportions similar to those in the micellar solution, by surface pressure measurements. The π-A isotherms are plotted in Figure 4A. Based on the calculated compressibility modulus values, the lipid monolayers can be classified into poorly organized (K < 100 mN/m, for lyso-PC, monoolein, and oleic acid), liquid condensed (100 < K < 250 mN/m, for POPC and the LDP mixture) and highly rigid monolayers (K > 250 mN/m, for cholesterol) [START_REF] Davies | Interfacial phenomena 2nd ed[END_REF]. The interfacial behavior of the two studied vitamins is illustrated in Figure 4B.
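To make the statistical treatment and the solubilization fit described above concrete, here is a minimal sketch using scipy (Kruskal-Wallis followed by a Mann-Whitney U post hoc test, and a linear regression of the D 3 solubilization curve). The concentration arrays are invented placeholders, not the measured values.

```python
import numpy as np
from scipy import stats

# placeholder triplicate solubilized concentrations (µM); not the measured values
groups = {"D3": np.array([310.0, 325.0, 298.0]),
          "RP": np.array([118.0, 124.0, 131.0])}

# non-parametric comparison: Kruskal-Wallis, then Mann-Whitney U as post hoc test
h_stat, p_global = stats.kruskal(*groups.values())
if p_global < 0.05:
    u_stat, p_pair = stats.mannwhitneyu(groups["D3"], groups["RP"], alternative="two-sided")
    print(f"D3 vs RP: U = {u_stat:.1f}, p = {p_pair:.3f}")

# linear solubilization curve for D3 (amount added vs amount recovered after filtration)
added = np.array([0.1, 0.3, 0.6, 1.0, 1.5])                              # mM, placeholder
recovered = 0.71 * added + np.array([0.01, -0.02, 0.015, -0.01, 0.02])   # placeholder noise
fit = stats.linregress(added, recovered)
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue**2:.2f}")
```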
D 3 shows a similar compression profile to that of the LDP mixture, with comparable surface area and surface pressure at collapse (A c = 35 Å², π c = 38 mN/m) but a much higher rigidity, as inferred from the comparison of their maximal K values (187.4 mN/m and 115.4 mN/m for D 3 and LDP, respectively). RP exhibits much larger surface areas and lower surface pressures than D 3 . The collapse of its monolayer is not clearly identified from the isotherms, and is estimated to occur at π c = 16.2 mN/m (A c = 56.0 Å²), as deduced from the slope change in the π-A plot. Self-assembling properties of D 3 in an aqueous solution Since D 3 showed an interfacial behavior similar to that of the lipid mixture, and since it could be solubilized at very high concentrations in an aqueous phase rich in mixed micelles (as shown in Figure 2), its self-assembling properties were more specifically investigated. Dried D 3 films were hydrated with the sodium taurocholate-free buffer. Surface tension measurements at various D 3 concentrations revealed that the vitamin could adsorb at the air/solution interface, and significantly lower the surface tension of the buffer to γ cmc = 30.6 mN/m. A critical micellar concentration (cmc = 0.45 µM) could be deduced from the γ-log C relationships and HPLC assays. Concentrated D 3 samples were analyzed by cryo-TEM (Figures 3E and 3F). Different D 3 self-assemblies were observed, including circular nano-assemblies (h) coexisting with nano-fibers (i), and large aggregates (j) with unidentified structure. The in-depth analysis of the circular nano-assemblies allowed us to conclude that they were disk-like nano-assemblies, rather than nanoparticles. Interaction of LDP with NaTC To better understand how the two studied vitamins interacted with the mixed micelles, we compared the interfacial behaviors of the pure NaTC solutions, LDP mixtures hydrated by NaTC-free buffer, and LDP mixtures hydrated by the NaTC buffered solutions (full mixed micelle composition). The LDP mixture composition was maintained constant, while its concentration in the aqueous medium was increased. The concentration of NaTC in the aqueous phase was also calculated so that the relative proportion of the various components (LDP and NaTC) remained unchanged in all experiments. From the results plotted in Figure 5A, the critical micellar concentration (cmc) of the LDP-NaTC mixture was 0.122 mM (γ cmc = 29.0 mN/m), a concentration 50.8 times lower than the concentration used for vitamin solubilization. The cmc values for the LDP mixture and the pure NaTC solutions were 0.025 mM (γ cmc = 24.0 mN/m) and 1.5 mM (γ cmc = 45.3 mN/m), respectively. Experiments modeling the insertion of NaTC into the LDP film during rehydration by the buffer suggested that only a few NaTC molecules could penetrate into the condensed LDP film (initial surface pressure: π i = 28 mN/m) and that the LDP-NaTC mixed film was not stable, as shown by the decrease in surface pressure over time (Figure 5B). Interaction of D 3 and RP with NaTC The surface tension of the mixed NaTC-LDP micelle solutions was only barely affected by the addition of 0.1 or 1 mM D 3 or RP: the surface tension values increased by no more than 2.8 mN/m. Conversely, both vitamins strongly affected the interfacial behavior of the NaTC micellar solution, as inferred from the significant surface tension lowering observed (-7.0 and -8.1 mN/m for RP and D 3 , respectively).
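The cmc values quoted above are read from the break point of the γ versus log C curve. The sketch below shows one common way of estimating that break point, by intersecting two straight-line fits below and above the presumed cmc; the data points and the split index are illustrative assumptions, not the measured LDP-NaTC data.

```python
import numpy as np

def cmc_from_gamma_logc(conc_mM, gamma, split_index):
    """Estimate the cmc as the intersection of two linear fits of gamma vs log10(C):
    one on the descending branch (below the presumed cmc), one on the plateau above it.
    `split_index` separates the two branches and is chosen by inspecting the curve."""
    logc = np.log10(conc_mM)
    a1, b1 = np.polyfit(logc[:split_index], gamma[:split_index], 1)   # descending branch
    a2, b2 = np.polyfit(logc[split_index:], gamma[split_index:], 1)   # plateau
    log_cmc = (b2 - b1) / (a1 - a2)                                   # intersection point
    return 10.0 ** log_cmc, a2 * log_cmc + b2                         # cmc (mM), gamma_cmc

# illustrative points only
conc = np.array([0.001, 0.005, 0.02, 0.06, 0.122, 0.5, 2.0, 10.0])    # mM
gamma = np.array([55.0, 48.0, 40.0, 33.0, 29.2, 29.0, 28.9, 28.8])    # mN/m
cmc, gamma_cmc = cmc_from_gamma_logc(conc, gamma, split_index=5)
print(f"cmc ~ {cmc:.3f} mM, gamma_cmc ~ {gamma_cmc:.1f} mN/m")
```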
Interaction of D 3 and RP with lipid digestion products The interaction between the vitamins and LDP molecules following their insertion into LDP micelles was modeled by compression of LDP/D 3 and LDP/RP mixtures at a 7:3 molar ratio. This ratio was chosen arbitrarily, to model a system in which LDP was in excess. The π-A isotherms are presented in Figures 6A and6B. They show that both vitamins modified the isotherm profile of the lipid mixture, however, not in the same way. In the LDP/D 3 mixture, the surface pressure and molecular area at collapse were controlled by LDP. For LDP/RP, despite the high content in LDP, the interfacial behavior was clearly controlled by RP. From the isotherms in Figures 6A and6B, compressibility moduli and excess free energies of mixing were calculated and compared (Figures 6C, and6D). D 3 increased the rigidity of LDP monolayers, whereas RP disorganized them. The negative ∆G EXC values calculated for the LDP-D 3 monolayers at all surface pressures account for the good mixing properties of D 3 and the lipids in all molecular packing conditions. Conversely for RP, the positive and increasing ∆G EXC values with the surface pressure demonstrate that its interaction with the lipids was unfavorable. Discussion The objective of this study was to compare the solubility of RP and D 3 in aqueous solutions containing mixed micelles, and to decipher the molecular interactions that explain their different extent of solubilization. Our first experiment revealed that the two vitamins exhibit very different solubilities in an aqueous medium rich in mixed micelles. Furthermore, the solubility of D 3 was so high that we did not observe any limit, even when D 3 was introduced at a concentration > 1mM in the aqueous medium. To our knowledge, this is the first time that such a difference is reported. Cryo-TEM pictures showed that D 3 dramatically altered the organization of the various components of the mixed micelles. The spherical vesicles were deformed with angular shapes. The nano-fibers initiating from the vesicles were no longer observed. Large irregular in shape vesicle and sheets, disk-like nano-assemblies and emulsionlike droplets appeared in LDP-NaTC-D 3 mixtures, only. The existence of so many different assemblies would account for a different interaction of D 3 with the various components of mixed micelles, and for a reorganization of the components. D 3 could insert in the bilayer of vesicles and deform them, but also form emulsion-like droplets with fatty acids and monoglyceride. It is noteworthy that these emulsion-like droplets were not observed in pure D 3 samples, nor mixed micelles. Since previous studies have shown that both bile salts and some mixed micelle lipids, e.g. fatty acids and phospholipids, can modulate the solubility of fatsoluble vitamins in these vehicles [START_REF] Yang | Vitamin E and vitamin E acetate solubilization in mixed micelles: physicochemical basis of bioaccessibility[END_REF], we decided to study the interactions of these two vitamins with either bile salts or micelle lipids to assess the specific role of each component on vitamin solubility in mixed micelles. The characteristics of pure POPC, Lyso-PC, monoolein, and cholesterol isotherms were in agreement with values published in the literature [START_REF] Pezron | Monoglyceride Surface-Films -Stability and Interlayer Interactions[END_REF]Flasinsky et al. 
2014;[START_REF] Huynh | Structural properties of POPC monolayers under lateral compression: computer simulations analysis[END_REF]. For oleic acid, the surface pressure at collapse was higher (π c = 37 mN/m) and the corresponding molecular area (A c = 26 Å 2 ) smaller than those previously published [START_REF] Tomoaia-Cotisel | Insoluble mixed monolayers[END_REF], likely due to the pH of the buffer solution (pH 6) and the presence of calcium. The interfacial properties of D 3 were close to those deduced from the isotherm published by [START_REF] Meredith | The Supramolecular Structure of Vitamin-D3 in Water[END_REF] for a D 3 monolayer spread from a benzene solution onto a pure water subphase. The molecular areas at collapse are almost identical in the two studies (about 36 Å 2 ), but the surface pressures differ (30 mN/m in Meredith and coworkers' study, and 38 mN/m in ours). Compressibility modulus values show that D 3 molecules form monolayers with higher molecular order than the LDP mixture, which suggests that they might easily insert into LDP domains. As could be expected from its chemical structure, RP exhibited a completely different interfacial behavior compared to D 3 and the LDP, even to lyso-PC which formed the most expanded monolayers of the series, and displayed the lowest collapse surface pressure. The anomalous isotherm profile of lyso-PC has been attributed to monolayer instability and progressive solubilization of molecules into the aqueous phase [START_REF] Heffner | Thermodynamic and kinetic investigations of the release of oxidized phospholipids from lipid membranes and its effect on vascular integrity[END_REF]. The molecular areas and surface pressures for RP have been compared to those measured by [START_REF] Asai | Formation and stability of the dispersed particles composed of retinyl palmitate and phosphatidylcholine[END_REF] for RP monolayers spread from benzene solutions at 25°C onto a water subphase. Their values are much lower than ours, accounting for even more poorly organized monolayers. The low collapse surface pressure could correspond to molecules partially lying onto the aqueous surface, possibly forming multilayers above 16 mN/m as inferred from the continuous increase in surface pressure above the change in slope of the isotherm. The maximal compressibility modulus confirms the poor monolayer order. The significant differences in RP surface pressure and surface area compared to the LDP mixture might compromise its insertion and stability into LDP domains. The dogma in nutrition is that fat-soluble vitamins need to be solubilized in bile salt micelles to be transported to the enterocyte and then absorbed. It is also well known that although NaTC primary micelles can be formed at 2-3 mM with a small aggregation number, concentrations as high as 10-12 mM are usually necessary for efficient lipid solubilization in the intestine [START_REF] Baskin | Bile salt-phospholipid aggregation at submicellar concentrations[END_REF]. Due to their chemical structure bile salts have a facial arrangement of polar and non-polar domains (Madency & Egelhaaf 2010). Their selfassembling (dimers, multimers, micelles) is a complex process involving hydrophobic interaction and cooperative hydrogen bonding, highly dependent on the medium conditions, and that is not completely elucidated. 
The cmc value for sodium taurocholate in the studied buffer was particularly low compared to some of those reported in the literature for NaTC in water or sodium chloride solutions (3-12 mM) [START_REF] Kratohvil | Concentration-dependent aggregation patterns of conjugated bile-salts in aqueous sodiumchloride solutions -a comparison between sodium taurodeoxycholate and sodium taurocholate[END_REF][START_REF] Meyerhoffer | Critical Micelle Concentration Behavior of Sodium Taurocholate in Water[END_REF][START_REF] Madenci | Self-assembly in aqueous bile salt solutions[END_REF]. At concentrations as high as 10-12 mM, NaTC molecules form elongated cylindrical "secondary" micelles [START_REF] Madenci | Self-assembly in aqueous bile salt solutions[END_REF][START_REF] Bottari | Structure and composition of sodium taurocholate micellar aggregates[END_REF]. The cryoTEM analysis did not allow us to distinguish micelles from the ice. In our solubilization experiment, the concentration of NaTC did not exceed 5 mM. Nevertheless, the micelles proved to be very efficient with regard to vitamin solubilization. When bile salts and lipids are simultaneously present in the same environment, they form mixed micelles [START_REF] Hernell | Physical-chemical behavior of dietary and biliary lipids during intestinal digestion and absorption. 2. Phase analysis and aggregation states of luminal lipids during duodenal fat digestion in healthy adult human beings[END_REF]. Bile salts solubilize phospholipid vesicles and transform into cylindrical micelles [START_REF] Cheng | Mixtures of lecithin and bile salt can form highly viscous wormlike micellar solutions in water[END_REF]. [START_REF] Walter | Intermediate structures in the cholate-phosphatidylcholine vesicle-micelle transition[END_REF] suggested that sodium cholate cylindrical micelles evolved from the edge of lecithin bilayer sheets. Most published studies were performed at high phospholipid/bile salt ratios. In our system, the concentration of the phospholipids was very low compared to that of NaTC. We observed, however, the presence of vesicles and nano-fiber structures emerging from them. In their cryoTEM analysis, [START_REF] Fatouros | Colloidal structures in media simulating intestinal fed state conditions with and without lypolysis products[END_REF] compared bile salt/phospholipid mixtures to bile salt/phospholipid/fatty acid/monoglyceride ones at concentrations closer to ours. They observed only micelles in bile salt/phospholipid mixtures. However, in the presence of oleic acid and monoolein, vesicles and bilayer sheets were formed. This would account for a reorganization of the lipids and bile salts in the presence of the fatty acid and the monoglyceride. We therefore decided to study the interactions between bile salts and LDP. The results obtained show that the surface tension, the effective surface tension lowering concentration, and cmc values were very much influenced by LDP. The almost parallel slopes of the Gibbs adsorption isotherms for pure NaTC and mixed NaTC-LDP suggest that LDP molecules inserted into NaTC domains, rather than the opposite. This was confirmed by penetration studies, which showed that NaTC (0.1 mM) could hardly penetrate into a compact LDP film. So, during lipid hydration, LDP molecules could insert into NaTC domains. The presence of LDP molecules improved NaTC micellarization. After having determined the interfacial properties of each micelle component and measured the interactions between NaTC and LDP, we assessed the ability of D 3 and RP to solubilize in either NaTC or NaTC-LDP micelles. Surface tension values clearly show that both vitamins could insert in between NaTC molecules adsorbed at the interface, and affected the surface tension in the same way. The interfacial behavior of the molecules being representative of their behavior in the bulk, it is reasonable to think that both D 3 and RP can be solubilized into pure NaTC micelles. For the mixed NaTC-LDP micelles, the change in surface tension was too limited to allow conclusions, but the solubilization experiments clearly indicated that the two vitamins were not solubilized to the same extent. Solubilization experiments and the analysis of vitamin-NaTC interactions cannot explain why the LDP-NaTC mixed micelles solubilize D 3 better than RP. Therefore, we studied the interfacial behavior of the LDP mixture in the presence of each vitamin, to determine the extent of their interaction with the lipids. The results obtained showed that D 3 penetrated into LDP domains and remained in the lipid monolayer throughout compression. At large molecular areas, the π-A isotherm profile of the mixture followed that of the LDP isotherm with a slight condensation due to the presence of D 3 molecules.
Above 10 mN/m, an enlargement of the molecular area at collapse and a change in the slope of the mixed monolayer was observed. However, the surface pressure at collapse was not modified, and the shape of the isotherm accounted for the insertion of D 3 molecules into LDP domains. This was confirmed by the surface compressional moduli. D 3 interacted with lipid molecules in such manner that it increased monolayer rigidity (K max = 134.8 mN/m), without changing the general organization of the LDP monolayer. The LDP-D 3 mixed monolayer thus appeared more structured than the LDP one. D 3 behavior resembles that of cholesterol in phospholipid monolayers, however without the condensing effect of the sterol [START_REF] Ambike | Interaction of self-assembled squalenoyl gemcitabine nanoparticles with phospholipid-cholesterol monolayers mimicking a biomembrane[END_REF]. The higher rigidity of LDP monolayer in the presence of D 3 could be related to the cryo-TEM pictures showing the deformed, more angular vesicles formed with LDP-NaTC-D 3 . The angular shape would account for vesicles with rigid bilayers [START_REF] Kuntsche | Cryogenic transmission electron microscopy (cryo-TEM) for studying the morphology of collidal drug delivery systems[END_REF]. For RP, the shape of the isotherms show evidence that lipid molecules penetrated in RP domains, rather than the opposite. Indeed, the π-A isotherm profile of the LDP-RP monolayer is similar to that of RP alone. The insertion of lipid molecules into RP domains is also attested by the increase in the collapse surface pressure from 16 to 22 mN/m. Partial collapse is confirmed by the decrease in the compressibility modulus above 22 mN/m. Thus, RP led to a destructuration of the LDP mixed monolayer and when the surface density of the monolayer increased, the vitamin was partially squeezed out from the interface. The calculated ∆G EXC values for both systems suggest that insertion of D 3 into LDP domains was controlled by favorable (attractive) interactions, whereas mixing of RP with LDP was limited due to unfavorable (repulsive) interactions, even at low surface pressures. According to [START_REF] Asai | Formation and stability of the dispersed particles composed of retinyl palmitate and phosphatidylcholine[END_REF], RP can be partially solubilized in the bilayer of phospholipids (up to 3 mol%), and the excess is separated from the phospholipids, and dispersed as emulsion droplets stabilized by a phospholipid monolayer. On the whole, the information obtained regarding the interactions of the two vitamins with NaTC and LDP explain why D 3 is more soluble than RP in an aqueous medium rich in mixed micelles. Both vitamins can insert into pure NaTC domains, but only D 3 can also insert into the LDP domains in LDP-enriched NaTC micelles. Furthermore, the results obtained suggest that this is not the only explanation. Indeed, since it has been suggested that D 3 could form cylindrical micelle-like aggregates [START_REF] Meredith | The Supramolecular Structure of Vitamin-D3 in Water[END_REF], we hypothesize that the very high solubility of D 3 in the aqueous medium rich in mixed micelles was partly due to the solubilization of a fraction of D 3 as self-aggregates. Indeed, we observed that D 3 at concentrations higher than 0.45 µM, could self-assemble into various structures including nano-fibers. To our knowledge, no such structures, especially nanofibers, have been reported for D 3 so far. 
Rod diameter was smaller than 10 nm, much smaller than for the rods formed by lithocholic acid, for example [START_REF] Terech | Self-assembled monodisperse steroid nanotubes in water[END_REF]. They were similar to those observed in highly concentrated LDP-NaTC mixtures, which seemed to form via disorganization of lipid vesicles. Disk-like assemblies and aggregates with unidentified structure, also observed in concentrated D 3 samples, could be related to these nano-fibers. In our solubilization experiments, which were performed at much higher D 3 concentrations, both insertion of D 3 molecules into NaTC and LDP domains, and D 3 self-assembling could occur, depending on the kinetics of insertion of D 3 into the NaTC-LDP mixed micelles. Conclusion The solubilization of a hydrophobic compound in bile salt-lipid micelles is dependent upon its chemical structure and its ability to interact with the mixed micelle components. Most hydrophobic compounds are expected to insert into the bile salt-lipid micelles. The extent of the solubilizing effect is, however, much more difficult to predict. As shown by others before us, mixed micelle components form a heterogeneous system with various molecular assemblies differing in shape and composition. The conditions of the medium (pH, ionic strength and temperature) affect the formation of these molecular assemblies, although we did not study this effect in our system. Our results showed that D 3 displayed a higher solubility in mixed micelle solutions than RP. This difference was attributed to the different abilities of the two vitamins to insert in between micelle components, but it was also explained by the propensity of D 3 , contrary to RP, to self-associate into structures that are readily soluble in the aqueous phase. It is difficult to predict the propensity of a compound to self-associate. We propose here a methodology that was effective in distinguishing between two solubilizing behaviors, and could be easily used to predict the solubilization efficiency of other hydrophobic compounds. Whether the D 3 self-assemblies are available for absorption by the intestinal cells needs further studies. Figure 1: Chemical structures of D 3 and RP.
Figure 2: Solubilization of D 3 and RP in aqueous solutions rich in mixed micelles ... Figure 3: Cryo-TEM morphology of (A) 15 mM mixed LDP-NaTC micelles, (B) and (C) 5 ... Figure 4: Mean compression isotherms for (A) the pure micelle components and the LDP ... Figure 5: (A) Adsorption isotherms for LDP hydrated in NaTC-free buffer (○), LDP hydrated ... Figure 6: π-A isotherms (A,B), compressibility moduli (C) and excess free energies (D) ... Acknowledgements: The authors are grateful to Dr Sylvain Trépout (Institut Curie, Orsay, France) for his contribution to cryoTEM experiments and the fruitful discussions. Funding: This study was funded by Adisseo France SAS. Conflicts of interest: DP, ED and VLD are employed by Adisseo. Adisseo markets formulated vitamins for animal nutrition.
42,446
[ "764461", "1293290", "749069", "938088", "18561" ]
[ "180118", "527021", "251210", "251210", "251210", "440261", "414821", "414821", "180118", "527021" ]
01757936
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757936/file/CK2017_Nayak_Caro_Wenger_HAL.pdf
Abhilash Nayak email: abhilash.nayak@irccyn.ec-nantes.fr Stéphane Caro email: stephane.caro@ls2n.fr Philippe Wenger email: philippe.wenger@ls2n.fr Local and Full-cycle Mobility Analysis of a 3-RPS-3-SPR Series-Parallel Manipulator Keywords: series-parallel manipulator, mobility analysis, Jacobian matrix, screw theory, Hilbert dimension The local mobility of this manipulator was previously assumed to be six without any proof, and shown to be five in [4] and [3] with an erroneous proof. Screw theory is used to derive the kinematic Jacobian matrix and the twist system of the mechanism, leading to the determination of its local mobility. It turns out that this local mobility is found to be six in several arbitrary configurations, which indicates a full-cycle mobility equal to six. This full-cycle mobility is confirmed by calculating the Hilbert dimension of the ideal made up of the set of constraint equations. It is also shown that the mobility drops to five in some particular configurations, referred to as impossible output singularities. Introduction A series-parallel manipulator (S-PM) is composed of parallel manipulators mounted in series and has merits of both serial and parallel manipulators. The 3-RPS-3-SPR S-PM is such a mechanism with the proximal module being composed of the 3-RPS parallel mechanism and the distal module being composed of the 3-SPR PM. Hu et al. [START_REF] Hu | Analyses of inverse kinematics, statics and workspace of a novel 3-RPS-3-SPR serial-parallel manipulator[END_REF] analyzed the workspace of this manipulator. Hu formulated the Jacobian matrix for S-PMs as a function of the Jacobians of the individual parallel modules [START_REF] Hu | Formulation of unified Jacobian for serial-parallel manipulators[END_REF]. In the former paper, it was assumed that the number of local dof of the 3-RPS-3-SPR mechanism is equal to six, whereas Gallardo et al. found out that it is equal to five [START_REF] Gallardo-Alvarado | Mobility and velocity analysis of a limited-dof series-parallel manipulator[END_REF][START_REF] Gallardo-Alvarado | Kinematics of a series-parallel manipulator with constrained rotations by means of the theory of screws[END_REF]. As a matter of fact, it is not straightforward to find the local mobility of this S-PM due to the third-order twist systems of each individual module. It is established that the 3-RPS PM performs a translation and two non-pure rotations about non-fixed axes, which induce two translational parasitic motions [START_REF] Hunt | Structural kinematics of in-parallel-actuated robot-arms[END_REF]. The 3-SPR PM also has the same type of dof [START_REF] Nayak | Comparison of 3-RPS and 3-SPR parallel manipulators based on their maximum inscribed singularity-free circle[END_REF]. In addition, these mechanisms are known as zero-torsion mechanisms. When they are mounted in series, the axis about which the torsional motion is constrained is different for a general configuration of the S-PM. Gallardo et al. failed to consider this fact and considered only those special configurations in which the axes coincide, resulting in a mobility equal to five. This paper aims at clarifying that the full-cycle mobility of the 3-RPS-3-SPR S-PM is equal to six with the help of screw theory and some algebraic geometry concepts. Although the considered S-PM has double spherical joints and two sets of three coplanar revolute joint axes, the proposed methodology to calculate the mobility of the manipulator at hand is general and can be applied to any series-parallel manipulator.
The paper is organized as follows: The manipulator under study is described in Section 2. The kinematic Jacobian matrix of a general S-PM with multiple modules is expressed in vector form in Section 3. Section 4 presents some configurations of the 3-RPS-3-SPR S-PM with the corresponding local mobility. Section 5 deals with the full-cycle mobility of the 3-RPS-3-SPR S-PM. Manipulator under study The architecture of the 3-RPS-3-SPR S-PM under study is shown in Fig. 1. It consists of a proximal 3-RPS PM module and a distal 3-SPR PM module. The 3-RPS PM is composed of three legs each containing a revolute, a prismatic and a spherical joint mounted in series, while the legs of the 3-SPR PM have these lower pairs in reverse order. Thus, the three equilateral triangular shaped platforms are the fixed base, the coupler and the end effector, coloured brown, green and blue, respectively. The vertices of these platforms are named A i , B i and C i , i = 0, 1, 2. Hereafter, the subscript 0 corresponds to the fixed base, 1 to the coupler platform and 2 to the end-effector. A coordinate frame F i is attached to each platform such that its origin O i lies at its circumcenter. The coordinate axis x i points towards the vertex A i , y i is parallel to the opposite side B i C i and, by the right-hand rule, z i is normal to the platform plane. Besides, the circum-radius of the i-th platform is denoted as h i . p i and q i , i = 1, 2, 3, are unit vectors along the prismatic joints while u i and v i , i = 1, 2, 3, are unit vectors along the revolute joint axes. Fig. 1: A 3-RPS-3-SPR series-parallel manipulator. Fig. 2: n parallel mechanisms (named modules) arranged in series. Kinematic modeling of series-parallel manipulators Keeping in mind that the two parallel mechanisms are mounted in series, the end effector twist (angular velocity vector of a body and linear velocity vector of a point on the body) for the 3-RPS-3-SPR S-PM with respect to the base can be represented as follows:

$${}^0\mathbf{t}_{2/0} = {}^0\mathbf{t}^{PROX}_{2/0} + {}^0\mathbf{t}^{DIST}_{2/1} \;\Longrightarrow\; \begin{bmatrix} {}^0\boldsymbol{\omega}_{2/0} \\ {}^0\mathbf{v}_{O_2/0} \end{bmatrix} = \begin{bmatrix} {}^0\boldsymbol{\omega}^{PROX}_{2/0} \\ {}^0\mathbf{v}^{PROX}_{O_2/0} \end{bmatrix} + \begin{bmatrix} {}^0\boldsymbol{\omega}^{DIST}_{2/1} \\ {}^0\mathbf{v}^{DIST}_{O_2/1} \end{bmatrix} \qquad (1)$$

where 0 t PROX 2/0 is the end effector twist with respect to the base (2/0) due to the proximal module motion and 0 t DIST 2/1 is the end effector twist with respect to the coupler (2/1) due to the distal module motion. These twists are expressed in the base frame F 0 , hence the left superscript. The terms on the right hand side of Eq. (1) are not known, but can be expressed in terms of the known twists using screw transformations. To do so, the known twists are first noted down.
If the proximal and distal modules are considered individually, the twist of their respective moving platforms with respect to their fixed base will be expressed as a function of the actuated joint velocities:

$$\mathbf{A}_{PROX}\,{}^0\mathbf{t}^{PROX}_{1/0} = \mathbf{B}_{PROX}\,\dot{\boldsymbol{\rho}}_{13} \;\Longrightarrow\; \begin{bmatrix} ({}^0\mathbf{r}_{O_1A_1} \times {}^0\mathbf{p}_1)^T & {}^0\mathbf{p}_1^T \\ ({}^0\mathbf{r}_{O_1B_1} \times {}^0\mathbf{p}_2)^T & {}^0\mathbf{p}_2^T \\ ({}^0\mathbf{r}_{O_1C_1} \times {}^0\mathbf{p}_3)^T & {}^0\mathbf{p}_3^T \\ ({}^0\mathbf{r}_{O_1A_1} \times {}^0\mathbf{u}_1)^T & {}^0\mathbf{u}_1^T \\ ({}^0\mathbf{r}_{O_1B_1} \times {}^0\mathbf{u}_2)^T & {}^0\mathbf{u}_2^T \\ ({}^0\mathbf{r}_{O_1C_1} \times {}^0\mathbf{u}_3)^T & {}^0\mathbf{u}_3^T \end{bmatrix} \begin{bmatrix} {}^0\boldsymbol{\omega}^{PROX}_{1/0} \\ {}^0\mathbf{v}^{PROX}_{O_1/0} \end{bmatrix} = \begin{bmatrix} \mathbf{I}_{3\times3} \\ \mathbf{0}_{3\times3} \end{bmatrix} \begin{bmatrix} \dot{\rho}_1 \\ \dot{\rho}_2 \\ \dot{\rho}_3 \end{bmatrix} \qquad (2)$$

$$\mathbf{A}_{DIST}\,{}^1\mathbf{t}^{DIST}_{2/1} = \mathbf{B}_{DIST}\,\dot{\boldsymbol{\rho}}_{46} \;\Longrightarrow\; \begin{bmatrix} ({}^1\mathbf{r}_{O_2A_1} \times {}^1\mathbf{q}_1)^T & {}^1\mathbf{q}_1^T \\ ({}^1\mathbf{r}_{O_2B_1} \times {}^1\mathbf{q}_2)^T & {}^1\mathbf{q}_2^T \\ ({}^1\mathbf{r}_{O_2C_1} \times {}^1\mathbf{q}_3)^T & {}^1\mathbf{q}_3^T \\ ({}^1\mathbf{r}_{O_2A_1} \times {}^1\mathbf{v}_1)^T & {}^1\mathbf{v}_1^T \\ ({}^1\mathbf{r}_{O_2B_1} \times {}^1\mathbf{v}_2)^T & {}^1\mathbf{v}_2^T \\ ({}^1\mathbf{r}_{O_2C_1} \times {}^1\mathbf{v}_3)^T & {}^1\mathbf{v}_3^T \end{bmatrix} \begin{bmatrix} {}^1\boldsymbol{\omega}^{DIST}_{2/1} \\ {}^1\mathbf{v}^{DIST}_{O_2/1} \end{bmatrix} = \begin{bmatrix} \mathbf{I}_{3\times3} \\ \mathbf{0}_{3\times3} \end{bmatrix} \begin{bmatrix} \dot{\rho}_4 \\ \dot{\rho}_5 \\ \dot{\rho}_6 \end{bmatrix} \qquad (3)$$

where 0 t PROX 1/0 is the twist of the coupler with respect to the base expressed in F 0 and 1 t DIST 2/1 is the twist of the end effector with respect to the coupler expressed in F 1 . A PROX and A DIST are called forward Jacobian matrices and they incorporate the actuation and constraint wrenches of the 3-RPS and 3-SPR PMs, respectively [START_REF] Joshi | Jacobian analysis of limited-DOF parallel manipulators[END_REF]. B PROX and B DIST are called inverse Jacobian matrices and they are the result of the reciprocal product between wrenches of the mechanism and twists of the joints for the 3-RPS and 3-SPR PMs, respectively. $\dot{\boldsymbol{\rho}}_{13} = [\dot{\rho}_1, \dot{\rho}_2, \dot{\rho}_3]^T$ and $\dot{\boldsymbol{\rho}}_{46} = [\dot{\rho}_4, \dot{\rho}_5, \dot{\rho}_6]^T$ are the prismatic joint velocities for the proximal and distal modules, respectively. k r PQ denotes the vector pointing from a point P to a point Q expressed in frame F k . Considering Eq. (1), the unknown twists 0 t PROX 2/0 and 0 t DIST 2/1 can be expressed in terms of the known twists 0 t PROX 1/0 and 1 t DIST 2/1 using the following screw transformation matrices [START_REF] Murray | A Mathematical Introduction to Robotic Manipulation[END_REF][START_REF] Binaud | The kinematic sensitivity of robotic manipulators to joint clearances[END_REF].

$$\begin{bmatrix} {}^0\boldsymbol{\omega}^{PROX}_{2/0} \\ {}^0\mathbf{v}^{PROX}_{O_2/0} \end{bmatrix} = {}^2\mathbf{Ad}_1 \begin{bmatrix} {}^0\boldsymbol{\omega}^{PROX}_{1/0} \\ {}^0\mathbf{v}^{PROX}_{O_1/0} \end{bmatrix} \qquad (4)$$

with

$${}^2\mathbf{Ad}_1 = \begin{bmatrix} \mathbf{I}_{3\times3} & \mathbf{0}_{3\times3} \\ -{}^0\hat{\mathbf{r}}_{O_1O_2} & \mathbf{I}_{3\times3} \end{bmatrix}, \qquad {}^0\hat{\mathbf{r}}_{O_1O_2} = \begin{bmatrix} 0 & -{}^0z_{O_1O_2} & {}^0y_{O_1O_2} \\ {}^0z_{O_1O_2} & 0 & -{}^0x_{O_1O_2} \\ -{}^0y_{O_1O_2} & {}^0x_{O_1O_2} & 0 \end{bmatrix}$$

2 Ad 1 is called the adjoint matrix. $^0\hat{\mathbf{r}}_{O_1O_2}$ is the cross product matrix of the vector 0 r O 1 O 2 = [ 0 x O 1 O 2 , 0 y O 1 O 2 , 0 z O 1 O 2 ], pointing from point O 1 to point O 2 expressed in frame F 0 . Similarly, for the distal module, the velocities 1 ω DIST 2/1 and 1 v DIST O 2 /1 can be transformed from frame F 1 to F 0 just by multiplying each of them by the rotation matrix 0 R 1 from frame F 0 to frame F 1 :

$$\begin{bmatrix} {}^0\boldsymbol{\omega}^{DIST}_{2/1} \\ {}^0\mathbf{v}^{DIST}_{O_2/1} \end{bmatrix} = {}^0\mathbf{R}_1 \begin{bmatrix} {}^1\boldsymbol{\omega}^{DIST}_{2/1} \\ {}^1\mathbf{v}^{DIST}_{O_2/1} \end{bmatrix} \quad \text{with} \quad {}^0\mathbf{R}_1 = \begin{bmatrix} {}^0R_1 & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & {}^0R_1 \end{bmatrix} \qquad (5)$$

0 R 1 is called the augmented rotation matrix between frames F 0 and F 1 . Consequently, from Eqs. (4) and (5),

$${}^0\mathbf{t}_{2/0} = {}^2\mathbf{Ad}_1\,{}^0\mathbf{t}^{PROX}_{1/0} + {}^0\mathbf{R}_1\,{}^1\mathbf{t}^{DIST}_{2/1} \qquad (6)$$

Note that Eq. (6) amounts to the twist equation derived in [START_REF] Hu | Formulation of unified Jacobian for serial-parallel manipulators[END_REF] whereas Gallardo et al. add the twists of individual modules directly without considering the screw transformations.
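A small numerical sketch of the two screw transformations in Eqs. (4)-(6) may help fix the notation. The rotation matrix, the offset vector and the module twists below are arbitrary placeholders (not values taken from the paper), and twists are assumed to be stored as 6-vectors [ω; v].

```python
import numpy as np

def skew(r):
    """Cross-product (skew-symmetric) matrix of a 3-vector r."""
    x, y, z = r
    return np.array([[0.0, -z,  y],
                     [z,  0.0, -x],
                     [-y,  x, 0.0]])

def adjoint(r_O1O2):
    """2Ad1 = [[I, 0], [-r^, I]]: moves the velocity reference point from O1 to O2 (Eq. 4)."""
    A = np.eye(6)
    A[3:, :3] = -skew(r_O1O2)
    return A

def augmented_rotation(R):
    """0R1 = blkdiag(R, R): re-expresses a twist given in frame F1 in frame F0 (Eq. 5)."""
    B = np.zeros((6, 6))
    B[:3, :3] = R
    B[3:, 3:] = R
    return B

# Eq. (6): 0t_{2/0} = 2Ad1 * 0t_PROX_{1/0} + 0R1 * 1t_DIST_{2/1}
t_prox_10 = np.array([0.1, 0.0, 0.2, 0.0, 0.05, 0.0])   # placeholder twist [w; v]
t_dist_21 = np.array([0.0, 0.1, 0.0, 0.02, 0.0, 0.03])  # placeholder twist [w; v]
R01 = np.eye(3)                                          # placeholder rotation F0 -> F1
r_O1O2 = np.array([0.0, 0.0, 0.5])                       # placeholder offset O1 -> O2
t_20 = adjoint(r_O1O2) @ t_prox_10 + augmented_rotation(R01) @ t_dist_21
print(t_20)
```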
It is noteworthy that Equation [START_REF] Murray | A Mathematical Introduction to Robotic Manipulation[END_REF] in [START_REF] Gallardo-Alvarado | Mobility and velocity analysis of a limited-dof series-parallel manipulator[END_REF] is incorrect, and so are any further conclusions based on this equation. Following Eqs. (2) and (3), with the assumption that the proximal and distal modules are not in a parallel singularity 1 or, in other words, that matrices A PROX and A DIST are invertible,

$${}^0\mathbf{t}_{2/0} = {}^2\mathbf{Ad}_1\,\mathbf{A}^{-1}_{PROX}\mathbf{B}_{PROX}\,\dot{\boldsymbol{\rho}}_{13} + {}^0\mathbf{R}_1\,\mathbf{A}^{-1}_{DIST}\mathbf{B}_{DIST}\,\dot{\boldsymbol{\rho}}_{46} = \begin{bmatrix} {}^2\mathbf{Ad}_1\,\mathbf{A}^{-1}_{PROX}\mathbf{B}_{PROX} & {}^0\mathbf{R}_1\,\mathbf{A}^{-1}_{DIST}\mathbf{B}_{DIST} \end{bmatrix} \begin{bmatrix} \dot{\boldsymbol{\rho}}_{13} \\ \dot{\boldsymbol{\rho}}_{46} \end{bmatrix} = \mathbf{J}_{S\text{-}PM} \begin{bmatrix} \dot{\boldsymbol{\rho}}_{13} \\ \dot{\boldsymbol{\rho}}_{46} \end{bmatrix} \qquad (7)$$

J S-PM is the kinematic Jacobian matrix of the 3-RPS-3-SPR S-PM under study. The rank of this matrix provides the local mobility of the S-PM. Equations (6), (7) and (8) can be extended to a series-parallel manipulator with n parallel mechanisms, named modules in this paper, in series as shown in Fig. 2. Thus, the twist of the end effector with respect to the fixed base expressed in frame F 0 can be expressed as follows:

$${}^0\mathbf{t}_{n/0} = \sum_{i=1}^{n} {}^0\mathbf{R}_{i-1}\,{}^n\mathbf{Ad}_i\,{}^{(i-1)}\mathbf{t}^{M_i}_{i/(i-1)} = \mathbf{J}_{6\times3n} \begin{bmatrix} \dot{\boldsymbol{\rho}}_{M_1} \\ \dot{\boldsymbol{\rho}}_{M_2} \\ \vdots \\ \dot{\boldsymbol{\rho}}_{M_n} \end{bmatrix}$$

with

$${}^0\mathbf{R}_i = \begin{bmatrix} {}^0R_i & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & {}^0R_i \end{bmatrix}, \qquad {}^n\mathbf{Ad}_i = \begin{bmatrix} \mathbf{I}_{3\times3} & \mathbf{0}_{3\times3} \\ -{}^{(i-1)}\hat{\mathbf{r}}_{O_iO_n} & \mathbf{I}_{3\times3} \end{bmatrix}$$

and

$$\mathbf{J}_{6\times3n} = \begin{bmatrix} {}^n\mathbf{Ad}_1\,\mathbf{A}^{-1}_{M_1}\mathbf{B}_{M_1} & {}^0\mathbf{R}_1\,{}^n\mathbf{Ad}_2\,\mathbf{A}^{-1}_{M_2}\mathbf{B}_{M_2} & \cdots & {}^0\mathbf{R}_{n-1}\,\mathbf{A}^{-1}_{M_n}\mathbf{B}_{M_n} \end{bmatrix} \qquad (8)$$

where J 6×3n is the 6 × 3n kinematic Jacobian matrix of the n-module hybrid manipulator. M i stands for the i-th module, A M i and B M i are the forward and inverse Jacobian matrices of M i of the series-parallel manipulator, respectively, and ρ̇ M i is the vector of the actuated prismatic joint rates for the i-th module. Twist system of the 3-RPS-3-SPR S-PM Each leg of the 3-RPS and 3-SPR parallel manipulators is composed of three joints, but the order of the limb twist system is equal to five and hence there exist five twists associated with each leg. Thus, the constraint wrench system of the i-th leg reciprocal to the foregoing twists is spanned by a pure force W i passing through the spherical joint center and parallel to the revolute joint axis. Therefore, the constraint wrench systems of the proximal and distal modules are spanned by three zero-pitch wrenches, namely,

$${}^0\mathcal{W}^{PROX} = \bigcup_{i=1}^{3} {}^0\mathcal{W}_i^{PROX} = \mathrm{span}\left\{ \begin{bmatrix} {}^0\mathbf{u}_1 \\ {}^0\mathbf{r}_{O_2A_1}\times{}^0\mathbf{u}_1 \end{bmatrix}, \begin{bmatrix} {}^0\mathbf{u}_2 \\ {}^0\mathbf{r}_{O_2B_1}\times{}^0\mathbf{u}_2 \end{bmatrix}, \begin{bmatrix} {}^0\mathbf{u}_3 \\ {}^0\mathbf{r}_{O_2C_1}\times{}^0\mathbf{u}_3 \end{bmatrix} \right\}$$

$${}^1\mathcal{W}^{DIST} = \bigcup_{i=1}^{3} {}^1\mathcal{W}_i^{DIST} = \mathrm{span}\left\{ \begin{bmatrix} {}^1\mathbf{v}_1 \\ {}^1\mathbf{r}_{O_2A_1}\times{}^1\mathbf{v}_1 \end{bmatrix}, \begin{bmatrix} {}^1\mathbf{v}_2 \\ {}^1\mathbf{r}_{O_2B_1}\times{}^1\mathbf{v}_2 \end{bmatrix}, \begin{bmatrix} {}^1\mathbf{v}_3 \\ {}^1\mathbf{r}_{O_2C_1}\times{}^1\mathbf{v}_3 \end{bmatrix} \right\} \qquad (9)$$

Due to the serial arrangement of the parallel mechanisms, the constraint wrench system of the S-PM is the intersection of the constraint wrench systems of each module. Alternatively, the twist system of the S-PM is the direct sum (disjoint union) of the twist systems of each module. Therefore, the nullspace of the 3 × 6 matrix containing the basis screws of 0 W PROX and 1 W DIST leads to the screws that form the basis of the twist system of each module, 0 T PROX = span{ 0 ξ 1 , 0 ξ 2 , 0 ξ 3 } and 1 T DIST = span{ 1 ξ 4 , 1 ξ 5 , 1 ξ 6 }, respectively. The augmented rotation matrix derived in Eq. (5) is exploited to ensure that all the screws are expressed in one frame (F 0 in this case). Therefore, the total twist system of the S-PM can be obtained as follows:

$${}^0\mathcal{T}^{S\text{-}PM} = {}^0\mathcal{T}^{PROX} \oplus {}^0\mathcal{T}^{DIST} = \mathrm{span}\{\, {}^0\boldsymbol{\xi}_1 ,\, {}^0\boldsymbol{\xi}_2 ,\, {}^0\boldsymbol{\xi}_3 ,\, {}^0\mathbf{R}_1{}^1\boldsymbol{\xi}_4 ,\, {}^0\mathbf{R}_1{}^1\boldsymbol{\xi}_5 ,\, {}^0\mathbf{R}_1{}^1\boldsymbol{\xi}_6 \,\} \qquad (10)$$

The order of the twist system 0 T S-PM yields the local mobility of the whole manipulator.
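Building on the previous sketch, the local mobility test of Eq. (7) reduces to a rank computation, and the twist systems of Eq. (10) come from a nullspace computation. The sketch below uses random, well-conditioned placeholder matrices standing in for A⁻¹B of each module and for the constraint wrench rows; it only illustrates the numerical steps, not the actual manipulator data.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# placeholders standing in for A_PROX^{-1} B_PROX and A_DIST^{-1} B_DIST (6x3 each)
J_prox = rng.standard_normal((6, 3))
J_dist = rng.standard_normal((6, 3))

# screw transformations from the previous sketch (identity placeholders here)
Ad_21 = np.eye(6)
R_01 = np.eye(6)

# Eq. (7): J_SPM = [ 2Ad1 A_PROX^{-1} B_PROX | 0R1 A_DIST^{-1} B_DIST ]  (6 x 6)
J_spm = np.hstack((Ad_21 @ J_prox, R_01 @ J_dist))
print("local mobility =", np.linalg.matrix_rank(J_spm))

# Eq. (9)-(10): each module twist system is the nullspace of its 3x6 wrench matrix
W_prox = rng.standard_normal((3, 6))      # placeholder constraint wrenches (rows)
T_prox = null_space(W_prox)               # 6x3 basis of the proximal twist system
print("order of proximal twist system =", T_prox.shape[1])
```

For a generic configuration the rank is six, matching the local mobility reported in the paper; in the impossible output singularities it drops to five.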
Some general and singular configurations of the 3-RPS-3-SPR S-PM with h 0 = 2, h 1 = 1 and h 2 = 2 are considered and its mobility is listed, based on the rank of the Jacobian and the order of the twist system, in Table 1. For general configurations like 2 and 3, the mobility is found to be six. The mobility reduces only when some singularities are encountered. For a special configuration when the three platform planes are parallel to each other, as shown in the first row of this table, the rotations of the coupler generate translational motions of the end effector. Yet, the torsional axes of both mechanisms coincide and hence the mechanism cannot perform any rotation about an axis of vertical direction, leading to a mobility equal to five. Moreover, a configuration in which any revolute joint axis in the end effector is parallel to its corresponding axis in the fixed base results in a mobility lower than six for the S-PM. For instance, for the 4th configuration in the table, there exists a constraint force f, parallel to the two parallel revolute joint axes, resulting in a five-dof manipulator locally. Configurations 1 and 4 are the impossible output singularities as identified by Zlatanov et al. [START_REF] Zlatanov | A unifying framework for classification and interpretation of mechanism singularities[END_REF]. It should be noted that if one of the modules is in a parallel singularity, the motion of the moving-platform of the manipulator becomes uncontrollable. A detailed singularity analysis of series-parallel manipulators will be performed in a future work for a better understanding of their behaviour in singular configurations.

Table 1: Mobility of the 3-RPS-3-SPR S-PM in different configurations
1. x i = (1 : 0 : 0 : 0 : 0 : 0 : 0 : 0.75), y i = (1 : 0 : 0 : 0 : 0 : 0 : 0 : 0.8); rank of J S-PM: 5; order of 0 T S-PM: 5
2. x i = (0.35 : -0.9 : 0.25 : 0 : 0.57 : 0.27 : -1.76 : -1.33), y i = (1 : 0 : 0 : 0 : 0 : 0 : 0 : -0.8); rank of J S-PM: 6; order of 0 T S-PM: 6
3. x i = (0.99 : 0 : -0.10 : 0 : 0 : 0.21 : 0 : 1.92), y i = (-0.79 : -0.59 : 0.16 : 0 : -0.16 : -0.13 : -1.25 : -2.04); rank of J S-PM: 6; order of 0 T S-PM: 6
4. x i = (0.99 : 0 : -0.10 : 0 : 0 : 0.21 : 0 : 1.92), y i = (-0.39 : 0 : 0.92 : 0 : 0 : -1.88 : 0 : 0.12); rank of J S-PM: 5; order of 0 T S-PM: 5

Full-cycle mobility of the 3-RPS-3-SPR S-PM The full-cycle mobility can be obtained by calculating the Hilbert dimension of the set of constraint equations of the mechanism [START_REF] Husty | A Proposal for a New Definition of the Degree of Freedom of a Mechanism[END_REF]. Two Study transformation matrices are considered: 0 X 1 from F 0 to F 1 and 1 Y 2 from F 1 to F 2 , composed of Study parameters x i and y i , i = 0, 1, ..., 7, respectively. Thus, the coordinates of points A j , B j and C j , j = 0, 1, 2 and vectors u k and v k , k = 1, 2, can be represented in F 0 to yield sixteen constraint equations (six for the 3-RPS PM, six for the 3-SPR PM, and the Study quadric and normalization equations for each transformation). It was established that the 3-RPS and the 3-SPR parallel mechanisms have two operation modes each, characterized by x 0 = 0, x 3 = 0 and y 0 = 0, y 3 = 0, respectively [START_REF] Schadlbauer | The 3-RPS parallel manipulator from an algebraic viewpoint[END_REF][START_REF] Nayak | Comparison of 3-RPS and 3-SPR parallel manipulators based on their maximum inscribed singularity-free circle[END_REF].
For the S-PM, four ideals of the constraint equations are considered: K 1 , when x 0 = y 0 = 0, K 2 , when x 3 = y 0 = 0, K 3 , when x 0 = y 3 = 0 and K 4 , when x 3 = y 3 = 0. The Hilbert dimension of these ideals over the ring C[h 0 , h 1 , h 2 ] is found to be six 1 and hence the global mobility of the 3-RPS-3-SPR S-PM:

dim K i = 6, i = 1, 2, 3, 4. (11)

Conclusions and future work In this paper, the full-cycle mobility of the 3-RPS-3-SPR S-PM was elucidated to be six. The kinematic Jacobian matrix of the series-parallel manipulator was calculated with the help of screw theory and the result was extended to n modules. Moreover, the methodology for the determination of the twist system of series-parallel manipulators was explained. The rank of the Jacobian matrix or the order of the twist system gives the local mobility of the S-PM. The global mobility was calculated as the Hilbert dimension of the ideal of the set of constraint equations. In the future, we intend to solve the inverse and direct kinematics using algebraic geometry concepts and to enlist all possible singularities of series-parallel mechanisms. Additionally, it is challenging to consider n modules (n > 2) and to work on the trajectory planning of such manipulators, since the number of output parameters is equal to six and lower than the number of actuated joints, which is equal to 3n. Parallel singularity can be an actuation singularity, constraint singularity or a compound singularity [START_REF] Nurahmi | Dimensionally homogeneous extended jacobian and condition number[END_REF][START_REF] Maraje | Operation modes comparison of a reconfigurable 3-PRS parallel manipulator based on kinematic performance[END_REF][START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF]. The pdf file of the Maple sheet with the calculation of the Hilbert dimension can be found here: https://www.dropbox.com/s/3bqsn45rszvgdax/Mobility3RPS3SPR.pdf?dl=0 Acknowledgements This work was conducted with the support of both the École Centrale de Nantes and the French National Research Agency (ANR project number: ANR-14-CE34-0008-01).
18,669
[ "1307880", "10659", "16879" ]
[ "111023", "473973", "481388", "473973", "441569", "473973" ]
01757941
en
[ "info", "scco" ]
2024/03/05 22:32:10
2016
https://amu.hal.science/hal-01757941/file/GalaZiegler_CL4LC-2016.pdf
Núria Gala email: nuria.gala@univ-amu.fr Johannes Ziegler email: johannes.ziegler@univ-amu.fr Reducing lexical complexity as a tool to increase text accessibility for children with dyslexia Lexical complexity plays a central role in readability, particularly for dyslexic children and poor readers because of their slow and laborious decoding and word recognition skills. Although some features to aid readability may be common to many languages (e.g., the majority of 'easy' words are of high frequency), we believe that lexical complexity is mainly language-specific. In this paper, we define lexical complexity for French and we present a pilot study on the effects of text simplification in dyslexic children. The participants were asked to read out loud original and manually simplified versions of a standardized French text corpus and to answer comprehension questions after reading each text. The analysis of the results shows that the simplifications performed were beneficial in terms of reading speed and they reduced the number of reading errors (mainly lexical ones) without a loss in comprehension. Although the number of participants in this study was rather small (N=10), the results are promising and contribute to the development of applications in computational linguistics. Introduction It is a fact that lexical complexity must have an effect on the readability and understandability of text for people with dyslexia [START_REF] Hyönä | Eye fixation patterns among dyslexic and normal readers : effects of word length and word frequency[END_REF]. Yet, many of the existing tools have only focused on the visual presentation of text, such as the use of specific dyslexia fonts or increased letter spacing [START_REF] Zorzi | Extra-large letter spacing improves reading in dyslexia[END_REF]. Here, we investigate the use of text simplification as a tool for improving text readability and comprehension. It should be noted that comprehension problems in dyslexic children are typically a consequence of their problems in basic decoding and word recognition skills. In other words, children with dyslexia have typically no comprehension problems in spoken language. However, when it comes to reading a text, their decoding is so slow and strenuous that it takes up all their cognitive resources. They rarely get to the end of a text in a given time, and therefore fail to understand what they read. Long, complex and irregular words are particularly difficult for them. For example, it has been shown that reading times of children with dyslexia grow linearly with each additional letter [START_REF] Spinelli | Length effect in word naming in reading : role of reading experience and reading deficit in italian readers[END_REF] [START_REF] Ziegler | Developmental dyslexia in different languages : Language-specific or universal[END_REF]. Because children with dyslexia fail to establish the automatic procedures necessary for fluent reading, they tend to read less and less. Indeed, a dyslexic child reads in one year what a normal reader reads in two days [START_REF] Cunningham | What reading does for the mind[END_REF], a vicious circle for a dyslexic child because becoming a fluent reader requires extensive training and exposure to written text [START_REF] Ziegler | Modeling reading development through phonological decoding and self-teaching : Implications for dyslexia[END_REF]. In this paper, we report an experiment comparing the reading performance of dyslexic children and poor readers on original and simplified corpora.
To the best of our knowledge, this is the first time that such an experiment has been undertaken for French readers. Our aim was to reduce the linguistic complexity of ten standardized texts that had been developed to measure reading speed. The idea was to identify the words and the structures that were likely to hamper readability in children with reading deficits. Our hypothesis was that simplified texts would not only improve reading speed but also text comprehension. A lexical analysis of the reading errors enabled us to identify what kind of lexical complexity was particularly harmful for dyslexic readers and to define what kind of features should be taken into account in order to facilitate readability. Experimental Study Procedure and participants We tested the effects of text simplification by contrasting the reading performance of dyslexic children on original and manually simplified texts and their comprehension by using multiple-choice questions at the end of each text. The children were recorded while reading aloud. They read ten texts, five original and five simplified, in a counter-balanced order. Each text was read in a session with their speech therapists. The texts were presented on an A4 sheet printed in 14 pt Arial font. The experiment took place between December 2014 and March 2015. After each text, each child had to answer the three multiple-choice comprehension questions without looking at the texts (the questions were the same for the original and the simplified versions of the text). Three possible answers were provided in a randomized order: the correct one, a plausible one taking into account the context, and a senseless one. Two trained speech therapists collected the reading times and comprehension scores, annotated the reading errors, and proposed a global analysis of the different errors (cf. 3.1) [START_REF] Brunel | Simplification de textes pour faciliter leur lisibilité et leur compréhension[END_REF]. Ten children aged between 8 and 12 attending regular school took part in the present study (7 male, 3 female). The average age of the participants was 10 years and 4 months. The children had been formally diagnosed with dyslexia through a national reference center for the diagnosis of learning disabilities. Their reading age 1 corresponds to 7 years and 6 months, which meant that they had an average reading delay of 2 years and 8 months. Data set The corpus used to test text simplification is a collection of ten equivalent standardized texts (IReST, International Reading Speed Texts 2 ). The samples were designed for different languages keeping the same difficulty and linguistic characteristics to assess reading performance in different situations (low vision patients, normal subjects under different conditions, developmental dyslexia, etc.). The French collection consists of nine descriptive texts and a short story (more narrative in style). The texts were analyzed using TreeTagger [START_REF] Schmid | Probabilistic part-of-speech tagging using decision trees[END_REF], a morphological analyzer which performs lemmatization and part-of-speech tagging. The distribution in terms of part-of-speech categories is roughly the same in original and simplified texts, although simplified ones have more nouns and fewer verbs and adjectives.
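The corpus statistics just described (tokens, sentences, content-word distribution, lemmas) were obtained with TreeTagger. As a rough stand-in, the sketch below computes the same counts with spaCy's French model; this is not the tool used in the study, but it exposes the same information (lemmas and part-of-speech tags), and the text lists are assumed to hold the ten IReST passages.

```python
import spacy
from collections import Counter

nlp = spacy.load("fr_core_news_sm")   # French model; the study itself used TreeTagger

def corpus_stats(texts):
    """Token, sentence, content-word and lemma counts over a list of raw texts."""
    pos_counts, lemmas, n_tokens, n_sents = Counter(), set(), 0, 0
    for doc in nlp.pipe(texts):
        words = [t for t in doc if not t.is_punct and not t.is_space]
        n_tokens += len(words)
        n_sents += sum(1 for _ in doc.sents)
        pos_counts.update(t.pos_ for t in words if t.pos_ in {"NOUN", "VERB", "ADJ", "ADV"})
        lemmas.update(t.lemma_ for t in words)
    return {"tokens": n_tokens, "sentences": n_sents,
            "content_words": dict(pos_counts), "lemmas": len(lemmas)}

# usage: original_texts and simplified_texts are lists of the ten IReST passages
# print(corpus_stats(original_texts)); print(corpus_stats(simplified_texts))
```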
Table 1 shows the average number of tokens per text and per sentence, the average number of sentences per text, the distribution of main content words and the total number of lemmas. Simplifications Each corpus was manually simplified at three linguistic levels (lexical, syntactic, discursive). It is worth mentioning that, in previous work, text simplifications are commonly considered as lexical and syntactic [START_REF] Carroll | Simplifying Text for Language Impaired readers[END_REF]; little attention is generally paid to discourse simplification, with a few exceptions. In this study, we decided to perform three kinds of linguistic transformations because we made the hypothesis that all of them would have an effect on the reading performance. However, for the time being, only the lexical simplifications have been analyzed in detail (cf. section 3.2). The manual simplifications were made according to a set of criteria. Because of the absence of previous research on this topic, the criteria were defined by three annotators following the recommendations for readers with dyslexia [START_REF] Ecalle | Des difficultés en lecture à la dyslexie : problèmes d'évaluation et de diagnostic[END_REF] for French and [START_REF] Rello | DysWebxia. A Text Accessibility Model for People with Dyslexia[END_REF] for Spanish. Lexical simplifications. At the lexical level, priority was given to high-frequency words, short words and regular words (high grapheme-phoneme consistency). Content words were replaced by a synonym 3 . The lexical difficulty of a word was determined on the basis of two available resources: Manulex [START_REF] Lété | Manulex : A grade-level lexical database from French elementary-school readers[END_REF] 4 , a grade-level lexical database from French elementary school readers, and FLELex (François et al., 2014) 5 , a graded lexicon for French as a foreign language reporting frequencies of words across different levels. If the word in the original text had a simpler synonym (an equivalent at a lower level), the word was replaced. For instance, the word consommer ('to consume') has a frequency rate of 3.55 in Manulex; it was replaced by manger ('to eat'), which has 30.13. In most cases, a word with a higher frequency is also a shorter word: elle l'enveloppe dans ses fils collants pour le garder et le consommer plus tard > ... pour le garder et le manger plus tard ('she wraps it in her sticky net to keep it and eat it later'). Adjectives or adverbs were deleted if there was an agreement among the three annotators, i.e. if it was considered that the information provided by the word was not relevant to the comprehension of the sentence. To give an example, inoffensives ('harmless') was removed in Il y a des mouches inoffensives qui ne piquent pas ('there are harmless flies that do not sting'). In French, lexical replacements often entail morphological or syntactic modifications of the sentence; in these cases the words or the phrases were also modified to keep the grammaticality of the sentence (e.g. determiner and noun agreement) and the same content (meaning). Examples, with number and gender agreement respectively: une partie des plantes meurt and quelques plantes meurent ('some plants die'), or la sécheresse ('drought') and au temps sec ('dry weather'). Syntactic simplifications.
Structural simplifications imply a modification on the order of the constituents or a modification of the sentence structure (grouping, deletion, splitting [START_REF] Brouwers | Syntactic French Simplification for French[END_REF]). In French, the canonical order of a sentence is SVO, we thus changed the sentences where this order was not respected (for stylistic reasons) : ensuite poussent des buissons was transformed into ensuite des buissons poussent ('then the bushes grow'). The other syntactic reformulations undertaken on the IReST corpora are the following : passive voice to active voice, and present participle to present tense (new sentence through ponctuation or coordinate conjunction). Discursive simplifications. As for transformations dealing with the coherence and the cohesion of the text, given that the texts were short, we only took into account the phenomena of anaphora resolution, i.e. expliciting the antecedent of a pronoun (the entity which it refers to). Although a sentence where the pronouns have been replaced by the antecedents may be stylistically poorer, we made the hypothesis that it is easier to understand. For instance : leurs traces de passage ('their traces') was replaced by les traces des souris ('the mice traces'). The table 2 gives an idea of the transformations performed in terms of quantity. As clearly showed, the majority of simplifications were lexical : 3. The following reference resources were used : the database www.synonymes.com and the Trésor de la Langue Franc ¸aise informatisé (TLFi) http://atilf.atilf.fr/tlf.htm. 4 Results Two different analyses were performed : one for quantitatively measuring the reading times, the number of errors and the comprehension scores. The second one took specifically into account the lexicon : the nature of the words incorrectly read. Behavioral data analysis Reading Times. The significance of the results was assessed with a pairwise t-test (Student) 6 From this table it can be seen that the overall reading times of simplified texts were significantly shorter than the reading times of original texts. While this result can be attributed to the fact that simplified texts were slightly shorter than original texts, it should be emphasized that reading speed (words per minute), which is independent of the length of a text, was significantly greater in simplified texts than in original texts. Number of errors. The total number of errors included : -(A) the total number of skipped words, repeated words (words read twice), interchanged words, line breaks, repeated lines (line read twice) -(B) the total number of words incorrectly read for lexical reasons (the word read is a pseudo-word or a different word) -(C) the total number of words incorrectly read for grammatical reasons (the word read has the same grammatical category (part-of-speech) but varies on number, gender, tense, mode, person) First of all, it should be noted that participants made fewer errors in simplified texts than in original ones (5,5% vs 7,7%) 7 . The table 4 shows the distribution of all the errors : It can be noted that lexical and grammatical errors occurred equally often 8 . Comprehension scores 6. ** significant results with p < 0.01 7. This difference was significant in a t-test (t = 2,3, p < 0.05) 8. A more detailed analysis of these errors is proposed on section 3.2. 
The results of the comprehension questionnaire are better for simplified than for original texts (marginal gain 9 ) as shown on table 5 : These results entail that dyslexic children read the simplified version of the corpus without a significant loss of comprehension. If anything, they showed a marginal increase in comprehension scores for simplified texts. Lexical analysis As we were interested in the lexicon of the corpus, an analysis of the content words (i.e. nouns, verbs, adjectives, adverbs) incorrectly read was undertaken in order to better target the reading pitfalls. From our study, we identified 404 occurrences that were incorrectly read, corresponding to 213 different lemmas (to be precise, there were 235 tokens (22 were inflected variants), i.e. arbre and arbres, or restaient, restent, rester). 404 wrong read words corresponds to 26.81 % of the content words of the corpora, which means that more than one word out of four is incorrectly read. It is worth mentioning that we did not count monosyllabic grammatical words as determiners, pronouns or prepositions, although an important number or errors occurred also on those tokens, i.e. le read la ('the'), ces read des ('these'), pour read par ('for'). We make the hypothesis that the readers concentrate their efforts on decoding content words, and not grammatical ones, because they are those that carry the semantic information and are thus important for text comprehension. Besides, as grammatical words are usually very short and frequent in French, they have a higher number of orthographic neighbours and people with dyslexia tend to confuse short similar words. We distinguished the words that were replaced by a pseudo-word (29.46%) and those replaced by other existing words on French vocabulary (70.37%). These figures can be compared with those obtained by Rello and collaborators [START_REF] Rello | A First Approach to the Creation of a Spanish Corpus of Dyslexic Texts[END_REF]. Non-word errors are pronunciations that do not result in an existing word, real-word errors are pronunciations that result in an incorrect but existing word. Non-word errors appear to be higher in English (83%) and in Spanish (79%), but not in French where real-word errors were clearly a majority 10 : Grammatical variants concern variations on gender and number for nouns, and for person, tense and mode for verbs. Lexical remplacements are words read as if they were other words with orthographical similarities (lieu > île, en fait > enfin, commun > connu, etc.). Morphological variants are words of 9. p < 0.1 10. This finding will deserve more attention in future work. the same morphological family (baisse > basse, malchanceux > malchance). As for orthographical neighbours, we specifically distinguish word pairs where the difference is only of one letter (raisins > raisons, bon > don). Concerning word length for all the mentionned features, 36.88% of the words read were replaced by words of strictly the same length (forment > formant, catégorie > *calégorie), 14.11% were replaced by longer ones (utile > utilisé, suffisant > suffisamment), 49.01% were replaced by shorter ones (nourriture > nature, finie > fine, empilées > empli). The average length of the 404 words incorrectly read is 7.65 characters (the shortest has three characters, bon, and the longest 16, particulièrement). 
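Two of the lexical criteria used in this error analysis, orthographic neighbourhood (word pairs differing by exactly one letter) and the relative length of the word actually read, can be expressed compactly. The sketch below is a minimal illustration over a few error pairs taken from the examples above; the function and variable names are ours, not part of the study's tooling.

```python
def orthographic_neighbours(w1, w2):
    """True if the two words have the same length and differ by exactly one
    letter, e.g. raisins/raisons or bon/don."""
    if len(w1) != len(w2):
        return False
    return sum(a != b for a, b in zip(w1, w2)) == 1

def length_relation(target, read):
    """Classify a misreading by the relative length of the word actually read."""
    if len(read) == len(target):
        return "same length"
    return "longer" if len(read) > len(target) else "shorter"

errors = [("raisins", "raisons"), ("nourriture", "nature"), ("utile", "utilisé")]
for target, read in errors:
    print(target, read, orthographic_neighbours(target, read), length_relation(target, read))
```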
The average number of orthographical neighbours is 3.24, with eight tokens having more than ten neighbours : bon, bois, basse, foule, fine, fils, garde, sont ('good, wood, low, crowd, thin, thread, keeps, are'). As far as the grammatical categories are concerned, the majority of the errors were on verbs. They concerned grammatical variants of person, tense (past imparfait > present) and mode (present > present participle). The distribution on part-of-speech tags errors is shown on table 8 In French, it is stated that the more frequent (and easier) structure is CV and V. In our results, 58,69% of the words contain this common structure, while 41,31% present a more complex structure (CVC, CVCC, CYC 11 , etc.) We finally analyzed the consistency of grapheme-to-phoneme correspondences which is particularly irregular in French (silent letters, nasal vowels, etc.) 12 . As mentioned above, the average length of the words incorrectly read is 7.65 and their average in number of phonemes is 4.95. This means that the 11. C is a consonant, V is a vowel, Y is a semi-vowel, i.e. [j] in essayait [e-se-je], [w] in doivent [dwav] 12. This is not the case for other languages, e.g. the Spanish writing system has consistent grapheme-to-phoneme correspondences. average difference between the number of letters and the number of real phonemes is 2.71. Only four tokens were regular (same number of phonemes than letters : existe, mortel, partir, plus ('exists, mortal, leave, plus')). The highest difference is 6 in apparaissent, épargneaient ('appear, saved') with 12 letters and 6 phonemes each, and mangeaient ('ate') with 10 letters and 4 phonemes. All the words incorrectly read were thus irregular as far as grapheme-to-phoneme consistency is concerned. 4 Discussion : determining where complexity is According to the literature, complexity for children with dyslexia should be found on long and less frequent words. More precisely, from the analysis of the reading errors obtained on our first pilot-study, the errors mainly occur on verbs and nouns with complex syllable structure, i.e. irregular grapheme-tophoneme correspondences, words with many orthographic neighbours or many morphological family members which are more frequent. Visual similarity is a source of error, specially for the following pairs 13 : In all the replacements we can observe visual similarities. As shown in table 12 ,the word that is actually read tends to be in most of the cases shorter and more frequent 14 than the original one : To sum up, lexical complexity for dyslexic readers in French is to be found on verbs and nouns longer than seven characters, presenting letters with similar equivalents, with complex syllables and irregular phoneme-to-grapheme consistency. Lexical replacements of words incorrectly read should consider shorter and more frequent words and words with higher grapheme-to-phoneme consistency. Conclusion In this paper we have presented the results of a first pilot-study aiming at testing the effects of text simplification on children with dyslexia. From our results, reading speed is increased without a loss of 13. Other possible similar pairs (not found in our corpora) : t/f, u/v, a/o 14. The frequencies have been extracted from the Manulex database (column including the five levels). comprehension. It is worth mentioning that reading errors were lower on simplified texts (in this experiment, simplified texts contained a majority of lexical simplifications). 
The comprehensive analyses of reading errors allow us to propose a detailed description of lexical complexity for dyslexic children. The causes of lexical complexity were mainly related to word length (words longer than seven characters), irregular spelling-to-sound correspondences and infrequent syllable structures. The insights obtained as a result of this first pilot-study are currently being integrated into a model aiming at providing better accessibility of texts for children with dyslexia. We are currently working on a new study with children in French schools to refine the features that are to be taken into account in our model. These results will be integrated into a tool that will automatically simplify texts by replacing complex lexical items with simpler ones.

Table 1. IReST corpora features before and after manual simplifications.

Table 2. Linguistic transformations on the IReST French corpora.
  Lexical simplifications: 85.91% (direct replacements 57.04%, removals 13.38%, replacements with morphological changes 4.93%, replacements with syntactical changes 10.56%)
  Syntactic simplifications: 9.86% (reformulations 7.75%, constituent order 2.11%)
  Discursive simplifications: 4.23%
  Total: 100%

4. http://www.manulex.com
5. http://cental.uclouvain.be/flelex/

Table 3. Significance of the results obtained.
  Reading times (sec): original 159.94, simplified 134.70, T = -3.528, p = 0.006**
  Reading speed (words per minute): original 64.85, simplified 71.10, T = 4.105, p = 0.003**

Table 4. Distribution of the types of errors in original and simplified texts.

Table 5. Significance of the results obtained.

Table 6. Error typology compared across languages.

The overall error typology that we propose is shown in Table 7.

Table 7. Error typology (type of lexical replacement, count, share, example, English translation).
  Pseudo-word: 119 (29.46%), grenouille > *greniole ('frog')
  Grammatical variant: 135 (33.42%), oubliaient > oublient ('forgot', 'forget')
  Lexical replacement: 84 (20.79%), attendent > attaquent ('wait', 'attack')
  Morphological variant: 43 (10.64%), construction > construire ('build', 'to build')
  Orthographical neighbour: 23 (5.69%), jaunes > jeunes ('yellow', 'young')
  Total: 404 (100%)

Table 8. Part-of-speech distribution of the tokens incorrectly read.
  VERB 196 (48.51%), NOUN 115 (28.47%), ADJECTIVE 48 (11.88%), ADVERB 25 (6.19%), other categories (determiners excluded) 20 (4.95%)

We analyzed the syllable structure of the 404 tokens. The average number of syllables is 2.09; the distribution is shown in Table 9.

Table 9. Syllable distribution of the tokens in the corpora.
  1 syllable: 72 (30,64%), 2 syllables: 96 (40,85%), 3 syllables: 47 (20,00%), 4 syllables: 15 (6,38%), 5 syllables: 5 (2,13%); total 235 (100,00%)

Table 10. Syllable structure.
  CV 230 (47,03%), V 57 (11,66%), CVC 107 (21,88%), CVCC/CCVC/CYVC 47 (9,61%), CYV/CCV/VCC/CVY 34 (6,95%), VC/YV 10 (2,04%), VCCC/CCYV/CCVCC 4 (0,82%); total 489 (100,00%)

Table 11. Graphical alternations.

Table 12. Lexical replacements typology with frequencies of the tokens.

We used standardized reading tests to assess the reading level of each child, i.e. l'Alouette [START_REF] Lefavrais | Test de l'alouette[END_REF] and PM47 [START_REF] Raven | Pm47 : Standard progressive matrices : Sets a[END_REF], and a small battery of tests to assess general cognitive abilities.
http://www.vision-research.eu

Acknowledgements
We deeply thank the speech therapists Aurore and Mathilde Combes for collecting the reading data and providing a first analysis of the data. We also thank Luz Rello for her valuable insights on parts of the results.
24,271
[ "18582", "12344" ]
[ "862", "849" ]
01757946
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01757946/file/CK2017_WuBaiCaro_HAL.pdf
Guanglei Wu Shaoping Bai Stéphane Caro email: stephane.caro@ls2n.fr Transmission Quality Evaluation for a Class of Four-limb Parallel Schönflies-motion Generators with Articulated Platforms Keywords: Schönflies motion, Jacobian, pressure angle, transmission. 1 This paper investigated the motion/force transmission quality for a class of parallel Schönflies-motion generators built with four identical RRΠ RR-type limbs. It turns out that the determinant of the forward Jacobian matrices for this class of parallel robots can be expressed as the scalar product of two vectors, the first vector being the cross product of the four unit vectors along the parallelograms, the second one being related to the rotation of the mobile platform. The pressure angles, derived from the determinants of forward and inverse Jacobians, respectively, are used for the evaluation of the transmission quality of the robots. Four robots are compared based on the proposed method as illustrative examples. Introduction Parallel robots performing Schönflies motions are well adapted to high-speed pickand-place (PnP) operations [START_REF] Pierrot | Optimal design of a 4-dof parallel manipulator: From academia to industry[END_REF][START_REF] Amine | Singularity conditions of 3T1R parallel manipulators with identical limb structures[END_REF], thanks to their lightweight architecture and high stiffness. A typical robot is the Quattro robot [START_REF]Adept Quattro Parallel Robots[END_REF] by Adept Technologies Inc., the fastest industrial robot available. Its latest version can reach an acceleration up to 15 G with a 2 kg payload, allowing to accomplish four standard PnP cycles per second. Its similar version is the H4 robot [START_REF] Pierrot | H4: a new family of 4-dof parallel robots[END_REF] that consists of four identical limbs and an articulated traveling plate [START_REF] Company | Internal singularity analysis of a class of lower mobility parallel manipulators with articulated traveling plate[END_REF]. Recently, the Veloce. robot [START_REF] Veloce | [END_REF] with a different articulated platform that is connected by a screw pair has been developed. Besides, the four-limb robots with single-platform architecture have also been reported [START_REF] Wu | Architecture optimization of a parallel schönflies-motion robot for pick-and-place applications in a predefined workspace[END_REF][START_REF] Xie | Design and development of a high-speed and high-rotation robot with four identical arms and a single platform[END_REF]. Four-limb parallel robots with an articulated mobile platform are displayed in Fig. 1. It is noteworthy that the H4 robot with the modified mobile platform can be mounted vertically instead of the horizontal installation for the reduced mounting space, to provide a rotation around an axis of vertical direction, which is named as "V4" for convenience in the following study. In the design and analysis of a manipulator, its kinematic Jacobian matrix plays an important role, since the dexterity/manipulability of the robot can be evaluated by the condition number of Jacobians as well as the accuracy/torque capability [START_REF] Merlet | Jacobian, manipulability, condition number, and accuracy of parallel robots[END_REF] be-tween the actuators and end-effector. 
On the other hand, a problem usually encountered in this procedure is that the parallel manipulators with mixed input or/and output motions, i.e., compound linear and angular motions, will result in dimensionally inhomogeneous Jacobians, thus, the conventional performance indices associated with the Jacobian matrix, such as norm or condition number, will lack in physical significance [START_REF] Kim | New dimensionally homogeneous Jacobian matrix formulation by three end-effector points for optimal design of parallel manipulators[END_REF]. As far as Schönflies-motion generators are concerned, their endeffector generates a mixed motion of three translations and one rotation (3T1R), for which the terms of the kinematic Jacobian matrix do not have the same units. A common approach to overcome this problem is to introduce a characteristic length [START_REF] Altuzarra | Multiobjective optimum design of a symmetric parallel Schönflies-motion generator[END_REF] to homogenize the Jacobian matrix, whereas, the measurement significantly depends on the choice of the characteristic length that is not unique, resulting in biased evaluation, although a "best" one can be found by optimization technique [START_REF] Angeles | Is there a characteristic length of a rigid-body displacement?[END_REF]. Alternatively, an efficient approach to accommodate this dimensional inhomogeneity is to adopt the concept of the virtual coefficient, namely, the transmission index, which is closely related to the transmission/pressure angle. The pressure angle based transmission index will be adopted in this work. This paper presents a uniform evaluation approach for transmission quality of a family of four-limb 3T1R parallel robots with articulated mobile platforms. The pressure angles, derived from the forward and inverse Jacobians straightforward, are used for the evaluation of the transmission quality of the robots. The defined transmission index is illustrated with four robot counterparts for the performance evaluation and comparison. The global coordinate frame F b is built with the origin located at the geometric center of the base platform. The x-axis is parallel to the segment A 2 A 1 (A 3 A 4 ), and the z-axis is normal to the base-platform plane pointing upwards. The moving coordinate frame F p is attached to the mobile platform and the origin is at the geometric center, where X-axis is parallel to segment C 2 C 1 (C 3 C 4 ). Vectors i, j and k represent the unit vectors of x-, yand z-axis, respectively. The axis of rotation of the ith actuated joint is parallel to unit vector u i = R z (α i )i, where R stands for the rotation matrix, and Manipulator Architecture α 1 = -α 2 = α -π/2, α 3 = -α 4 = β + π/2 . Moreover, unit vectors v i and w i are parallel to the segments A i B i and B i C i , respectively, namely, the unit vectors along the proximal and distal links, respectively. C2 C4 H pair (lead: h) P2 (c) Kinematics and Jacobian Matrix of the Robots The Cartesian coordinates of points A i and B i expressed in the frame F b are respectively derived by a i = R cos η i sin η i 0 T (1) b i = bv i + a i ; v i = R z (α i )R x (θ i )j (2) where η i = (2i -1)π/4, i = 1, ..., 4, and θ i is the input angle. Let the mobile platform pose be denoted by χ χ χ = p T φ T , p = x y z T , the Cartesian coordinates of point C i in frame F b are expressed as c i =    sgn(cos η i )rR z (φ )i + sgn(sin η i )cj + p, Quattro (H4) -sgn(cos η i )rR y (φ )i + sgn(cos η i )cj + p, V4 rR z (η i )i + mod(i, 2)hφ /(2π)k + p, Veloce. 
(3) where sgn(•) stands for the sign function of (•), and mod stands for the modulo operation, h being the lead of the screw pair of the Veloce. robot. The inverse geometric problem has been well documented [START_REF] Pierrot | Optimal design of a 4-dof parallel manipulator: From academia to industry[END_REF]. It can be solved from the following the kinematic constraint equations: (c i -b i ) T (c i -b i ) = l 2 , i = 1, ..., 4 (4) Differentiating Eq. ( 4) with respect to time, one obtains φ rw T i s i + w T i ṗ = θi bw T i (u i × v i ) (5) with w i = c i -b i l ; s i =    sgn(cos η i )R z (φ )j, Quattro (H4) sgn(cos η i )R y (φ )k, V4 mod(i, 2)hφ /(2π)k, Veloce. (6) Equation ( 5) can be cast in a matrix form, namely, A χ χ χ = B θ θ θ (7) with A = e 1 e 2 e 3 e 4 T ; χ χ χ = ẋ ẏ ż φ T (8a) B = diag h 1 h 2 h 3 h 4 ; θ θ θ = θ1 θ2 θ3 θ4 T (8b) where A and B are the forward and inverse Jacobian matrices, respectively, and e i = w T i rw T i s i T ; h i = bw T i (u i × v i ) (9) As along as A is nonsingular, the kinematic Jacobian matrix is obtained as J = A -1 B (10) According to the inverse Jacobian matrix, each limb can have two working modes, which is characterized by the sign "-/+" of h i . In order for the robot not to reach any serial singularity, the mode h i < 0, i = 1, ..., 4, is selected as the working mode for all the robots. Transmission Quality Analysis Our interests are the transmission quality, which is related to the robot Jacobian. The determinant |B| of the inverse Jacobian matrix B is expressed as |B| = 4 ∏ i=1 h i = b 4 4 ∏ i=1 w T i (u i × v i ) (11) sequentially, the pressure angle µ i associated with the motion transmission in the ith limb, i.e., the motion transmitted from the actuated link to the parallelogram, is defined as: µ i = cos -1 w T i (u i × v i ), i = 1, ..., 4 (12) namely, the pressure angle between the velocity of point B i along the vector of u i × v i and the pure force applied to the parallelogram along w i , as shown in Fig. 3(a). where w mn = w m × w n . Taking the Quattro robot as an example, the pressure angle σ amongst limbs, namely, the force transmitted from the end-effector to the passive parallelograms in the other limbs, provided that the actuated joints in these limbs are locked, is derived below: A i B i C i u i v i w i u i ×v i μ i (a) σ = cos -1 (w 14 × w 23 ) T s w 14 × w 23 ( 14 ) wherefrom the geometrical meaning of angle σ can be interpreted as the angle between the minus Y -axis (s is normal to segment P 1 P 2 ) and the intersection line of planes B 1 P 1 B 4 and B 2 P 2 B 3 , where plane B 1 P 1 B 4 (B 2 P 2 B 3 ) is normal to the common perpendicular line between the two skew lines along w 1 and w 4 (w 2 and w 3 ), as depicted in Fig. 3(b). To illustrate the angle σ physically, (w 14 × w 23 ) T s can be rewritten in the following form: (w 14 × w 23 ) T s = w T 14 [w 3 (w 2 • s) -w 2 (w 3 • s)] (15) = w T 23 [w 4 (w 1 • s) -w 1 (w 4 • s)] The angle σ now can be interpreted as the pressure angle between the velocity in the direction of w 1 × w 4 and the forces along w 2 × w 3 imposed by the parallelograms in limbs 2 and 3 to point P, under the assumption that the actuated joints in limbs 1 and 4 are locked simultaneously. The same explanation is applicable for the case when the actuated joints in limbs 2 and 3 are locked. By the same token, the pressure angle for the remaining robot counterparts can be defined. 
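As a rough illustration of Eqs. (12) and (14), the following sketch computes the two pressure angles from given unit vectors. The numerical vectors are arbitrary placeholders, not the geometry of any of the four robots, and the cross products are normalised for robustness; the function names are ours.

```python
import numpy as np

def motion_pressure_angle(u, v, w):
    """Pressure angle mu_i of Eq. (12): angle between u_i x v_i and w_i,
    where u_i, v_i, w_i are unit vectors attached to the i-th limb."""
    n = np.cross(u, v)
    n = n / np.linalg.norm(n)
    return np.arccos(np.clip(np.dot(w, n), -1.0, 1.0))

def force_pressure_angle(w1, w2, w3, w4, s):
    """Pressure angle sigma of Eq. (14) for a Quattro-like platform:
    angle between (w1 x w4) x (w2 x w3), normalised, and the unit vector s."""
    d = np.cross(np.cross(w1, w4), np.cross(w2, w3))
    d = d / np.linalg.norm(d)
    return np.arccos(np.clip(np.dot(d, s), -1.0, 1.0))

# Placeholder vectors for one limb only.
u = np.array([1.0, 0.0, 0.0])                 # actuated joint axis u_i
v = np.array([0.0, 0.5, -0.8660254])          # unit vector along the proximal link
w = np.array([0.2, 0.8, 0.4])
w = w / np.linalg.norm(w)                     # unit vector along the distal rod
print(np.degrees(motion_pressure_angle(u, v, w)))   # roughly 13 degrees here
```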
Consequently, the motion κ and force ζ transmission indices (TI) a prescribed configuration are defined as the minimum value of the cosine of the pressure angles, respectively, κ = min(| cos µ i |), i = 1, ..., 4; ζ = | cos σ | (16) To this end, the local transmission index (LTI) [START_REF] Wang | Performance evaluation of parallel manipulators: Motion/force transmissibility and its index[END_REF] is defined as η = min{κ, ζ } = min{| cos µ i |, | cos σ |} ∈ [0, 1] (17) The larger the value of the index η, the better the transmission quality of the manipulator. This index can also be applicable for singularity measurement, where η = 0 means singular configuration. Transmission Evaluation of PnP Robots In this section, the transmission index over the regular workspace, for the Quattro, H4, Veloce. and V4 robots, will be mapped to analyzed their motion/force transmission qualities. According to the technical parameters of the Quattro robot [START_REF]Adept Quattro Parallel Robots[END_REF], the parameters of the robots' base and mobile platforms are given in Table 1, and other parameters are set to R = 275 mm, b = 375 mm and l = 800 mm, respectively. Table 1 Geometrical parameters of the base and mobile platforms of the four-limb robots. The LTI isocontours of the four robots with different rotation angles of mobile platform are visualized in Fig. 4, from which it is seen that the minimum LTI of the Quattro and Veloce. robots are much higher than those of H4 and V4. Moreover, the volumes of the formers with LTI ≥ 0.7 are larger, to formulate larger operational workspace with high transmission quality. This means that the four-limb robots with a fully symmetrical structure have much better transmission performance than the asymmetric robot counterparts. Another observation is that the transmission performance of the robots decreases with the increasing MP rotation angle. As displayed in Fig. 4(a), the transmission index of the Quattro robot have larger values in the central region, which admits a singularity-free workspace with rotational capability φ = ±45 • . Similarly, Fig. 4(c) shows that the Veloce. robot can also have a high-transmission workspace free of singularity with smaller lead of screw pair, which means that this type of mobile platform allows the robot to have high performance in terms of transmission quality and rotational capability of fullcircle rotation. By contrast, the asymmetric H4 and V4 robots result in relatively small operational workspace and relatively low transmission performance, as illustrated in Figs. 4(b) and 4(d), but similar mechanism footprint ratio with same link dimensions and close platform shapes. Conclusions This paper presents the transmission analysis for a class of four-limb parallel Schönflies-motion robots with articulated mobile platforms, closely in connection with two pressure angles derived from the forward and inverse Jacobian matrices, wherein the determinant of the forward Jacobian matrices was simplified in an elegant manner, i.e., the scalar product between two vectors, through the Laplace expansion. The cosine function of the pressure angles based indices are defined to evaluate the transmission quality. It appears that the robot with the screw-pair-based mobile platform, namely, the Veloce., is the best in terms of transmission quality for any orientation of the mobile-platform. 
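For completeness, a minimal sketch of how the indices of Eqs. (16) and (17) could be assembled from the pressure angles is given below; the angle values are purely illustrative and do not correspond to any configuration reported above.

```python
import numpy as np

def local_transmission_index(mu_angles, sigma):
    """LTI of Eq. (17): the smaller of kappa = min |cos mu_i| (motion TI) and
    zeta = |cos sigma| (force TI), both from Eq. (16). Angles are in radians.
    eta = 0 flags a singular configuration; values near 1 indicate good transmission."""
    kappa = min(abs(np.cos(m)) for m in mu_angles)
    zeta = abs(np.cos(sigma))
    return min(kappa, zeta)

# Illustrative values: four motion pressure angles and one force pressure angle.
mus = np.radians([25.0, 32.0, 28.0, 40.0])
sigma = np.radians(35.0)
print(local_transmission_index(mus, sigma))   # about 0.766, limited by the fourth limb
```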
Figure 2(a) depicts a simplified CAD model of the parallel Schönflies-motion generator, which is composed of four identical RRΠRR-type limbs connecting the base and an articulated mobile platform (MP). The generalized base platform and the different mobile platforms of the four robots are displayed in Figs. 2(b) and 2(c), respectively.

Fig. 1 The four-limb PnP robots with different base and mobile platforms: (a) Quattro [1]; (b) H4 [9]; (c) Veloce. [2]; (d) "V4" [12].
Fig. 2 The parameterization of the four-limb robots: (a) simplified CAD model; (b) a generalized base platform; (c) three different mobile platforms for the four robots.
Fig. 3 The pressure angles of the four-limb robots in the motion/force transmission: (a) µ_i for all robots; (b) σ for Quattro.
Fig. 4 The LTI isocontours of the robots: (a) Quattro, φ = 0 and φ = 45°; (b) H4, φ = 0 and φ = 45°; (c) Veloce. with φ = 2π, screw lead h = 20 and h = 50; (d) V4, φ = 0 and φ = 45°.

Table 1 Geometrical parameters of the base and mobile platforms of the four-limb robots.
  Quattro: base α = -π/4, β = 3π/4; mobile platform r = 80 mm, c = 70 mm
  H4, V4: base α = 0, β = π/2; mobile platform r = 80 mm, c = 70 mm
  Veloce.: base α = -π/4, β = 3π/4; mobile platform r = 100 mm, γ = (2i-1)π/4, lead h

Acknowledgements
The reported work is partly supported by the Fundamental Research Funds for the Central Universities (DUT16RC(3)068) and by Innovation Fund Denmark (137-2014-5).
15,964
[ "10659" ]
[ "224365", "224365", "481388", "473973", "441569" ]
01757949
en
[ "info" ]
2024/03/05 22:32:10
2018
https://inria.hal.science/hal-01757949/file/HMMSuspicious.pdf
Loïc Hélouët email: loic.helouet@inria.fr John Mullins email: john.mullins@polymtl.ca Hervé Marchand email: herve.marchand@inria.fr Concurrent secrets with quantified suspicion A system satisfies opacity if its secret behaviors cannot be detected by any user of the system. Opacity of distributed systems was originally set as a boolean predicate before being quantified as measures in a probabilistic setting. This paper considers a different quantitative approach that measures the efforts that a malicious user has to make to detect a secret. This effort is measured as a distance w.r.t a regular profile specifying a normal behavior. This leads to several notions of quantitative opacity. When attackers are passive that is, when they just observe the system, quantitative opacity is brought back to a language inclusion problem, and is PSPACEcomplete. When attackers are active, that is, interact with the system in order to detect secret behaviors within a finite depth observation, quantitative opacity turns to be a two-player finitestate quantitative game of partial observation. A winning strategy for an attacker is a sequence of interactions with the system leading to a secret detection without exceeding some profile deviation measure threshold. In this active setting, the complexity of opacity is EXPTIME-complete. I. INTRODUCTION Opacity of a system is a property stating that occurrences of runs from a subset S of runs of the system (the secret) can not be detected by malicious users. Opacity [START_REF] Bryans | Opacity generalised to transition systems[END_REF], [START_REF] Badouel | Concurrent secrets[END_REF] can be used to model several security requirements like anonymity and non-interference [START_REF] Goguen | Security policies and security models[END_REF]. In the basic version of non-interference, actions of the system are divided into high (classified) actions and low (public) ones, and a system is non-interferent iff one can not infer from observation of low operations that highlevel actions were performed meaning that occurrence of high actions cannot affect "what an user can see or do". This implicitly means that users have, in addition to their standard behavior, observation capacities. Non-interference is characterized as an equivalence between the system as it is observed by a low-level user and a ideally secure version of it where high-level actions and hence any information flow, are forbidden. This generic definition can be instantiated in many ways, by considering different modeling formalisms (automata, Petri nets, process algebra,...), and equivalences (language equivalence, bisimulation(s),...) representing the discriminating power of an attacker. (see [START_REF] Sabelfeld | Language-based information-flow security[END_REF] for a survey). Opacity generalizes non-interference. The secrets to hide in a system are sets of runs that should remain indistinguishable from other behaviors. A system is considered as opaque if, as observed, one can not deduce that the current execution belongs to the secret. In the standard setting, violation of opacity is a passive process: attackers only rely on their partial observation of runs of the system. Checking whether a system is opaque is a PSPACE-complete problem [START_REF] Cassez | Synthesis of opaque systems with static and dynamic masks[END_REF]. As such, opacity does not take in account information that can be gained by active attackers. 
Indeed, a system may face an attacker having the capability not only to observe the system but also to interact with him in order to eventually disambiguate observation and detect a secret. A second aspect usually ignored is the quantification of opacity : the more executions leaking information are costly for the attacker, the more secure is the system. In this paper we address both aspects. A first result of this paper is to consider active opacity, that is opacity in a setting where attackers of a system perform actions in order to collect information on secrets of the system. Performing actions in our setting means playing standard operations allowed by the system, but also using observation capacities to infer whether a sensible run is being performed. Checking opacity in an active context is a partial information reachability game, and is shown EXPTIME-complete. We then address opacity in a quantitative framework, characterizing the efforts needed for an attacker to gain hidden information with a cost function. Within this setting, a system remains opaque if the cost needed to obtain information exceeds a certain threshold. This cost is measured as a distance of the attacker's behavior with respect to a regular profile, modeling that deviations are caught by anomaly detection mechanisms. We use several types of distances, and show that quantitative and passive opacity remains PSPACE-complete, while quantitative and active opacity remains EXPTIMEcomplete. Opacity with passive attackers has been addressed in a quantitative setting by [START_REF] Bérard | Quantifying opacity[END_REF]. They show several measures for opacity. Given a predicate φ characterizing secret runs, a first measure quantifies opacity as the probability of a set of runs which observation suffice to claim the run satisfies φ. A second measure considers observation classes (sets of runs with the same observation), and defines the restrictive probabilistic opacity measure as an harmonic mean (weighted by the probability of observations) of probability that φ is false in a given observation class. Our setting differs from the setting of [START_REF] Bérard | Quantifying opacity[END_REF] is the sense that we do not measure secrecy as the probability to leak information to a passive attacker, but rather quantify the minimal efforts required by an active attacker to obtain information. The paper is organized as follows: Section II introduces our model for distributed systems, and the definition of opacity. Section III recalls the standard notion of opacity usually found in the literature and its PSPACE-completeness, shows how to model active attackers with strategies, and proves that active opacity can be solved as a partial information game over an exponential size arena, and is EXPTIME-complete. Section IV introduces quantification in opacity questions, by measuring the distance between the expected behavior of an agent and its current behavior, and solves the opacity question with respect to a bound on this distance. Section V enhances this setting by discounting distances, first by defining a suspicion level that depends on evolution of the number of errors within a bounded window, and then, by averaging the number of anomalies along runs. The first window-based approach does not change the complexity classes of passive/active opacity, but deciding opacity for averaged measures is still an open problem. II. MODEL Let Σ be an alphabet, and let Σ ⊆ Σ. A word of Σ * is a sequence of letters w = σ 1 . . . σ n . 
We denote by w -1 the mirror of w, i.e., w -1 = σ n . . . σ 1 .The projection of w on Σ ⊆ Σ is defined by the morphism π Σ : Σ * → Σ * defined as π Σ ( ) = , π Σ (a.w) = a.π Σ (w) if a ∈ Σ and π Σ (a.w) = π Σ (w) otherwise. The inverse projection of w is the set of words which projection is w, and is defined as π -1 Σ (w) = {w ∈ Σ * | π Σ (w ) = w}. For a pair of words w, w defined over alphabets Σ and Σ , the shuffle of w and w is denoted by w||w and is defined as the set of words w||w = {w | π Σ (w ) = w ∧ π Σ (w ) = w }. The shuffle of two languages L 1 , L 2 is the set of words obtained as a shuffle of a words of L 1 with a word of L 2 . Definition 1: A concurrent system S = (A, U ) is composed of: • A finite automaton A = (Σ, Q, -→, q 0 , F ) • A finite set of agents U = u 1 , . . . u n , where each u i is a tuple u i = (A i , P i , S i , Σ i o ), where A i , P i , S i are automata and Σ i o an observation alphabet. Agents behave according to their own logic, depicted by a finite automaton A i = (Σ i , Q i , -→ i , q i 0 , F i ) over an action alphabet Σ i . We consider that agents moves synchronize with the system when performing their actions. This allows modeling situations such as entering critical sections. We consider that in A and in every A i , all states are accepting. This way, every sequence of steps of S that conforms to transition relations is a behavior of S. An agent u i observes a subset of actions, defined as an observation alphabet Σ i o ⊆ Σ 1 . Every agent u i possesses a secret, defined as a regular language L(S i ) recognized by automaton S i = (Σ, Q S i , -→ S i , q S 0,i , F S i ). All states of secret automata are not accepting, i.e. some behaviors of an agent u i are secret, some are not. We equip every agent u i with a profile P i = (Σ, Q P i , δ P i , s P 0,i , F P i ), that specifies its "normal" behavior. The profile of an agent is 1 A particular case is Σ i o = Σ i , meaning that agent u i observes only what it is allowed to do. prefix-closed. Hence, F P i = Q P i , and if w.a belongs to profile L(P i ) then w is also in user u i 's profile. In profiles, we mainly want to consider actions of a particular agent. However, for convenience, we define profiles over alphabet Σ, and build them in such a way that L( P i ) = L(P i ) (Σ \ Σ i ) * . We assume that the secret S i of an user u i can contain words from Σ * , and not only words in Σ * i . This is justified by the fact that an user may want to hide some behavior that are sensible only if they occur after other agents actions (u 1 plays b immediately after a was played by another agent). For consistency, we furthermore assume that Σ i ⊆ Σ i o , i.e., an user observes at least its own actions. Two users may have common actions (i.e., Σ i ∩ Σ j = ∅), which allows synchronizations among agents. We denote by Σ U = ∪ i∈U Σ i the possible actions of all users. Note that Σ U ⊆ Σ as the system may have its own internal actions. Intuitively, in a concurrent system, A describes the actions that are feasible with respect to the current global state of the system (available resources, locks, access rights,...). The overall behavior of the system is a synchronized product of agents behaviors, intersected with L(A). Hence, within a concurrent system, agents perform moves that are allowed by their current state if they are feasible in the system. If two or more agents can perform a transition via the same action a, then all agents that can execute a move conjointly to the next state in their local automaton. 
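A small sketch may help fix this notation. Below, words are plain strings over Σ, an NFA is encoded as a dict mapping (state, letter) pairs to sets of successor states, project implements π_Σ', delta the relation δ, and delta_obs the observable reachability ∆_Σ'. The encoding is an assumption of the sketch, not part of the paper's formalism.

```python
def project(word, sigma_obs):
    """Projection pi_Sigma'(w): keep only the letters of the observation alphabet."""
    return "".join(a for a in word if a in sigma_obs)

def delta(states, trans, word):
    """delta(X, A, w): states reachable from X by reading exactly w.
    `trans` maps (state, letter) to a set of successor states (an NFA)."""
    current = set(states)
    for a in word:
        current = {q2 for q in current for q2 in trans.get((q, a), set())}
    return current

def delta_obs(states, trans, alphabet, sigma_obs, a):
    """Delta_Sigma'(X, A, a): states reachable by some word observed as a, i.e.
    a sequence of unobservable moves followed by one a-move (a fixpoint)."""
    closure, frontier = set(states), set(states)
    unobs = [b for b in alphabet if b not in sigma_obs]
    while frontier:
        nxt = {q2 for q in frontier for b in unobs for q2 in trans.get((q, b), set())}
        frontier = nxt - closure
        closure |= nxt
    return {q2 for q in closure for q2 in trans.get((q, a), set())}

trans = {("q0", "u"): {"q1"}, ("q1", "a"): {"q2"}}
print(delta_obs({"q0"}, trans, ["u", "a"], {"a"}, "a"))   # {'q2'}
```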
More formally, a configuration of a concurrent system is a tuple C = (q, q 1 , . . . , q |U | ), where q ∈ Q is a state of A and each q i ∈ Q i is a local state of user u i . The first component of a configuration C is denoted state(C). We consider that the system starts in an initial configuration C 0 = (q 0 , q 1 0 , . . . , q |U | 0 ). A move from a configuration C = (q, q 1 , . . . , q |U | ) to a configuration C = (q , q 1 , . . . , q |U | ) via action a is allowed • if a ∈ Σ U and (q, a, q ) ∈-→, or • if a ∈ Σ U , (q, a, q ) ∈-→, there exists at least one agent u i such that (q i , a, q i ) ∈-→ i , and for every q j such that some transition labeled by a is firable from q j , (q j , a, q j ) ∈-→ j . The local state of agents that cannot execute a remains unchanged, i.e., if agent u k is such that a ∈ Σ k and (q j , a, q j ) ∈-→ j , then q k = q k . A run of S = (A, U ) is a sequence of moves ρ = C 0 a1 -→ C 1 . . . C k . Given a run ρ = C 0 a1 -→ C 1 . . . a k -→ C k , we denote by l(ρ) = a 1 • • • a k its corresponding word. The set of run of S is denoted by Runs(S), while the language L(S) = l(Runs(S)) is the set of words labeling runs of S. We denote by Conf (S) the configurations reached by S starting from C 0 . The size |S| of S is the size of its set of configurations. Given an automaton A, P i , or S i , we denote by δ(q, A, a) (resp δ(q, P i , a), δ(q, S i , a)) the states that are successors of q by a transition labeled by a, i.e. δ(q, A, a) = {q | q a -→ q }. This relation extends to sets of states the obvious way, and to words, i.e. δ(q, A, w.a) = δ(δ(q, A, w), A, a) with δ(q, A, ) = {q}. Last, for a given sub-alphabet Σ ⊆ Σ and a letter a ∈ Σ , we define by ∆ Σ (q, A, a) the set of states that are reachable from q in A by sequences of moves which observation is a. More formally, ∆ Σ (q, A, a) = {q | ∃w ∈ (Σ \ Σ ) * , q ∈ δ(q, A, w.a)}. III. OPACITY FOR CONCURRENT SYSTEMS The standard Boolean notion of opacity introduced by [START_REF] Bryans | Opacity generalised to transition systems[END_REF], [START_REF] Badouel | Concurrent secrets[END_REF] says that the secret of u i in a concurrent system S is opaque to u j if, every secret run of u i is equivalent with respect to u j 's observation to a non-secret run. In other words, u j cannot say with certainty that the currently executed run belongs to L(S i ). Implicitly, opacity assumes that the specification of the system is known by all participants. In the setting of concurrent system with several agents and secrets, concurrent opacity can then be defined as follows: Definition 2 (Concurrent Opacity): A concurrent system S is opaque w.r.t. U (noted U -Opaque) if ∀i = j, ∀w ∈ L(S i ) ∩ L(S), π -1 Σ j o (π Σ j o (w)) ∩ L(S) L(S i ) Clearly, U -opacity is violated if one can find a pair of users u i , u j and a run labeled by a word w ∈ L(S i )∩L(S) such that π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊆ L(S i ), i.e. after playing w, there in no ambiguity for u j on the fact that w is a run contained in u i s secret. Unsurprisingly, checking opacity can be brought back to a language inclusion question, and is hence PSPACE-complete. This property was already shown in [START_REF] Cassez | Synthesis of opaque systems with static and dynamic masks[END_REF] with a slightly different model (with a single agent j which behavior is Σ * j and a secret defined as a sub-language of the system A). Theorem 3 ( [START_REF] Cassez | Synthesis of opaque systems with static and dynamic masks[END_REF]): Deciding whether S is U -opaque is PSPACE-complete. 
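The move rule above can be sketched as follows. For brevity the sketch assumes deterministic transition tables (dicts mapping (state, action) to a single successor state); the names successors, system and agents are ours and serve only to illustrate the synchronization rule, not to reproduce the paper's semantics in full generality.

```python
def successors(config, system, agents, a):
    """One move of the concurrent system on action a.

    `config` = (q, q_1, ..., q_n); `system` = (trans, alphabet) of A;
    `agents`  = list of (trans_i, alphabet_i) for u_1 ... u_n.
    Returns the successor configuration, or None if a is not enabled.
    """
    q, locals_ = config[0], list(config[1:])
    sys_trans, _sys_alphabet = system
    if (q, a) not in sys_trans:
        return None
    involved = [i for i, (_, sig) in enumerate(agents) if a in sig]
    if involved:
        movers = [i for i in involved if (locals_[i], a) in agents[i][0]]
        if not movers:
            return None            # a is a user action but no agent can fire it
        for i in movers:           # every agent able to execute a moves jointly
            locals_[i] = agents[i][0][(locals_[i], a)]
    return (sys_trans[(q, a)], *locals_)

A = ({("s0", "a"): "s1"}, {"a"})
u1 = ({("p0", "a"): "p1"}, {"a"})
u2 = ({("r0", "b"): "r1"}, {"b"})
print(successors(("s0", "p0", "r0"), A, [u1, u2], "a"))   # ('s1', 'p1', 'r0')
```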
Proof:[sketch] The proof of PSPACE-completeness consists in first showing that one can find a witness run in polynomial space. One can chose a pair of users u i , u j in logarithmic space with respect to the number of users, and then find a run after which u j can estimate without error that u i is in a secret state. Then, an exploration has to maintain u j 's estimation of possible configuration of status of u i 's secret with |Conf (S)| * |S i | bits. It is also useless to consider runs of length greater than 2 |Conf (S)| * |Si| . So finding a witness is in NPSPACE and using Savitch's lemma [START_REF] Walter | Relationships between nondeterministic and deterministic tape complexities[END_REF] and closure of PSPACE by complementation, opacity is in PSPACE. Hardness comes from a reduction from universality question for regular languages. We refer interested readers to appendix for a complete proof. The standard notion of opacity considers accidental leakage of secret information to an honest user u j that is passive, i.e. that does not behave in order to obtain this information. One can also consider an active setting, where a particular agent u j behaves in order to obtain information on a secret S i . In this setting, one can see opacity as a partial information reachability game, where player u j tries to reach a state in which his estimation of S i s states in contained in F S i . Following the definition of non-interference by Goguen & Messeguer [START_REF] Goguen | Security policies and security models[END_REF], we also equip our agents with observation capacities. These capacities can be used to know the current status of resources of the system, but not to get directly information on other agents states. We define a set of atomic propositions Γ, and assign observable propositions to each state of A via a map O : Q → 2 Γ . We next equip users with additional actions that consist in asking for the truth value of a particular proposition γ ∈ Γ. For each γ ∈ Γ, we define action a γ that consists in checking the truth value of proposition γ, and define Σ Γ = {a γ | γ ∈ Γ}. We denote by a γ (q) the truth value of proposition γ in state q, i.e., a γ (q) = tt if γ ∈ O(q) and ff otherwise. Given a set of states X = {q 1 , . . . q k }, the refinement of X with assertion γ = v where v ∈ {tt, ff } is the set X \γ=v = {q i ∈ X | a γ (q i ) = v}. Refinement easily extends to a set of configurations CX ⊆ Conf (S) with CX \γ=v = {C ∈ CX | a γ (state(C)) = v}. We allow observation from any configuration for every user, hence a behavior of a concurrent system with active attackers shuffles behaviors from L(S), observation actions from Σ * Γ and the obtained answers. To simplify notations, we assume that a query and its answer are consecutive transitions. The set of queries of a particular agent u j will be denoted by Σ Γ j . Adding the capacity to observe states of a system forces to consider runs of S containing queries followed by their answers instead of simply runs over Σ * . We will denote by S Γ the system S executed in an active environment Formally, a run of S Γ in an active setting is a sequence ρ = C 0 e1 -→ S Γ C 1 . . . e k -→ S Γ C k where C 0 , . . . , C k are usual configurations, each e i is a letter from Σ ∪ Σ Γ ∪ {tt, ff }, such that • if e k ∈ Σ Γ then C k+1 ∈ δ(C k , S, e k ). • if e k = a γ ∈ Σ Γ , then e k+1 = a γ (q k-1 )2 , and C k-1 = C k+1 . Intuitively, testing the value of a proposition does not change the current state of the system. 
Furthermore, playing action a γ from C k-1 leaves the system in the same configuration, but remembering that an agent just made the query a γ . We will write C k = C k-1 (a γ ) to denote this situation. The semantics of S Γ can be easily obtained from that of S. It can be defined as a new labeled transition system LT S Γ (S) = (Conf (S Γ ), -→ S Γ , C 0 ) over alphabet Σ ∪ Σ Γ ∪ {tt, ff } recognizing runs of S Γ . If LT S(S) = (Conf (S), -→) is an LTS defining runs of S, then LT S(S Γ ) can be built by adding a loop of the form C k aγ -→ S Γ C k (a γ ) aγ (q k ) -→ S Γ C k from each configuration C k in Conf (S). We denote by Runs(S Γ ) the set of runs of system S in an active setting with observation actions Σ Γ . As usual, ρ is a secret run of agent u i iff l(ρ) is recognized by automaton S i . The observation of a run ρ by user u j is a word l j (ρ) obtained by projection of l(ρ) on Σ j ∪ Σ Γ j ∪ {tt, ff }. Hence, an observation of user j is a word l j (ρ) = α 1 . . . α k where α m+1 ∈ {tt, ff } if α m ∈ Σ Γ j (α m is a query followed by the corresponding answer). Let w ∈ (Σ j .(Σ Γ j .{tt, ff }) * ) * . We denote by l -1 j (w) the set of runs of S Γ which observation by u j is w. A malicious agent can only rely on his observation of S to take the decisions that will provide him information on other users secret. Possible actions to achieve this goals are captured by the notion of strategy. Definition 4: A strategy for an user u j is a map µ j from Runs(S Γ ) to Σ j ∪ Σ Γ j ∪ { }. We assume that strategies are observation based, that is if l j (ρ) = l j (ρ ), then µ j (ρ) = µ j (ρ ). A run ρ = C 0 e1 -→ C 1 . . . C k conforms to strategy µ j iff, ∀i, µ j (l(C 0 -→ . . . C i )) = implies e i+1 = µ j (l(C 0 -→ . . . C i )) or e i+1 ∈ Σ j ∪ Σ Γ j . Intuitively, a strategy indicates to player u j the next move to choose (either an action or an observation or nothing. Even if a particular action is advised, another player can play before u j does. We will denote by Runs(S, µ j ) the runs of S that conform to µ j . Let µ j be a strategy of u j and ρ ∈ Runs(S Γ ) be a run ending in a configuration C = (q, q 1 , . . . q |U | ), we now define the set of all possible configurations in which S can be after observation l j (ρ) under strategy µ j . It is inductively defined as follows: • ∆ µj (X, S Γ , ) = X for every set of configurations X • ∆µ j (X, S Γ , w.e) =                ∆ Σ j o (∆µ j (X, S Γ , w), S Γ , e) if e ∈ Σj ∆µ j (X, S Γ , w) if e = aγ ∈ Σ Γ j , ∆µ j (X, S Γ , w) \γ(q) if e ∈ {tt, ff } and w = w .aγ for some γ ∈ Γ Now, ∆ µj ({C 0 }, S Γ , w) is the estimation of the possible set of reachable configurations that u j can build after observing w. We can also define a set of plausible runs leading to observation w ∈ (Σ j o ) * by u j . A run is plausible after w if its observation by u j is w, and at every step of the run ending in some configuration C k a test performed by u j refine u j s estimation to a set of configuration that contain C k . More formally, the set of plausible runs after w under strategy µ j is P l j (w) = {ρ ∈ Runs(S, µ j ) | l j (ρ) = w ∧ ρ is a run from C 0 to a configuration C ∈ ∆ µj ({C 0 }, S Γ , w)}. We now redefine the notion of opacity in an active context. A strategy µ j of u j to learn S i is not efficient if despite the use of µ j , there is still a way to hide S i for an arbitrary long time. In what follows, we assume that there is only one attacker of the system. 
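The estimation ∆_μj can be updated on-line, one observed event at a time. The sketch below distinguishes the three cases of the inductive definition (observable action, query, answer); the event encoding and the helper names obs_step and gamma_value are assumptions made for this illustration only.

```python
def update_belief(belief, event, obs_step, gamma_value):
    """One step of the attacker's estimation Delta_mu_j.

    `belief`              : set of configurations the attacker considers possible;
    `event`               : ("obs", a), ("query", gamma) or ("answer", (gamma, v));
    `obs_step(X, a)`      : configurations reachable from X by runs observed as a
                            (e.g. the delta_obs sketch given earlier);
    `gamma_value(C, g)`   : truth value of proposition g in configuration C.
    """
    kind, payload = event
    if kind == "obs":        # an observable action of the system
        return obs_step(belief, payload)
    if kind == "query":      # asking for a proposition leaves the estimation unchanged
        return belief
    if kind == "answer":     # refine the belief with the answer gamma = v
        gamma, v = payload
        return {C for C in belief if gamma_value(C, gamma) == v}
    raise ValueError(kind)
```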
Definition 5 (Opacity with active observation strategy): A secret S i is opaque for any observation strategy to user u j in a system S iff µ j and a bound K ∈ N, such that ∀ρ ∈ Runs(S, µ j ), ρ has a prefix ρ 1 of size ≤ K, l(P l(ρ 1 )) ⊆ L(S i ). A system S is opaque for any observation strategy iff ∀i = j, secret S i is opaque for any observation strategy of u j . Let us comment differences between passive (def. 2) and active opacity (def. 5). A system that is not U-opaque may leak information while a system that not opaque with active observation strategy cannot avoid leaking information if u j implements an adequate strategy. U-opaque systems are not necessarily opaque with strategies, as active tests give additional information that can disambiguate state estimation. However, if a system is U-opaque, then strategies that do not use disambiguation capacities do not leak secrets. Note also that a non-U-opaque system may leak information in more runs under an adequate strategy. Conversely, a non-opaque system can be opaque in an active setting, as the system can delay leakage of information for an arbitrary long time. Based on the definition of active opacity, we can state the following result: Theorem 6: Given a system S = (A, U ) with n agents and a set secrets S 1 , . . . S n , observation alphabets Σ 1 o , . . . Σ n o and observation capacities Σ Γ 1 , . . . , Σ Γ n , deciding whether S is opaque with active observation strategies is EXPTIMEcomplete. Proof:[sketch] An active attacker u j can claim that the system is executing a run ρ that is secret for u i iff it can claim with certainty that ρ is recognized by S i . This can be achieved by maintaining an estimation of the system's current configuration, together with an estimation of S i 's possible states. We build an arena with nodes of the form n = (b, C, s, ES) contains a player's name b (0 or 1): intuitively, 0 nodes are nodes where all agents but u j can play, and 1 nodes are nodes where only agent u j plays. Nodes also contain the current configuration C of S, the current state s of S i , an estimation ES of possible configurations of the system with secret's current state by u j , ES j = {(C 1 , s 1 ), ...(C k , s k )}. The attacker starts with an initial estimation ES 0 = {(C 0 , q S 0,i )}. Then, at each occurrence of an observable move, the state estimation is updated as follows : given a letter a ∈ Σ j o , for every pair (C k , s k ), we compute the set of pairs (C k , s k ) such that there exists a runs from C k to C k , that is labeled by a word w that is accepted from s k and leads to s k in S i and such that l j (w) = a. The new estimation is the union of all pairs computed this way. Moves in this arena represent actions of player u j (from nodes where b = 1 and actions from the rest of the system (see appendix for details). Obviously, this arena is of exponential size w.r.t. the size of configurations of S. A node n = (b, C, s, ES) is not secret if s ∈ F S i , and secret otherwise. A node is ambiguous if there exists (C p , s p ) and (C m , s m ) in ES such that s p ∈ F S i is secret and s m ∈ F S i . If the restriction of ES to it second components is contained in F S i , n leaks secret S i . The set of winning nodes in the arena is the set of nodes that leak S i . Player u j can take decisions only from its state estimation, and wins the game if it can reach a node in the winning set. This game is hence a partial information reachability game. 
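Deciding whether a node of this arena is winning, ambiguous or neither only requires looking at the second components of its estimation ES, as in the following sketch (the names are ours, not notation from the proof).

```python
def node_status(estimation, secret_final):
    """Classify an arena node by its estimation ES = {(C_1, s_1), ..., (C_k, s_k)}.

    Returns 'leak' if every estimated secret state lies in F_Si (the attacker is
    certain the secret holds), 'ambiguous' if secret and non-secret estimated
    states coexist, and 'not secret' otherwise.
    """
    secret_states = {s for _, s in estimation}
    if secret_states <= secret_final:
        return "leak"
    if secret_states & secret_final:
        return "ambiguous"
    return "not secret"

print(node_status({("C1", "s2"), ("C3", "s4")}, {"s2", "s4"}))   # 'leak'
```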
Usually, solving such games requires computing an exponentially larger arena containing players beliefs, and then apply polynomial procedures for a perfect information reachability game. Here, as nodes already contain beliefs, there is no exponential blowup, and checking active opacity is hence in EXPTIME. For the hardness part, we use a reduction from the problem of language emptiness for alternating automata to an active opacity problem. (see appendix for details) Moving from opacity to active opacity changes the complexity class from P SP ACE-complete to EXP T IM E-complete. This is due to the game-like nature of active opacity. However, using observation capacities does not influence complexity: even if an agent u j has no capacity, the arena built to verify opacity of S i w.r.t. u j is of exponential size, and the reduction from alternating automata used to prove hardness does not assume that observation capacities are used. IV. OPACITY WITH THRESHOLD DISTANCES TO PROFILES So far, we have considered passive opacity, i.e. whether a secret can be leaked during normal use of a system, and active opacity, i.e. whether an attacker can force secret leakage with an appropriate strategy and with the use of capacities. In this setting, the behavior of agents is not constrained by any security mechanism. This means that attackers can perform illegal actions with respect to their profile without being discovered, as long as they are feasible in the system. We extend this setting to systems where agents behaviors are monitored by anomaly detection mechanisms, that can raise alarms when an user's behavior seems abnormal. Very often, abnormal behaviors are defined as difference between observed actions and a model of normality, that can be a discrete event model, a stochastic model,.... These models or profiles can be imposed a priori or learnt from former executions. This allows for the definition of profiled opacity, i.e. whether users that behave according to predetermined profile can learn a secret, and active profiled opacity, i.e. a setting where attackers can perform additional actions to refine their knowledge of the system's sate and force secret leakage in a finite amount of time without leaving their normal profile. One can assume that the behavior of an honest user u j is a distributed system is predictable, and specified by his profile P j . The definitions of opacity (def. 2) and active opacity (def. 5) do not consider these profiles, i.e. agents are allowed to perform legally any action allowed by the system to obtain information. In our opinion, there is a need for a distinction between what is feasible in a system, and what is considered as normal. For instance, changing access rights of one of his file by an agent should always be legal, but changing access rights too many times within a few seconds should be considered as an anomaly. In what follows, we will assume that honest users behave according to their predetermined regular profile, and that deviating from this profile could be an active attempt to break the system's security. Yet, even if an user is honest, he might still have possibilities to obtain information about other user's secret. This situation is captured by the following definition of opacity wrt a profile. Definition 7: A system S = (A, U ) is opaque w.r.t. profiles P 1 , . . . 
P n if ∀i = j, ∀w ∈ L(S i ) ∩ L(S), w ∈ L(P j ) ⇒ π -1 Σ j o (π Σ j o (w)) ∩ L(S) L(S i ) Intuitively, a system is opaque w.r.t profiles of its users if it does not leak information when users stay within their profiles. If this is not the case, i.e. when w ∈ L(P j ), then one can assume that an anomaly detection mechanism that compares users action with their profiles can raise an alarm. Definition 7 can be rewritten as ∀i = j, ∀w ∈ L(S i ) ∩ L(P j ) ∩ L(S), π -1 Σ j o (π Σ j o (w)) ∩ L(S) L(S i ) Hence, P SP ACEcompleteness of opacity in Theorem 3 extends to opacity with profiles: it suffices to find witness runs in L(S)∩L(S i )∩L(P j ). Corollary 8: Deciding whether a system S is opaque w.r.t. a set of profiles P 1 , . . . P n is PSPACE complete. If a system is U-opaque, then it is opaque w.r.t its agents profiles. Using profiles does not change the nature nor complexity of opacity question. Indeed, opacity w.r.t. a profile mainly consists in considering regular behaviors in L(P j ) instead of L(A j ). In the rest of the paper, we will however use profiles to measure how much users deviate from their expected behavior and quantify opacity accordingly. One can similarly define a notion of active opacity w.r.t. profiles, by imposing that choices performed by an attacker are actions that does not force him to leave his profile. This can again be encoded as a game. This slight adaptation of definition 5 does not change the complexity class of the opacity question (as it suffices to remember in each node of the arena a state of the profile of the attacker). Hence active opacity with profiles is still a partial information reachability game, and is also EXPTIME-complete. Passive opacity (profiled or not) holds iff certain inclusion properties are satisfied by the modeled system, and active opacity holds if an active attacker has no strategy to win a partial information reachability game. Now, providing an answer to these opacity questions returns a simple boolean information on information leakage. It is interesting to quantify the notions of profiled and active opacity for several reasons. First of all, profiles can be seen as approximations of standard behaviors: deviation w.r.t. a standard profile can be due to errors in the approximation, that should not penalize honest users. Second, leaving a profile should not always be considered as an alarming situation: if profiles are learned behaviors of users, one can expect that from time to time, with very low frequency, the observed behavior of a user differs from what was expected. An alarm should not be raised as soon as an unexpected event occurs. Hence, considering that users shall behave exactly as depicted in their profile is a too strict requirement. A sensible usage of profiles is rather to impose that users stay close to their prescribed profile. The first step to extend profiled and active opacity to a quantitative setting is hence to define what "close" means. Definition 9: Let u, v be two words of Σ * . An edit operation applied to word u consists in inserting a letter a ∈ Σ in u at some position i, deleting a letter a from u at position i, or substituting a letter a for another letter b in u at position i. Let OP s(Σ) denote the set of edit operations on Σ, and ω(.) be a cost function assigning a weight to each operation in OP s(Σ). The edit distance d(u, v) between u and v is the minimal sum of costs of operations needed to transform u in v. 
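To make Definition 9 concrete, here is the classical dynamic-programming computation of the Levenshtein distance with unit costs for every edit operation. This Python sketch is an illustration, not part of the original development.

```python
def levenshtein(u: str, v: str) -> int:
    """Minimal number of insertions, deletions and substitutions turning u into v."""
    m, n = len(u), len(v)
    # dist[i][j] = edit distance between u[:i] and v[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i          # delete the first i letters of u
    for j in range(n + 1):
        dist[0][j] = j          # insert the first j letters of v
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if u[i - 1] == v[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution or match
    return dist[m][n]

# e.g. levenshtein("aab", "ab") == 1; the Hamming distance would simply count
# the mismatching positions of two words of equal length.
```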
Several edit distances exist; the best known ones are:
• the Hamming distance ham(u, v), which assumes that OP s(Σ) contains only substitutions, and counts the number of substitutions needed to obtain u from v (u and v are supposed to be of equal length);
• the Levenshtein distance lev(u, v), defined as the distance obtained when ω(.) assigns a unit cost to every operation (insertion, substitution, deletion).
One can notice that lev(u, v) is equal to lev(v, u), and that max(|u|, |v|) ≥ lev(u, v) ≥ ||u| -|v||. For a particular distance d(.) among words, the distance between a word u ∈ Σ * and a language R ⊆ Σ * is denoted d(u, R) and is defined as d(u, R) = min{d(u, v) | v ∈ R}. We can now quantify opacity. An expected secure setting is that no secret is leaked when users have behaviors that are within, or close enough to, their expected profile. In other words, when the observed behavior of agents u 1 , . . . u k resembles the behavior of their profiles P 1 , . . . , P k , no leakage should occur. Resemblance of u i 's behavior in a run ρ labeled by w can be defined as the property d(w, L(P i )) ≤ K for some chosen notion of distance d(.) and some threshold K fixed by the system designers. In what follows, we will use the Hamming and Levenshtein distances as proximity measures w.r.t. profiles. However, we believe that this notion of opacity can be extended to many other distances. We are now ready to propose a quantified notion of opacity.
Definition 10 (threshold profiled opacity): A system S is opaque wrt profiles P 1 , . . . P n with tolerance K for a distance d iff ∀i ≠ j, ∀w ∈ L(S i ) ∩ L(S), d(w, L(P j )) ≤ K ⇒ π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊈ L(S i ).
Threshold profiled opacity is again a passive opacity. In some sense, it provides a measure of how much anomaly detection mechanisms comparing users behaviors with their profiles are able to detect passive leakage. Consider the following situation: the system S is opaque w.r.t. profiles P 1 , . . . P n with threshold K but not with threshold K + 1. This means that there exists a run of the system that leaks a secret with K + 1 anomalies of some user u j w.r.t. profile P j , but no leaking run with at most K anomalies. If anomaly detection mechanisms are set to forbid execution of runs with more than K anomalies, then the system remains opaque. We can also extend active opacity with thresholds. Let us denote by Strat K j the set of strategies that forbid actions leaving a profile P j if the behavior of the concerned user u j is already at distance K from P j (the distance can refer to any distance, e.g., Hamming or Levenshtein).
Definition 11 (active profiled opacity): A system S is opaque w.r.t. profiles P 1 , . . . P n with tolerance K iff ∀i ≠ j, there exists no strategy µ j ∈ Strat K j such that it is unavoidable for u j to reach a correct state estimation X ⊆ F S i in all runs of Runs(S, µ j ).
Informally, definition 11 says that a system is opaque if no attacker u j of the system has a strategy that leaks a secret S i and costs less than K units to reach this leakage. Again, we can propose a game version for this problem, where attacker u j is not only passive, but also has to play his best actions in order to learn u i 's secret. A player u j can attack u i 's secret iff it has a strategy µ j to force a word w ∈ L(S i ) that conforms to µ j , such that d(w, L(P j )) ≤ K and π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊆ L(S i ). This can be seen as a partial information game between u j and the rest of the system, where the exact state of each agent is partially known to others.
The system wins if it can stay forever in states where u j 's estimate does not allow it to know that the secret automaton S i is in one of its accepting states. The arena is built in such a way that u j stops playing differently from its profile as soon as it reaches penalty K. This is again a partial information reachability game, which is decidable on finite arenas [START_REF] Chatterjee | The complexity of partial-observation parity games[END_REF]. Fortunately, we can show (in lemma 12 below) that the information to add to nodes with respect to the games designed for active opacity (in theorem 6) is finite.
Lemma 12: For a given automaton G, one can compute an automaton G K that recognizes words at distance at most K of L(G), where the distance is either the Hamming or the Levenshtein distance.
Proof: Let us first consider the Hamming distance. For an automaton G R = (Q R , -→ R , q 0 R , F R ), we can design an automaton G K ham = (Q K , -→ K , q K 0 , F K ) that recognizes words at a distance at most K from the reference language L(G). We have Q K = Q R × {0..K}, F K = F R × {0..K}, and q K 0 = (q 0 , 0). Last, we give the transition function: we have ((q, i), a, (q', i)) ∈ -→ K iff (q, a, q') ∈ -→ R , and ((q, i), a, (q', i + 1)) ∈ -→ K if (q, a, q') ∉ -→ R , i + 1 ≤ K, and there exists b ≠ a such that (q, b, q') ∈ -→ R . This way, G K ham recognizes sequences of letters that end on a state (q f , i) such that q f is an accepting state of G R , and i ≤ K. One can easily show that for any accepting path in G K ham ending on state (q f , i) recognizing word w, there exists a path in G R of identical length recognizing a word w' that is at Hamming distance at most K of w. Similarly, let us consider any accepting path ρ = q 0 R a1 -→ R q 1 . . . an -→ R q f of G R . Then, every path of the form ρ k = (q 0 R , 0) . . . ai1 -→ K (q i1 , 1) . . . (q ik-1 , k -1) a ik -→ K (q ik , k) . . . an -→ K (q f , i) such that i ≤ K and, for every step (q ij-1 , j -1) aij -→ K (q ij , j), a ij is not allowed in state q ij-1 , is a path that recognizes a word at distance i of a word in R and is also a word of G K ham . One can show by induction on the length of paths that the set of all paths recognizing words at distance at most k can be obtained by random insertion of at most k such letter changes in each path of G R . The size of G K ham is exactly |G R | × K. Let us now consider the Levenshtein distance. Similarly to the Hamming distance, we can compute an automaton G K Lev that recognizes words at distance at most K from L(G). Namely, G K Lev = (Q lev , -→ lev , q 0,Lev , F lev ) where Q lev = Q × {0..K}, q 0,lev = (q 0 , 0), F lev = F × {0..K}. Last, the transition relation is defined as follows: ((q, i), a, (q', i)) ∈ -→ lev if (q, a, q') ∈ -→; ((q, i), a, (q, i + 1)) ∈ -→ lev if there is no q' with (q, a, q') ∈ -→ (this transition simulates insertion of letter a in a word); ((q, i), a, (q', i + 1)) ∈ -→ lev if ∃(q, b, q') ∈ -→ with b ≠ a (this transition simulates substitution of a character); ((q, i), ε, (q', i + 1)) ∈ -→ lev if ∃(q, a, q') ∈ -→ (this last move simulates deletion of a character from a word in L(G)). One can notice that this automaton contains ε-transitions, but after ε-closure, one obtains an automaton without ε-transitions that recognizes all words at distance at most K from L(G).
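Going back to the Hamming part of the construction, it can be implemented directly. The Python sketch below assumes a simple (hypothetical) representation of G R as a set of states, a set of transition triples (q, a, q'), an initial state and a set of final states; it builds the states and transitions of G K ham exactly as in the proof: a step either matches a transition of G R without changing the mismatch counter, or reads a letter for which G R has no transition between q and q' but some other letter does, incrementing the counter up to K.

```python
def hamming_k_automaton(states, trans, init, finals, alphabet, K):
    """Build G^K_ham from G_R = (states, trans, init, finals); trans is a set of (q, a, q')."""
    new_trans = set()
    # matching letters: same mismatch count
    for (q, a, q2) in trans:
        for i in range(K + 1):
            new_trans.add(((q, i), a, (q2, i)))
    # substitutions: G_R expected some b != a between q and q2
    for q in states:
        for q2 in states:
            expected = {a for (p, a, p2) in trans if p == q and p2 == q2}
            if not expected:
                continue
            for a in alphabet:
                if a in expected:
                    continue                      # a matching edge already exists
                for i in range(K):                # i + 1 <= K
                    new_trans.add(((q, i), a, (q2, i + 1)))
    new_states = {(q, i) for q in states for i in range(K + 1)}
    new_finals = {(q, i) for q in finals for i in range(K + 1)}
    return new_states, new_trans, (init, 0), new_finals
```

The Levenshtein variant would add the insertion and ε-labeled deletion transitions in the same style, followed by an ε-closure.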
The proof of correctness of the construction follows the same lines as for the Hamming distance, with the particularity that one can randomly insert transitions in paths, by playing letters that are not accepted from a state, leaving the system in the same state, and simply increasing the number of differences. Notice that if a word w is recognized by G K Lev with a path ending in a state (q, i) ∈ F Lev , this does not mean that the Levenshtein distance from L(G) is i, as w can be recognized by another path ending in a state (q', j) ∈ F Lev with j < i.
One can notice that the automata built in the proof of lemma 12 are of size in O(K.|G|), even after ε-closure. Figure 1 represents an automaton G that recognizes the prefix closure of a.a * .b.(a + c) * , and the automaton G 3 Ham .
Theorem 13: Deciding threshold opacity for the Hamming and Levenshtein distance is PSPACE-complete.
Proof: First of all, one can remark that, for a distance d(.), a system S is not opaque if there exists a pair of users u i , u j and a word w in L(S) ∩ L(S i ) such that d(w, L(P j )) ≤ K and π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊆ L(S i ). As already explained in the proof of theorem 3, w belongs to L(S i ) if a state q w reached by S i after reading w belongs to F S i . Still referring to the proof of Theorem 3, one can maintain online, when reading letters of w, the set reach j (w) of possible configurations and states of S i that are reached by a run whose observation is the same as π Σ j o (w). One can also notice that lev(w, L(P j )) ≤ K iff w is recognized by P K j,Lev , the automaton that accepts words at Levenshtein distance at most K from a word in P j . Again, checking online whether w is recognized by P K j,Lev consists in maintaining a set of states that can be reached by P K j,Lev when reading w. We can denote by reach K j,Lev (w) this set of states. When no letter is read yet, reach K j,Lev (ε) = {q K 0 }, and if lev(w, L(P j )) > K, we have reach K j,Lev (w) = ∅, meaning that the sequence of actions played by user u j has left the profile. We can similarly maintain a set of states reach K j,Ham (w) for the Hamming distance. In what follows, we will simply use reach K j (w) to denote a state estimation using the Levenshtein or Hamming distance. Hence, non-opacity can be rephrased as the existence of a run, labeled by a word w, such that reach j (w) ⊆ F S i and reach K j (w) ≠ ∅. The contents of reach j (w) and reach K j (w) after reading a word w can be recalled with a vector of h = |S| + |P K j | bits. Following the same arguments as in Theorem 3, it is also useless to consider runs of size greater than 2 h . One can hence non-deterministically explore the whole set of states reached by reach j (w) and reach K j (w) during any run of S by remembering h bits and a counter whose value is smaller or equal to 2 h , and can hence be encoded with at most h bits. So, finding a witness for non-opacity is in NPSPACE, and by Savitch's theorem and closure by complementation of PSPACE, opacity with a threshold K is in PSPACE. For the hardness part, it suffices to remark that profiled opacity is exactly threshold profiled opacity with K = 0.
Theorem 14: Deciding active profiled opacity for the Hamming and Levenshtein distance is EXPTIME-complete.
Proof:[sketch] Let us first consider the Hamming distance. One can build an arena for a pair of agents u i , u j as for the proof of theorem 6.
This arena is made of nodes of the form (b, C, s, spjk, ES, d) that contain: a bit b indicating if it is u j turn to play and choose the next move, C the current configuration of S , s the current state of S i , the estimation of ES of possible pairs (C, s) of current configuration and current state of the secret by player u j , and spjk a set of states of the automaton P K j,ham that recognizes words that are at Hamming distance at most K from P j . In addition to this information, a node contains the distance d of currently played sequence w.r.t. profile P j . This distance can be easily computed: if all states of P K j,ham memorized in spjk are pairs of state and distance, i.e., spkj = {(q 1 , i 1 ), (q 2 , i 2 ), . . . , (q k , i k )} then d = min{i 1 , . . . , i k }. User u j (the attacker) has partial knowledge of the current state of the system (i.e. a configuration of S and of the state of S i ), perfect knowledge of d. User j wins if it can reach a node in which his estimation of the current state of secret S i is contained in F S i (a non-ambiguous and secret node), without exceeding threshold K. The rest of the system wins if it can prevent player u j to reach a non-ambiguous and secret node of the arena. We distinguish a particular node ⊥ reached as soon as the distance w.r.t. profile P j is greater than K. We consider this node as ambiguous, and every action from it gets back to ⊥. Hence, after reaching ⊥, player u j has no chance to learn S i anymore. The moves from a node to another are the same as in the proof for theorem 6, with additional moves from any node of the form n = (1, q, s, spjk, ES, d) to ⊥ using action a is the cost of using a from n exceeds K. We add an equivalence relation ∼, such that n = (b, q, s, spjk, ES, d) ∼ n = (b , q , s , spjk , ES , d ) iff b = b , spjk = spjk , d = d , and ES = ES . Obviously, u j has a strategy to violate u i 's secret without exceeding distance K w.r.t. its profile P j iff there is a strategy to reach W in = {(b, q, s, spjk, ES, d) | ES ⊆ S F i } for player u j with partial information that does not differentiate states in the equivalence classes of ∼. This is a partial information reachability game over an arena of size in O(2.|Conf (S)|.|S i |.2 |Conf (S)|.|Si|.K.|Pj | ), that is exponential in the size of S and of the secret S i and profile P j . This setting is a partial information reachability game over an arena of exponential size. As in the Boolean setting, the nodes of the arena already contain a representation of the beliefs that are usually computed to solve such games, and hence transforming this partial information reachability game into a perfect information game does not yield an exponential blowup. Hence, solving this reachability game is in EXPTIME. The hardness part is straightforward: the emptiness problem for alternating automaton used for the proof of theorem 6 can be recast in a profiled and quantified setting by setting each profile P i to an automaton that recognizes (Σ Γ i ) * (i.e., users have the right to do anything they want as far as they always remain at distance 0 from their profile). V. DISCOUNTING ANOMALIES Threshold opacity is a first step to improve the standard Boolean setting. However, this form of opacity supposes that anomaly detection mechanisms memorize all suspicious moves of users and never revises their opinion that a move was unusual. This approach can be too restrictive. In what follows, we propose several solutions to discount anomalies. 
We first start by counting the number of substitutions in a bounded suffix with respect to the profile of an attacker. A suspicion score is computed depending on the number of differences within the suffix. This suspicion score increases if the number of errors in the considered suffix is above a maximal threshold, and it is decreased as soon as this number of differences falls below a minimal threshold. As in former sections, this allows for the definition of passive and active notions of opacity, that are respectively PSPACE-complete and EXPTIME-complete. We then consider the mean number of discrepancies w.r.t. the profile as a discounted Hamming distance.
A. A Regular discounted suspicion measure
Let u ∈ Σ K .Σ * and let v ∈ Σ * . We denote by d K (u, v) the distance between the last K letters of word u and any suffix of v, i.e. d K (u, v) = min{d(u [|u|-K,|u|] , v') | v' is a suffix of v}. Given a regular language R, we define d K (u, R) = min{d K (u, v) | v ∈ R}.
Lemma 15: Let R be a regular language. For a fixed K ∈ N, and for every k ∈ [0..K], one can compute an automaton C k that recognizes words whose suffixes of length K are at Hamming distance k from a suffix of a word of R.
We now define a cost model that penalizes users who get too far from their profile, and decreases this penalty when they get back closer to a normal behavior. For a profile P j and fixed values α, β ≤ K, we define a suspicion function Ω j for words in Σ * inductively:
Ω j (w) = 0 if |w| ≤ K
Ω j (a.w.b) = Ω j (a.w) + 1 if d K (w.b, P j ) ≥ β
Ω j (a.w.b) = max(Ω j (a.w) -1, 0) if d K (w.b, P j ) ≤ α
(implicitly, Ω j (a.w.b) = Ω j (a.w) when α < d K (w.b, P j ) < β).
As an example, let us take as profile P j the automaton G of Figure 1; Figure 2 shows, for the word w = a.a.a.c.b.b.a.c.b.a.a, the distance d K (w [i..i+5] , P j ) at each letter of w and the corresponding evolution of the suspicion function. One can easily define a notion of passive opacity with respect to a suspicion threshold T . Again, verifying this property supposes finding a witness run of the system that leaks information without exceeding the suspicion threshold, which can be done in PSPACE (assuming that T is smaller than 2 |Conf | ). As for profiled opacity, we can define Strat T the set of strategies of an user that never exceed suspicion level T . This immediately gives us the following definitions and results.
Definition 16: Let K ∈ N be a suffix size, α, β ≤ K and T ∈ N be a suspicion threshold. S is opaque with suspicion threshold T iff ∀i ≠ j, ∀w ∈ L(S i ) ∩ L(S), Ω j (w) < T implies π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊈ L(S i ).
Theorem 17: Opacity with suspicion threshold for the Hamming distance is PSPACE-complete.
Definition 18: Let K ∈ N be a suffix size, α, β ≤ K and T ∈ N. S is actively opaque with suspicion threshold T iff ∀i ≠ j there exists no strategy µ j ∈ Strat T such that it is unavoidable for u j to reach a correct state estimation X ⊆ F S i in all runs of Runs(S, µ j ).
Theorem 19: Active opacity with suspicion threshold for the Hamming distance is EXPTIME-complete.
Proof: We build an arena that contains nodes of the form n = (b, C, ES, EC 0 , . . . EC k , sus). C is the actual current configuration of S Γ , ES is the set of pairs (C, s) of configuration and secret states in which S Γ could be according to the actions observed by u j and according to the belief refinement actions performed by u j . Sets EC 0 , . . . EC k remember sets of states of the cost automata C 0 , . . . C K . Each EC i memorizes the states in which C i could be after reading the current word. If EC i contains a final state, then the K last letters of the sequence of actions executed so far contain exactly i differences. Note that only one of these sets can contain an accepting state.
Suspicion sus is a suspicion score between 0 and T . When reading a new letter, denoting by p the number of discrepancies of the K last letters wrt profiles, one can update the suspicion score using the definition of C j above, depending on whether p ∈ [0, α], p ∈ [α, β] or p ∈ [β, K]. The winning condition in this game is the set W in = {(b, C, ES, EC 0 , . . . EC k , sus) | ES ⊆ Conf (S) × F S i }. We partition the set of nodes into V 0 = {(b, C, ES, EC 0 , . . . EC k , sus) | b = 0} and V 1 = {(b, C, ES, EC 0 , . . . EC k , sus) | b = 1} . We de-fine moves from (b, C, ES, EC 0 , . . . EC k , sus) to (1b, C, ES, EC 0 , . . . EC k , sus) symbolizing the fact that it is user u j 's turn to perform an action. There is a move from n = (b, C, ES, EC 0 , . . . EC k , sus) to n = (b , C , ES, EC 0 , . . . EC k , sus ) if there is a transition (C, a, C ) in S Γ performed by an user u i = u j , and a is not observable by u j . There is a move from n = (b, C, ES, EC 0 , . . . EC k , sus) to n = (b , C , ES , EC 0 , . . . EC k , sus) if there is a transition (C, a, C ) in S Γ performed by an user u i = u j and a is observable by u j . We have ES = ∆ Σ j o (ES, S Γ , a). Suspicion and discrepancies observation (sets EC i ) remain unchanged as this move does not represent an action played by u j . There is a move from n = (b, C, ES, EC 0 , . . . EC k , sus) to n = (1 -b, C , ES , EC 0 , . . . EC k , sus) if b = 1 and there is a transition (q, a, q ) in S Γ performed by user u j from the current configuration. Set ES is updated as before ES = ∆ Σ j o (ES, S Γ , a) and sets EC i are updated according to transition relation δ suf i of automaton C i , i.e. EC i = δ suf i (E i , a). Similarly, sus is the new suspicion value obtained after reading a. Last, there is a move from n = (b, C, ES, EC 0 , . . . EC k , sus) to n = (b, C, ES , EC 0 , . . . EC k , sus), if there is a sequence of moves (C, a, C(a γ )).(C(a γ ), a γ(q) , C) in S Γ , ES = ES /a γ(q) , and EC i 's and sus are computed as in the former case. As for the proofs of theorems 6 and 14, opacity can be brought back to a reachability game of partial information, and no exponential blowup occurs to solve it. For the hardness, there is a reduction from active profiled opacity. Indeed, active profiled opacity can be expressed as a suspicion threshold opacity, by setting α = β = K = 0, to disallow attackers to leave their profile. B. Discounted Opacity : an open problem A frequent interpretation of discounting is that weights or penalties attached to a decision should decrease progressively over time, or according to the length of runs. This is captured by averaging contribution of individual moves. Definition 20: The discounted Hamming distance between a word u and a language R is the value d(u, R) = ham(u,R) |u| This distance measures the average number of substitutions in a word u with respect to the closest word in R. The next quantitative definition considers a system as opaque if an active attacker can not obtain a secret while maintaining a mean number of differences w.r.t. its expected behavior below a certain threshold. Let λ ∈ Q be a positive rational value. We denote by Strat λ (R) the set of strategies that does not allow an action a after a run ρ labeled by a sequence of actions w if d(w.a, R) > λ. Definition 21 (Discounted active Opacity): A system S is opaque wrt profiles P 1 , . . . 
P n with discounted tolerance λ iff ∀i = j, µ j ∈ Strat λ (P j ), strategy of agent u j such that it is unavoidable for u j to reach a correct state estimation X ⊆ F i S in all runs of Runs(S, µ j ). A system is opaque in a discounted active setting iff one can find a strategy for u j to reach a state estimation that reveals the secret S i while maintaining a discounted distance wrt P j smaller than λ. At first sight, this setting resembles discounted games with partial information, already considered in [START_REF] Zwick | The complexity of mean payoff games[END_REF]. It was shown that finding optimal strategies for such mean payoff games is in N P ∩ co-N P . The general setting for mean payoff games is that average costs are values of nodes in an arena, i.e. the minimal average reward along infinite runs that one can achieve with a strategy starting from that node. As a consequence, values of nodes are mainly values on connected components of an arena, and costs of moves leading from a component to another have no impact. In out setting, the game is not a value minimization over infinite run, but rather a co-reachability game, in which at any moment in a run, one shall not exceed a mean number of unexpected moves. For a fixed pair of users u i , u j , we can design an arena with nodes of the usual form n = (b, C, ES, l, su) in which b indicates whether it is u j 's turn to play, C is the current configuration of the system, ES the estimation of the current configuration and of the current state of secret S i reached, l is the number of moves played so far, and su the number of moves that differ from what was expected in P j . As before, the winning states for u j are the states where all couples in state estimation refer to an accepting state of S i . In this arena, player u j looses if it can never reach a winning node, or if it plays an illegal move from a node n = (b, C, ES, l, su) such that su+1 l+1 > λ. One can immediately notice that defined this way, our arena is not finite anymore. Consider the arena used in theorem 6, i.e. composed of nodes of the form n = (b, C, ES) that only build estimations of the attacker. Obviously, when ignoring mean number of discrepancies, one can decide whether the winning set of nodes is reachable from the initial node under some strategy in polynomial time (wrt the size of the arena). The decision algorithm builds an attractor for the winning set (see for instance [START_REF] Grädel | Automata Logics, and Infinite Games : A Guide to Current Research[END_REF] for details), but can also be used to find short paths under an adequate strategy to reach W in (without considering mean number of discrepancies). If one of these paths keeps the mean number of discrepancies lower or equal to λ at each step, then obviously, this is a witness for non-opacity. However, if no such path exists, there might still be a way to play longer runs that decrease the mean number of discrepancies before moving to a position that requires less steps to reach the winning set. We can show an additional sufficient condition : Let ρ = n 0 .n 1 . . . n w be a path of the arena in theorem 6 (without length nor mean number of discrepancies recall) from n 0 to a winning node n w . Let d i denote the number of discrepancies with respect to profile P j at step i. Let n i be a node of ρ such that di i ≤ λ and di+1 i+1 > λ. We say that u j can enforce a decreasing loop β = n j .n j+1 . . . 
n j at node n j if β is a cycle that u j can enforce with an appropriate strategy, and if the mean number of discrepancies is smaller in ρ β = n 0 . . . n j .β than in n 0 . . . n j , and the mean cost of any prefix of β is smaller that λ. A consequence is that the mean cost M β of cycle β is smaller than λ. We then have a sufficient condition: Proposition 22: Let ρ be a winning path in an arena built to check active opacity for users u i , u j such that di i > λ for some i ≤ |ρ|. If there exists a node n b in ρ such that d k k ≤ λ for every k ≤ b and u j can enforce a decreasing loop at n b , then u j has a strategy to learn S i without exceeding mean number of discrepancies λ. Similarly, if B is large enough, playing any prefix of n b+1 . . . n w to reach the winning set does not increase enough the mean number of discrepancies to exceed λ. A lower bound for B such that λ is never exceeded in n 0 . . . n b .β B .n b+1 . . . n w can be easily computed. Hence, if one can find a path in a simple arena withouts mean discrepancy counts, and a decreasing loop in this path, then u j has a strategy to learn S i without exceeding threshold λ. VI. CONCLUSION We have shown several ways to quantify opacity with passive and active attackers. In all cases, checking passive opacity can be brought back to a language inclusion question, and is hence PSPACE-complete. In active settings, opacity violation is brought back to existence of strategies in reachability games over arenas which nodes represent beliefs of agents, and is EXPTIME-complete. Suspicion can be discounted or not. Non-discounted suspicions simply counts the number of anomalies w.r.t. a profile, and raises an alarm when a maximal number K of anomalies is exceeded. We have shown that when anomalies are substitutions, deletions and insertions of actions, words with less than K anomalies w.r.t. the considered profile (words at Hamming or Levenshtein distance ≤ K) are recognized by automata of linear size. This allows to define active and passive profiled opacity, with the same PSPACE/EXPTIME-complete complexities. A crux in the proofs is that words at distance lower than K of a profile are recognized by automata. A natural extension of this work is to see how regular characterization generalizes to other distances. Discounting the number of anomalies is a key issue to avoid constantly raising false alarms. t is reasonable to consider that the contribution to suspicion raised by each anomaly should decrease over time. The first solution proposed in this paper computes a suspicion score depending on the number of discrepancies found during the last actions of an agent. When differences are only substitutions, one can use finite automata to maintain online the number of differences. This allows to enhance the arenas used in the active profiled setting without changing the complexity class of the problem (checking regular discounted suspicion remains EXPTIME-complete). Again, we would like to see if other distances (eg the Levenstein distance) and suspicion scores can be regular, which would allow for the defiition of new opacity measures. Discounted suspicion weights discrepancies between the expected and actual behavior of an agent according to run length. This suspicion measure can be seen as a quantitative game, where the objective is to reach a state leaking information without exceeding an average distance of λ ∈ Q. In our setting, the mean payoff has to be compared to a threshold at every step. 
This constraint can be recast as a reachability property for timed automata with one stopwatch and linear diagonal constraints on clock values. We do not know yet if this question is decidable but we provide a sufficient condition for discounted opacity violation. In the models we proposed, discounting is performed according to runs length. However, it seems natural to consider discrepancies that have occurred during the last ∆ seconds, rather than This requires in particular considering timed systems and they timed runs. It is not sure that adding timing to our setting preserves decidability, as opacity definitions rely a lot on languages inclusion, which are usually undecidable for timed automata [START_REF] Alur | A theory of timed automata[END_REF]. If time is only used to measure durations elapsed between actions of an attacker, then we might be able to recast the quantitative opacity questions in a decidable timed setting, using decidability results for timed automata with one clock [START_REF] Ouaknine | On the language inclusion problem for timed automata: Closing a decidability gap[END_REF] or event-clock timed automata. APPENDIX PROOF OF THEOREM 3 Proof: Let us first prove that U -opacity is in PSPACE. A system is not opaque if one can find a pair of users u i , u j , and a run w of S such that w ∈ L(S i ) and π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊆ L(S i ). One can non-deterministically choose a pair of users u i , u j in space logarithmic in n, and check that i = j in logarithmic space. To decide whether a run of S belongs to S i , it is sufficient to know the set of states reached by S i after recognizing w. A word w belongs to L(S i ) is the state q w reached by S i after reading w belongs to F S i . Now, observe that an user u j does not have access to w, but can only observe π Σ j o (w), and may hence believe that the run actually played is any run with identical observation, i.e. any run of π -1 Σ j o (π Σ j o (w)) ∩ L(S). Let ρ be a run of S, one can build online the set of states reach j (w) that are reached by a run which observation is the same as π Σ j o (w). We have reach j ( ) = {q ∈ Q S i | ∃w, q S 0,i w -→ q ∧ π Σ j o (w) = } and reach j (w.a) = {q ∈ Q S i | ∃q ∈ reach j (w), ∃w , q w -→ q ∧ π Σ j o (w ) = a}. Obviously, a word w witnesses a secret leakage from S i to u j if reach j (w) ⊆ F S i . To play a run of S, it is hence sufficient to remember a configuration of S and a subset of states of S i . Let q ρ denote the pair (q, X) reached after playing run ρ. Now we can show that witness runs with at most K 1 = |Conf |.2 |Si| letters observable by u j suffice. Let us assume that there exists a witness ρ of size ≥ K 1 . Then, ρ can be partitioned into ρ = ρ 1 .ρ 2 .ρ 3 such that q ρ1 = q ρ1.ρ2 . Hence, ρ 1 .ρ 3 is also a run that witness a leakage of secret S i to u j , but of smaller size. Hence one can find a witness of secret leakage by a nondeterministic exploration of size at most |Conf |.2 |Si| . To find such run, one only needs to remember a configuration of S (which can be done with log(|S|) bits, all states of reach j (ρ) for the current run ρ followed in S, which can be done with |S i | bits of information, and an integer of size at most K 1 , which requires log |S|.|S i | bits. Finding a witness can hence be done in NPSPACE, and by Savitch's lemma it is in PSPACE. As PSPACE is closed by complement, deciding opacity of a system is in PSPACE. Let us now consider the hardness part. 
We will reduce the non-universality of any regular language to an opacity problem. As universality is in PSPACE, non-universality is also in PSPACE. The language of an automaton B defined over an alphabet Σ is not universal iff L(B) = Σ * , or equivalently if Σ * L(B). For any automaton B, one can design a system S B with two users u 1 , u 2 such that S 1 = B, L(S 2 ) = a.Σ * for some letter a, A accepts all actions, i.e. is such that L (A) = Σ * , Σ 2 o = Σ 1 o = ∅. Clearly, for every run of S, u 1 observes , and hence leakage can not occur from u 2 to u 1 (one cannot know whether a letter and in particular a was played). So the considered system is opaque iff ∀w ∈ L(S 1 ) ∩ L(S), π -1 Σ 2 o (π Σ 2 o (w)) L(S 1 ). However, as Σ 2 o = ∅, for every w, π -1 Σ 2 o (π Σ 2 o (w)) = Σ * . That is, the system is opaque iff Σ * L(B). PROOF OF THEOREM 6 Proof: An active attacker u j can claim that the system is executing a run ρ that is secret for u i iff it can claim with certainty that ρ is recognized by S i . This can be achieved by maintaining an estimation of the system's current configuration, together with an estimation of S i 's possible states. We build an arena with nodes N 0 ∪ N 1 . Each node of the form n = (b, C, s, ES) contains : • a player's name b (0 or 1). Intuitively, 0 nodes are nodes where all agents but u j can play, and 1 nodes are nodes where only agent u j plays. G ⊆ N 0 ∪ N 1 × N 0 ∪ N 1 . • (n, n ) ∈ δ G if n and n differ only w.r.t. their player's name • (n, n ) ∈ δ G if n = (0, C, s, ES) , n = (1, C , s , ES ) and there exists σ ∈ (Σ \ Σ j ) ∩ Σ j o such that C σ =⇒ C , s σ =⇒ S i ) -→ S i . • (n, n ) ∈ δ G n = (1, C, s, ES), n = (1, C, s, ES ) if there exists γ ∈ Σ Γ j such that ES is the refinement of ES by a γ (state(C)). We assume that checking the status of a proposition does not affect the secrets of other users. We says that a node n = (b, C, s, ES) is not secret if s ∈ F S i , and say that n is secret otherwise. We say that a node is ambiguous if there exists (C p , s p ) and (C m , s m ) in ES such that s p is secret and s m is not. If the restriction of ES to it second components is contained in F S i , we says that n leaks secret S i . We equip the arena with an equivalence relation ∼⊆ N 0 × N 0 ∪ N 1 × N 1 , such that n = (b, C, s, ES) ∼ n = (b , C , s , ES ) iff b = b = 1 and ES = ES . Intuitively, n ≡ n if and only if they are nodes of agent u j , and u j cannot distinguish n from n using the knowledge it has on executions leading to n and to n . Clearly, secret S i is not opaque to agent u j in S iff there exists a strategy to make a leaking node accessible. This can be encoded as a partial information reachability game G = (N 0 N 1 , δ G , ≡, W in), where W in is the set of all leaking nodes. In these games, the strategy must be the same for every node in the same class of ≡ (i.e. where u j has the same state estimation). Usually, partial information games are solved at he cost of an exponential blowup, but we can show that in our case, complexity is better. First, let us compute the maximal size of the arena. A node is of the form n = (b, C, s, ES), hence the size of the arena |G| is in O(2.|Conf |.| § i |.2 |Conf |.|Si| ) (and it can be built in time O(|Conf |.|G|). Partial information reachability games are known to be EXPTIME-complete [START_REF] Reif | Universal games of incomplete information[END_REF]. 
Note here that only one player is blind, but this does not change the overall complexity, as recalled by [START_REF] Chatterjee | The complexity of partial-observation parity games[END_REF]. However, solving games of partial information consists in computing a "belief" arena G B that explicitly represent players beliefs (a partial information on a state is transformed into a full knowledge of a belief), and then solve the complete information game on arena G B . This usually yields an exponential blowup. In our case, this blowup is not needed, and the belief that would be computed to solve a partial information game simply duplicates the state estimation that already appears in the partial information arena. Hence, deciding opacity with active observation strategies can be done with |U | 2 opacity tests (one for each pair of users) of exponential complexity, in only in EXPTIME. Let us now prove the hardness of opacity with active attackers. We reduce the problem of emptiness of alternating automata to an opacity question. An alternating automaton is a tuple A alt = (Q, Σ, δ, s 0 , F ) where Q contains two distinct subsets of states Q ∀ , Q ∃ . Q ∀ is a set of universal states, Q ∃ is a set of existential states, Σ is an alphabet, δ ⊆ (Q ∀ ∪Q ∃ )×Σ×(Q ∀ ∪Q ∃ ) is a transition relation, s is the initial state and F is a set of accepting states. A run of A alt over a word w ∈ Σ * is an acyclic graph G A alt ,w = (N, -→) where nodes in N are elements of Q × {1 . . . |w|}. Edges in the graph connect nodes from a level i to a level i+1. The root of the graph is (s, 1). Every node of the from (q, i) such that q ∈ Q ∃ has a single successor (q , i+1) such that q ∈ δ(q, w i ) where w i is the i th letter of w. For every node of the from (q, i) such that q ∈ Q ∀ , and for every q such that q ∈ δ(q, w i ), ((q, i), (q , i + 1)) is an edge. A run is complete is all its node with index in 1..|w| -1 have a successor. It is accepting if all path of the graph end in a node in F × {|w|}. Notice that due to non-deterministic choice of a successor for existential states, there can be several runs of A alt for a word w. The emptiness problem asks whether there exists a word w ∈ Σ * that has an accepting run. We will consider, without loss of generality that alternating automata are complete, i.e. all letters are accepted from any state. If there is no transition of the form (q, a, q ) from a sate q, one can nevertheless create a transition to an non-accepting absorbing state while preserving the language recognized by the alternating automaton. Let us now show that the emptiness problem for alternating automata can be recast in an active opacity question. We will design three automata A, A 1 , A 2 . The automata A 1 and A 2 are agents. Agent 1 performs actions from universal sates and agent 2 chooses the next letter to recognize and performs actions from existential states. The automaton A serves as a communication medium between agents, indicates to A 2 the next letter to recognize, and synchronizes agents 1 and 2 when switching the current state of the alternating automaton from an existential state to an universal state or conversely. We define A = (Q s , -→ s , Σ s ) with Σ s = {(end, 2 A); (end, A 1)} ∪ Σ × {2 A, A 1} × (Q ∃ ∪ U ) × {1 A, A 2, 2 A, A 2}. To help readers, the general shape of automaton A is given in Figure 3. States of A are of the form U , (U, σ), W , dU , dq i , wq i for every state in Q, and Eq i for every existential state q i ∈ Q ∃ . 
The initial state of A is state U if s 0 is an universal state, or s 0 if s 0 is existential. State U has |Σ| outgoing transitions of the form (U, < σ, 2 A >, (U, σ), indicating that the next letter to recognize is σ. It also has a transition of the form (U, < end, 2 A >, end 1 ) indicating that A 2 has decided to test whether A 1 is in a secret state (i.e. simulates an accepting state of A alt ). There is a single transition (end 1 , < end, A 2 >, end 2 ) from state end 1 , and a single transition (end 2 , < Ackend, A 1 >, end 3 ) indicating to A 2 that A 1 has acknowledged end of word recognition. There is a transition ((U, σ), < σ, A → 1 >, (W, σ)) for any state (U, σ), indicating to A 1 that the next letter to recognize from its current universal state is σ. In state W , A is waiting for an universal move from A 1 . Then from W , A can receive the information that A 1 has moved to an universal state, which is symbolized by a pair of transitions (W, < σ, U, 1 A >, dU )) and (dU, < again, A 2 >, U ). There is a transition (W, < σ, q i , 1 → A >, dq i ) for every existential state q i ∈ Q ∃ , followed by a transition (dq i , < σ, q i , A 2 >, Eq i ), indicating to A 2 that the system has moved to recognition of a letter from an existential state q i . There is a transition (Eq i , < σ, 2 A >, (Eq i , σ)) from every state Eq i with q i ∈ Q ∃ and every σ ∈ Σ to indicate that the next letter to recognize is σ. Then, there is a transition ((Eq i , σ), < σ, q j , 2 A >, (W q j , σ)) for every existential move (q i , σ, q j ) ∈ δ. From every state (W q j , σ), there is a transition of the form ((W q j , σ), < σ, q j , A → 1 >, (dq j , σ)) to inform A 1 of A 2 's move. Then, from (Dq j , σ) if q j ∈ Q ∃ , there is a transition of the form ((Dq j , σ), < again, A 1 >, Eq j ) and if q j ∈ Q ∀ , a transition of the form ((dq j , σ), < again, A 1 >, U ), indicating to A 1 that the simulation of the current transition recognizing a letter is complete, and from which state the rest of the simulation will resume. Let us now detail the construction of A 2 . A description of all its transition is given in Figure 4. This automaton has one universal state U , a state W , states of the form (U, σ), a pair of states Eq i and W q i and a state (Eq i , σ) for every σ ∈ Σ and every q i ∈ Q ∃ . Last, A 1 has two states End 1 and End 2 . There is a transition (U, < σ, 2 A >, (U, σ)) from U for every σ ∈ Σ, symbolizing the choice of letter σ as the next letter to recognize when the system simulates an universal state. Note that A 2 needs not know which universal state is currently simulated. Then, there is also a transition ((U, σ), again, U ) returning to U symbolizing the end of a transition of the alternating w q j again σ, A 1, q j d qi d q i σ, 1 A, q i σ, 1 A, q i E qi E qi , σ σ, A 2, q i σ, 2 A W qj , σ σ, q j , 2 A (q j ∈ Q ∃ ) σ, q j , 2 A (q j ∈ Q ∀ ) d qj σ, q j , A 1 E qj End, 2 A End, 2 A again Fig. 3: Automaton A in the proof of theorem 6. automata that returns to an universal state (hence owned by A 2 ). From every state (U, σ) there is a transition ((U, σ), again, U ) and a transition ((U, σ), < σ, q i , A → 2 >, Eq i ) for every existential state q i that has an universal predecessor q with (q, σ, q i ) ∈ δ. From a state Eq i and for every σ ∈ Σ, there is a transition (Eq i , < σ, 2 A >, (Eq i , σ)) symbolizing the choice to recognize σ as the next letter. 
Then, from every state (Eq i , σ) for every transition of the form (q i , σ, q j ) ∈ δ where q j is existential, there is a transition ((Eq i , σ), < σ, q j , 2 → A >, W q j ). For every transition of the form (q i , σ, q j ) ∈ δ where q j is universal, there is a transition ((Eq i , σ), < σ, q j , 2 → A >, W ). Last, transitions ((W q j , σ), again, Eq j ) and (W, again, U ) complete simulation of recognition of the current letter. Last, A 2 has a transition (U, < end, 2 A >, End 1 ), a transition (Eq i , < end, 2 A >, End 1 ) for every existential state q i ∈ Q ∃ and a transition (end 1 , ackend, End 2 ), symbolizing the decision to end recognition of a word. Let us detail the construction of A 1 . The general shape of this automaton is described in Figure 5. This automaton has two states of the form U q i , (U q i , σ) per universal state and for each σ ∈ Σ. Similarly A 1 has a state Eq i , (Eq i , σ) per existential state and for each σ ∈ Σ. From state U q i there is a transition (U q i , < σ, A → 1 >, (U q i , σ)) to acknowledge the decision to recognize σ. From state (U q i , σ) there exists two types of transitions. For every universal state q j such that (q i , σ, q j ) ∈ δ, Eq i , σ σ, 2 A σ, 2 A σ, q j , 2 A (q j ∈ Q ∀ ) Eq j End, 2 A End, 2 A W q j again σ, q j , 2 A (q j ∈ Q ∃ ) Fig. 4: Automaton A 2 in the proof of theorem 6, simulating existential moves . there is a transition ((U q i , σ), < σ, U, 1 A >, U q j ), symbolizing a move to universal state q j . For every existential state q j such that (q i , σ, q j ) ∈ δ, there is a transition ((U q i , σ), < σ, q j , 1 A >, Eq j ). Similarly, from a state Eq i , there exists a transition (Eq i , < σ, A 1 >, (Eq i , σ)) indicating to A 1 the letter chosen by A 2 . From state (Eq i , σ), there is a transition ((Eq i , σ), < σ, q j , A → 1 >, Eq j ) for every existential state q j such that (q i , σ, q j ) ∈ δ. There is also a transition ((Eq i , σ), < σ, U, 1 A >, U q j ) for every universal state q j such that (q i , σ, q j ) ∈ δ. Notice that the universal state reached is not detailed when A 1 sends the confirmation of a move to A. The remaining transitions are transitions of the form (Eq i , < End, A 1 >, S) and (U q i , < End, A 1 >, Sec) for every accepting state q i ∈ F . We also create transitions of the form Eq i , < End, A 1 >, Sec and U q i , < End, A 1 >, Sec for states that are not accepting. Reaching Sec indicates the failure to recognize a word chosen by A 1 along a path in which universal moves were played by A 1 and existential moves by A 2 . We define a agent u 1 s secret S 1 as the automaton that recognizes all words that allow A 1 to reach sate Sec. Now, we can prove that if a word w is accepted by A alt then the strategy in which A 2 chooses letter w i at its i t h passage through a letter choice state (U or Eq i ), existential transitions appearing in the accepting run of A alt , and then transition < end, 2 A > at the i + 1 th choice, is a strategy to force U q i U q i , σ U q j Eq j σ, 1 A σ, U, 1 A σ, q j , 1 A Eq i Eq j σ, A 1, q j (q j ∈ Q ∃ ) Eq i U q j σ, A 1, q j (q j ∈ Q ∀ ) Eq i Sec End, A 1 (q i ∈ F ) Eq i Sec End, A 1 (q i ∈ F ) U q i Sec End, A 1 (q i ∈ F ) U q i Sec End, A 1 (q i ∈ F ) Fig. 5: Automaton A 1 in the proof of theorem 6, simulating Universal moves . A 1 to reach the secret state. Conversely, one can associate to every run of A, A 1 , A 2 , a word w that is read, and a path in some run that is used to recognize w. 
If A 2 has a strategy to force A 1 secret leakage, then all paths following this strategy lead to a winning configuration. As a consequence, there is a choice of existential moves such that all states simulated along a run of the alternating automaton with these existential moves end in an accepting state. Hence, L(A alt ) is empty iff the system composed of A, A 1 , A 2 is opaque. Now, the system built to simulate A alt is of polynomial size in |A alt |, so there is a polynomial size reduction from the emptiness problem for alternating automata to the active opacity question, and active opacity is EXPTIME-complete.
PROOF OF LEMMA 15
Proof: One can first recall that for the Hamming and Levenshtein distances, we have d(u, v) = d(u -1 , v -1 ), where u -1 is the mirror of u. Similarly, we have d K (u, R) = d(u -1 [1,K] , R -1 ). Let G R = (Σ, Q, q 0 , δ, F ) be the automaton recognizing language R. We can build an automaton C k that recognizes words of length at least K, whose suffixes of length K are at Hamming distance at most k of suffixes of length K of words in R. We define C k = (Σ, Q suf k , q suf 0,k , δ suf k , F suf k ). This automaton can be computed as follows: first build G -1 R , the automaton that recognizes mirrors of suffixes of R. This can easily be done by setting as initial states the final states of R, and then reversing the transition relation. Then, by adding a K-bounded counter to states of G -1 R , and setting as accepting states the states of the form (q, K), we obtain an automaton B -1 that recognizes mirrors of suffixes of R of length K. Then, for every k ∈ [0..K], we can compute B k , the automaton that recognizes mirrors of words of length K that are at distance k from words in B -1 , by adding another counter to states that counts substitutions, and whose final states are of the form (q, K, k). Then we can build (by sequential composition of automata for instance) the automaton C k that reads any word in Σ * and then recognizes a word in (B k ) -1 .
Fig. 1: An automaton G and the automaton G 3 Ham that recognizes words at Hamming distance ≤ 3 of L(G).
Fig. 2: Evolution of suspicion w.r.t. the profile of Figure 1 when reading word w = a.a.a.c.b.b.a.c.b.a.a: distance d K (w [i..i+5] , P j ) at each letter of w (plain line), and evolution of the suspicion function (dashed line).
Proof (of Proposition 22): The winning path is of the form ρ = n 0 .n 1 . . . n b .n b+1 . . . n w . Let d b be the number of discrepancies in n 0 .n 1 . . . n b and λ b = d b /b. Player u j can choose any integer value B and enforce path ρ B = n 0 .n 1 . . . n b .β B . The mean number of discrepancies in ρ B is equal to (d b + B.d β )/(b + B.|β|), i.e. as B increases, this number tends towards M β .
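The effect of pumping the decreasing loop can be checked numerically. The small Python sketch below uses made-up values (not from the paper) for the discrepancies and lengths of the prefix, the loop β and the remaining suffix; it shows the mean of the pumped prefix tending towards M β and searches for the smallest B for which the overall mean at the end of the path stays below λ, a simplified version of the bound mentioned after Proposition 22.

```python
from itertools import count

# illustrative values: prefix n_0..n_b, loop beta, suffix n_{b+1}..n_w
d_b, b = 3, 5            # 3 discrepancies over 5 moves in the prefix
d_beta, len_beta = 1, 6  # 1 discrepancy over 6 moves in the loop (M_beta = 1/6)
d_suf, len_suf = 2, 4    # 2 discrepancies over 4 moves in the suffix
lam = 0.5                # threshold lambda

def mean_after(B):
    """Mean number of discrepancies of n_0..n_b . beta^B . n_{b+1}..n_w."""
    return (d_b + B * d_beta + d_suf) / (b + B * len_beta + len_suf)

# mean of the pumped prefix alone tends towards M_beta = d_beta / len_beta
for B in (0, 1, 5, 50):
    print(B, (d_b + B * d_beta) / (b + B * len_beta))

# smallest B keeping the end-of-path mean below lambda (exists since M_beta < lambda)
B_min = next(B for B in count() if mean_after(B) <= lam)
print("B_min =", B_min)
```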
s and ES is th set of pairs (C m , s m ) such that there exits a pair (C p , s p ) in ES, and a sequence ρ of transitions from C p to C m , labeled by a word w such that Π j (w) = σ, and one can move in S i from s p to s m by reading w. Note that this set of sequences needs not be finite, but one can find in O(|Conf |) the set of possible pairs that are accessible while reading σ.• (n, n ) ∈ δ G if n = (1, C, s, ES), n = (1, C , s , ES ) and there exists σ ∈ Σ j , a transition C σ -→ C in S, a transition (s, σ, s ) ∈-→ S i and ES is the set of pairs of the form (C m , s m ) such that there exists (C m , s m ) ∈ ES (C m , σ, C m ) ∈-→ and (s m , σ, s m • the current configuration C of S • the current state s of S i • an estimation ES of the system's configuration and secret's current state by u j ,ES j = {(C 1 , s 1 ), ...(C k , s k )}=⇒ C iff there exists a sequence of transitions of S which observation by u j is σ, and s from s to s in S i . Then we define moves among nodes as a relation δ We write C σ =⇒ S i s if there is such a sequence σ This entails that we assume that queries are faster than the rest of the system, i.e. not event can occur between a query and its answer. Hence we have L(S Γ ) ⊆ L(S) (Σ Γ .{tt.ff }) * . We could easily get rid of this hypothesis, by remembering in states of S Γ which query (if any) was sent by an user, and returning the answer at any moment.
81,439
[ "830540", "418", "959111" ]
[ "491208", "491208", "57241" ]
01758006
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01758006/file/EUCOMES2016_Nayak_Nurahmi_Caro_Wenger_HAL.pdf
Abhilash Nayak email: abhilash.nayak@irccyn.ec-nantes.fr Latifah Nurahmi email: latifah.nurahmi@gmail.com Philippe Wenger email: philippe.wenger@irccyn.ec-nantes.fr Stéphane Caro email: stephane.caro@irccyn.ec-nantes.fr Comparison of 3-RPS and 3-SPR Parallel Manipulators based on their Maximum Inscribed Singularity-free Circle Keywords: 3-RPS parallel manipulator, 3-SPR parallel manipulator, operation modes, singularity analysis, maximum inscribed circle radius 1 . Then, the parallel singularities of the 3-SPR and 3-RPS parallel manipulators are analyzed in order to trace their singularity loci in the orientation workspace. An index, named Maximum Inscribed Circle Radius (MICR), is defined to compare the two manipulators under study. It is based on their maximum singularity-free workspace and the ratio between their circum-radius of the movingplatform to that of the base. Introduction Zero torsion parallel mechanisms have proved to be interesting and versatile. In this regard, the three degree of freedom lower mobility 3-RPS parallel manipulator (PM) has many practical applications and has been analyzed by many researchers [START_REF] Schadlbauer | Husty : A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF][START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF]. Interchanging the free moving platform and the fixed base in 3-RPS manipulator results in the 3-SPR manipulator as shown in figure 1, retaining three degrees of freedom. The study of 3-SPR is limited in the literature. An optimization algorithm was used in [START_REF] Lukanin | Inverse Kinematics, Forward Kinematics and Working Space Determination of 3-DOF Parallel Manipulator with S-P-R Joint Structure[END_REF] to compute the forward and inverse kinematics of 3-SPR manipulator. After the workspace generation it is proved that the 3-SPR has a bigger working space volume compared to the 3-RPS manipulator. The orthogonality of rotation matrices is exploited in [START_REF] Lu | Position and Workspace Analysis of 3-SPR and 3-RPS Parallel Manipulators[END_REF] to perform the forward and inverse kinematics along with the simulations of 3-SPR mechanism. Control of a hydraulic actuated 3-SPR PM is demonstrated in [START_REF] Mark | Kinematic Modeling of a Hydraulically Actuated 3-SPR-Parallel Manipulator for an Adaptive Shell Structure[END_REF] with an interesting application on adaptive shell structure. This paper focuses on the comparison of kinematics and singularities of the 3-RPS and 3-SPR parallel manipulators and is organized as follows: initially, the de-sign of 3-SPR PM is detailed and the design of the 3-RPS PM is recalled. The second section describes the derivation of the constraint equations of the 3-SPR manipulator based on the algebraic geometry approach [START_REF] Schadlbauer | Husty : A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF][START_REF] Nurahmi | Operation modes and singularities of 3-PRS parallel manipulators with different arrangements if P-joints[END_REF]. The primary decomposition is computed over these constraint equations and it shows that the 3-SPR has identical operation modes as the 3-RPS PM. Moreover, the actuation and constraint singularities are described with singularity loci plots in the orientation workspace. Finally, an index called the singularity-free maximum inscribed circle radius is introduced to compare the maximum singularity free regions of 3-RPS and 3-SPR manipulators from their home position. 
In [START_REF] Briot | Singularity Analysis of Zero-Torsion Parallel Mechanisms[END_REF], maximum tilt angles for any azimuth of the 3-RPS PM are plotted for different ratios of the platform to base circum-radii. However, these plots correspond to only one operation mode, since the notion of operation modes was not considered in that paper. That being the case, this paper offers a complete singularity analysis in terms of MICR for both manipulators. These plots are useful in the design choice of a manipulator based on its platform to base circum-radii ratio and its operation modes.
Manipulator architectures
Fig. 1 3-SPR parallel manipulator. Fig. 2 3-RPS parallel manipulator.
Figure 1 shows a general pose of the 3-SPR parallel manipulator with three identical legs, each comprising a spherical, a prismatic and a revolute joint. The triangular base and the platform of the manipulator are equilateral. Σ0 is the fixed coordinate frame attached to the base, with its origin O0 coinciding with the circum-centre of the triangular base. The centres of the spherical joints, namely A1, A2 and A3, bound the triangular base. The x0-axis of Σ0 is taken along O0A1, which makes the y0-axis parallel to A2A3 and the z0-axis normal to the triangular base plane. h1 is the circum-radius of the triangular base. The moving platform is bounded by three points B1, B2 and B3 that lie on the revolute joint axes s1, s2 and s3. The moving coordinate frame Σ1 is attached to the moving platform; its x1-axis points from the origin O1 to B1, its y1-axis is orthogonal to the line segment B2B3 and its z1-axis is normal to the triangular platform. The circum-radius of this triangle, with Bi (i = 1, 2, 3) as vertices, is defined as h2. The prismatic joint of the i-th (i = 1, 2, 3) leg is always perpendicular to the revolute joint axis of the same leg. Hence the orthogonality of AiBi to si (i = 1, 2, 3), whatever the motion of the platform, is a constraint of the manipulator. The distance between the points Ai and Bi (i = 1, 2, 3) is defined by the prismatic joint variable ri. The architecture of the 3-SPR PM is similar to that of the 3-RPS PM except that the order of the joints in each leg is reversed. The architecture of the 3-RPS is recalled in figure 2, where the revolute joints are attached to the fixed triangular base with circum-radius h1 while the spherical joints are attached to the moving platform with circum-radius h2.
Constraint equations of the 3-SPR parallel manipulator
The homogeneous coordinates of Ai and Bi in the frames Σ0 and Σ1, respectively, are expressed as follows:
r^0_A1 = [1, h1, 0, 0]^T, r^0_A2 = [1, -h1/2, -(√3/2) h1, 0]^T, r^0_A3 = [1, -h1/2, (√3/2) h1, 0]^T
r^1_B1 = [1, h2, 0, 0]^T, r^1_B2 = [1, -h2/2, -(√3/2) h2, 0]^T, r^1_B3 = [1, -h2/2, (√3/2) h2, 0]^T (1)
To express the coordinates of Bi in the frame Σ0, a coordinate transformation matrix must be used.
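The point coordinates of Eq. (1) are easy to mistranscribe when re-implementing the model, so a short numerical sanity check can be useful. The following sketch (a minimal Python/numpy reading of Eq. (1); the values h1 = 1 and h2 = 2 are only the example dimensions used later for the singularity loci) builds both point sets and verifies that the two triangles are equilateral with the stated circum-radii.

```python
import numpy as np

def base_and_platform_points(h1, h2):
    """Homogeneous coordinates of Eq. (1): A_i in frame Sigma_0, B_i in frame Sigma_1."""
    s3 = np.sqrt(3.0)
    A = np.array([[1.0,  h1,        0.0,            0.0],
                  [1.0, -h1 / 2.0, -s3 * h1 / 2.0,  0.0],
                  [1.0, -h1 / 2.0,  s3 * h1 / 2.0,  0.0]])
    B = np.array([[1.0,  h2,        0.0,            0.0],
                  [1.0, -h2 / 2.0, -s3 * h2 / 2.0,  0.0],
                  [1.0, -h2 / 2.0,  s3 * h2 / 2.0,  0.0]])
    return A, B

# Quick check: both triangles are equilateral, with side length sqrt(3) times the circum-radius.
A, B = base_and_platform_points(h1=1.0, h2=2.0)
print(np.isclose(np.linalg.norm(A[0, 1:] - A[1, 1:]), np.sqrt(3.0) * 1.0))
print(np.isclose(np.linalg.norm(B[0, 1:] - B[1, 1:]), np.sqrt(3.0) * 2.0))
```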
In this context, the Study parametrization of a spatial Euclidean transformation matrix M ∈ SE(3) is utilized and is represented as: M = x 0 2 + x 1 2 + x 2 2 + x 3 2 0 T 3×1 M T M R , M T =     -2 x 0 y 1 + 2 x 1 y 0 -2 x 2 y 3 + 2 x 3 y 2 -2 x 0 y 2 + 2 x 1 y 3 + 2 x 2 y 0 -2 x 3 y 1 -2 x 0 y 3 -2 x 1 y 2 + 2 x 2 y 1 + 2 x 3 y 0     , M R =     x 0 2 + x 1 2 -x 2 2 -x 3 2 -2 x 0 x 3 + 2 x 1 x 2 2 x 0 x 2 + 2 x 1 x 3 2 x 0 x 3 + 2 x 1 x 2 x 0 2 -x 1 2 + x 2 2 -x 3 2 -2 x 0 x 1 + 2 x 3 x 2 -2 x 0 x 2 + 2 x 1 x 3 2 x 0 x 1 + 2 x 3 x 2 x 0 2 -x 1 2 -x 2 2 + x 3 2     (2) where M T and M R represent the translational and rotational parts of the transformation matrix M respectively. The parameters x i , y i , i ∈ {0, ..., 3} are called the Study parameters. Matrix M maps every displacement SE(3) to a point in a 7dimensional projective space P 7 and this mapping is known as Study s kinematic mapping. An Euclidean transformation will be represented by a point P∈ P 7 if and only if the following equation and inequality are satisfied: x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 ( 3 ) x 0 2 + x 1 2 + x 2 2 + x 3 2 = 0 (4) All the points that satisfy equation ( 3) belong to the 6-dimensional Study quadric. The points that do not satisfy the inequality (4) lie in the exceptional generator x 0 = x 1 = x 2 = x 3 = 0. To derive the constraint equations, we can express the direction of the vectors s 1 , s 2 and s 3 in homogeneous coordinates in frame Σ 1 as: s 1 1 = [1, 0, -1, 0] T , s 1 2 = [1, - 1 2 √ 3 , 1 2 , 0] T , s 1 3 = [1, 1 2 √ 3 , 1 2 , 0] T (5) In the fixed coordinate frame Σ 0 , B i and s i can be expressed using the transformation matrix M : r 0 B i = M r 1 B i ; s 0 i = M s 1 i i = 1, 2, 3 (6) As it is clear from the manipulator architecture, the vector along A i B i , namely r 0 B i -r 0 A i is orthogonal to the axis s i of the i-th revolute joint which after simplification yields the following three equations: (r 0 B i -r 0 A i ) T s i = 0 =⇒    g 1 := x 0 x 3 = 0 g 2 := h 1 x 1 2 -h 1 x 2 2 -2 x 0 y 1 + 2 x 1 y 0 + 2 x 2 -2 x 3 y 2 = 0 g 3 := 2 h 1 x 0 x 3 + h 1 x 1 x 2 + x 0 y 2 + x 1 y 3 -x 2 y 0 -x 3 y 1 = 0 (7) The actuation of prismatic joints leads to three additional constraint equations. The Euclidean distance between A i and B i must be equal to r i for the i-th leg of the manipulator. As a result, A i B i 2 = r 2 i leads to three additional equations g 4 = g 5 = g 6 = 0, which are quite lengthy and are not displayed in this paper due to space limitation. Two other equations are considered such that the solution represents a transformation in SE(3). The study-equation g 7 = 0 in Equation (3) constrains the solutions to lie on the Study quadric. g 8 = 0 is the normalization equation respecting the inequality [START_REF] Lu | Position and Workspace Analysis of 3-SPR and 3-RPS Parallel Manipulators[END_REF]. Solving these eight constraint equations provides the direct kinematic solutions for the 3-SPR parallel manipulator. g 7 := x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 ; g 8 := x 0 2 + x 1 2 + x 2 2 + x 3 2 -1 = 0 (8) Operation modes Algebraic geometry offers an organized and an effective methodology to deal with the eight constraint equations. 
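Equation (2) can be checked numerically before it is used to transform the platform points. The sketch below (a minimal Python/numpy reading of Eq. (2), with the matrix entries taken row-wise; the chosen x and y values are arbitrary illustrative numbers projected onto the Study quadric of Eq. (3)) builds M and confirms that its rotational block M_R is orthogonal once Eq. (3) holds and the parameters are normalized as in g8 = 0, so that homogeneous points and axis directions can be mapped to Σ0 as in Eq. (6).

```python
import numpy as np

def study_to_matrix(x, y):
    """4x4 homogeneous transform of Eq. (2) from Study parameters (x0..x3, y0..y3)."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    d = x0**2 + x1**2 + x2**2 + x3**2
    MT = np.array([-2*x0*y1 + 2*x1*y0 - 2*x2*y3 + 2*x3*y2,
                   -2*x0*y2 + 2*x1*y3 + 2*x2*y0 - 2*x3*y1,
                   -2*x0*y3 - 2*x1*y2 + 2*x2*y1 + 2*x3*y0])
    MR = np.array([[x0**2 + x1**2 - x2**2 - x3**2, -2*x0*x3 + 2*x1*x2,             2*x0*x2 + 2*x1*x3],
                   [ 2*x0*x3 + 2*x1*x2,             x0**2 - x1**2 + x2**2 - x3**2, -2*x0*x1 + 2*x3*x2],
                   [-2*x0*x2 + 2*x1*x3,             2*x0*x1 + 2*x3*x2,             x0**2 - x1**2 - x2**2 + x3**2]])
    M = np.eye(4)
    M[0, 0] = d
    M[1:, 0] = MT
    M[1:, 1:] = MR
    return M

# Illustrative Study point: x normalized (g8 = 0) and y projected onto the Study quadric (Eq. 3).
x = np.array([0.5, 0.5, 0.5, 0.5])
y = np.array([0.1, -0.2, 0.3, 0.0])
y = y - np.dot(x, y) * x                      # enforce x0*y0 + x1*y1 + x2*y2 + x3*y3 = 0
M = study_to_matrix(x, y)
R = M[1:, 1:]
print(np.allclose(R.T @ R, np.eye(3)))        # rotational block is orthogonal
```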
A polynomial ideal consisting of equations g i (i = 1, ..., 8) is defined with variables {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 } over the coefficient ring C[h 1 , h 2 , r 1 , r 2 , r 3 ] as follows: I =< g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 > (9) The vanishing set or the variety V (I) of this ideal I consists of the solution to direct kinematics as points in P 7 . However, in this context, only the number of operation modes are of concern irrespective of the joint variable values. Hence, the sub-ideal independent of the prismatic joint length, r i is considered: J =< g 1 , g 2 , g 3 , g 7 > (10) The primary decomposition of ideal J is calculated to obtain three simpler ideals J i (i = 1, 2, 3). The intersection of the resulting primary ideals returns the ideal J . From a geometrical viewpoint, the variety V (J ) can be written as the union of the varieties of the primary ideals V (J i ), i = 1, 2, 3 [START_REF] Cox | Shea: Ideals, Varieties, and Algorithms (Series: An Introduction to Computational Algebraic Geometry and Commutative Algebra[END_REF]. J = 3 i=1 J i or V (J ) = 3 i=1 V (J i ) (11) Among the three primary ideals obtained as a result of primary decomposition, it is important to note that J 1 and J 2 contain x 0 and x 3 as their first elements, respectively. The third ideal, J 3 is obtained as J 3 =< x 0 , x 1 , x 2 , x 3 > and is discarded as the variety V (J 3 ∪ g 8 ) is null over the field of interest C. As a result, the 3-SPR PM has two operation modes, represented by x 0 = 0 and x 3 = 0. In fact, g 1 = 0 in Equation [START_REF] Briot | Singularity Analysis of Zero-Torsion Parallel Mechanisms[END_REF] shows the presence of these two operation modes. It is noteworthy that the 3-RPS PM also has two operation modes as described in [START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF]. The analysis is completed by adding the remaining constraint equations to the primary ideals J 1 and J 2 . Accordingly, two ideals K 1 and K 2 are obtained. As a consequence, the ideals K i correspond to the two operation modes and can be studied separately. K i = J i ∪ < g 4 , g 5 , g 6 , g 8 > i = 1, 2 (12) The system of equations in the ideals K 1 and K 2 can be solved for a particular set of joint variables to obtain the Study parameters and hence the pose of the manipulator. These Study parameters can be substituted back in equation ( 2) to obtain the transformation matrix M. According to the theorem o f Chasles this matrix now rep-resents a discrete screw motion from the identity position (when the fixed frame Σ 0 and the moving frame Σ 1 coincide) to the moving-platform pose. The displacement about the corresponding discrete screw axis (DSA) defines the pose of the moving platform. 4.1 Ideal K 1 : Operation mode 1 : x 0 = 0 For operation mode 1, the moving platform is always found to be displaced about a DSA by 180 degrees [START_REF] Kong | Reconfiguration analysis of a 3-DOF parallel mechanism using Euler parameter quaternions and algebraic geometry method[END_REF]. Substituting x 0 = 0 and solving for y 0 , y 1 , y 3 from the ideal K 1 shows that the translational motions can be parametrized by y 2 and the rotational motions by x 1 , x 2 and x 3 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF]. 4.2 Ideal K 2 : Operation mode 2 : x 3 = 0 For operation mode 2, the moving platform is displaced about a DSA with a rotation angle α calculated from cos( α 2 ) = x 0 . 
It is interesting to note that the DSA in this case is always parallel to the xy-plane [START_REF] Kong | Reconfiguration analysis of a 3-DOF parallel mechanism using Euler parameter quaternions and algebraic geometry method[END_REF]. Substituting x 3 = 0 and solving for y 0 , y 2 , y 3 from the ideal K 2 shows that the translational motions can be parametrized by y 1 and the rotational motions by x 0 , x 1 and x 2 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF]. Singularity analysis The Jacobian of the 3-SPR manipulator in this context is defined as J i and the manipulator reaches a singular position when its determinant vanishes.: J i = ∂ g j ∂ x k , ∂ g j ∂ y k where i = 1, 2 ; j = 1, ..., 8 ; k = 0, ..., 3 (13) Actuation and constraint singularities Computing the determinant S i : det(J i ) results in a hyper-variety of degree 8 in both the operation modes: S 1 : x 3 • p 7 (x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ) = 0 and S 2 : x 0 • p 7 (x 0 , x 1 , x 2 , y 0 , y 1 , y 2 , y 3 ) = 0 (14) The 7 degree polynomials describe the actuation singularities when the prismatic joints are actuated and that exist within each operation mode whereas x 0 = x 3 = 0 describes the constraint singularity that exhibits the transition between K 1 and K 2 . Singularity Loci The actuation singularities can be expressed in the orientation workspace by parametrizing the orientation of the platform in terms of Euler angles. In particular, the Study parameters can be expressed in terms of the Euler angles azimuth (φ ), tilt (θ ) and torsion (ψ) [?]: x 0 = cos( θ 2 )cos( φ 2 + ψ 2 ) x 1 = sin( θ 2 )cos( φ 2 - ψ 2 ) x 2 = sin( θ 2 )sin( φ 2 - ψ 2 ) x 3 = cos( θ 2 )sin( φ 2 + ψ 2 ) (15) Since K 1 and K 2 are characterized by x 0 = 0 and x 3 = 0, substituting them in equation ( 15) makes the torsion angle (ψ) null, verifying the fact that, like its 3-RPS counterpart, the 3-SPR parallel manipulator is a zerotorsion manipulator. Accordingly, the x i parameters can be written in terms of tilt(θ ) and azimuth(φ ) only. The following method is used to calculate the determinant of J i in terms of θ , φ and Z, the altitude of the moving platform from the fixed base. The elements of the translational part M T of matrix M in equation ( 2) are considered as M T = [X,Y, Z] T that represent the translational displacement in the coordinate axes x, y and z respectively. Then, the constraint equations are derived in terms of X,Y, Z, x 0 , x 1 , x 2 , x 3 , r 1 , r 2 , r 3 . From these equations, the variables X,Y, r 1 , r 2 and r 3 are expressed as a function of Z and x i and are substituted in the determinant of the Jacobian. Finally, the corresponding x i are expressed in terms of Euler angles, which yields a single equation describing the actuation singularity of the 3-SPR PM in terms of Z, θ and φ . Fixing the value of Z and plotting the determinant of the Jacobian for φ ∈ [-180 0 , 180 0 ] and θ ∈ [0 0 , 180 0 ] depicts the singularity loci. The green curves in figure 3(a) and 3(b) show the singularity loci for operation mode 1 and operation mode 2 respectively with h 1 = 1, h 2 = 2 and Z = 1. Maximum Inscribed Circle Radius for 3-RPS and 3-PRS PMs From the home position of the manipulator (θ = φ = 0), a circle is drawn that has the maximum tilt value for any azimuth within the singularity-free region [START_REF] Briot | Singularity Analysis of Zero-Torsion Parallel Mechanisms[END_REF]. The radius of this circle is called the Maximum Inscribed Circle Radius (MICR). 
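The loci-tracing procedure described above lends itself to a simple grid scan. The sketch below (Python/numpy) implements Eq. (15) with the torsion ψ set to zero and samples a user-supplied determinant function over the orientation workspace at a fixed altitude Z; `det_fun` is a placeholder for the assembled Jacobian determinant of Eqs. (13)-(14) after eliminating X, Y and r1, r2, r3, since the lengthy equations g4-g6 are not reproduced in the paper.

```python
import numpy as np

def study_from_tilt_azimuth(theta, phi, psi=0.0):
    """Eq. (15): Study rotational parameters from azimuth phi, tilt theta and torsion psi (rad)."""
    x0 = np.cos(theta / 2) * np.cos(phi / 2 + psi / 2)
    x1 = np.sin(theta / 2) * np.cos(phi / 2 - psi / 2)
    x2 = np.sin(theta / 2) * np.sin(phi / 2 - psi / 2)
    x3 = np.cos(theta / 2) * np.sin(phi / 2 + psi / 2)
    return x0, x1, x2, x3

def singularity_grid(det_fun, Z, n_phi=361, n_theta=181):
    """Sample det(J) on a (phi, theta) grid; its zero level set is the singularity locus."""
    phis = np.radians(np.linspace(-180.0, 180.0, n_phi))
    thetas = np.radians(np.linspace(0.0, 180.0, n_theta))
    P, T = np.meshgrid(phis, thetas)
    return P, T, det_fun(T, P, Z)

# 'det_fun(theta, phi, Z)' stands for the assembled determinant described above;
# it is a placeholder here, not an expression reproduced from the paper.
```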
In Figure 3, the red circle denotes the maximum inscribed circle where the value of MICR is expressed in degrees. The MICR is used as a basis to compare the 3-SPR and the 3-RPS parallel manipulators as they are analogous to each other in aspects like number of operation modes and direct kinematics. The 3-SPR PM has higher MICR values and hence larger singularity free regions compared to that of the 3-RPS PM in compliance with [START_REF] Lukanin | Inverse Kinematics, Forward Kinematics and Working Space Determination of 3-DOF Parallel Manipulator with S-P-R Joint Structure[END_REF][START_REF] Lu | Position and Workspace Analysis of 3-SPR and 3-RPS Parallel Manipulators[END_REF]. For 3-RPS parallel manipulator, there exists rarely any difference in the MICR values for different operation modes whereas in 3-SPR PM, the second operation mode has higher values of MICR compared to operation mode 1. The values of MICR ranges from 0 0 to 130 0 in operation mode 1, but from 0 0 to 160 0 in operation mode 2 for 3-SPR PM. In addition, for 3-RPS PM, the ratio h 1 : h 2 influences operation mode 1 more than operation mode 2. It is apparent that the MICR values have a smaller range for different ratios in case of operation mode 2. On the contrary, for 3-SPR PM, high MICR values can be seen for operation mode 2, for lower ratios of h 1 : h 2 . Therefore, the MICR plots can be exploited in choosing the ratio of the platform to the base in accordance with the required application. Conclusions In this paper, 3-RPS and 3-SPR parallel manipulators were compared based on their operation modes and singularity-free workspace. Initially, the operation modes of the 3-SPR PM were enumerated. It turns out that the 3-SPR parallel manipulator has two operation modes similar to the 3-RPS PM. The parallel singularities were computed for both the manipulators and the singularity loci were plotted in their orientation workspace. Furthermore, an index called the singularity-free maximum inscribed circle radius was defined. MICR was plotted as a function of the Z coordinate of the moving-platform for different ratios of the platform circum-radius to the base circum-radius. It shows that, compared to the 3-RPS PM, the 3-SPR PM has higher MICR values and hence a larger singularity free workspace for a given altitude. For the ratios of the platform to base size, higher values of MICR are observed in operation mode 2 than in operation mode 1 for the 3-SPR mechanism and is viceversa for the 3-RPS mechanism. In fact, the singularity-free MICR curves open up many design possibilities for both mechanisms suited for a particular application. It will also be interesting to plot the MICR curves for constraint singularities and other actuation modes like 3-RPS and 3-SPR manipulators and to consider the parasitic motions of the moving-platform within the maximum inscribed circles. The investigation of MICR not started from the identity condition (θ = φ = 0 degrees) has to be considered too. Future work will deal with those issues. Fig. 3 3 - 3 Fig. 3 3-SPR singularity loci and the maximum inscribed singularity-free circle (a) Operation mode 1 (b) Operation mode 2 Z h 1 1 in Figures 4 and 5 . 115 vs MICR is plotted for different ratios of h 2 : h The maximum value of MICR is limited to 160 degrees for all the figures and Z h 1 varies from 0 to 4 while eight ratios of h 2 : h 1 are considered. The data cursor in Figures 5(a) and 5(b) correspond to the red circles with MICR = 25.22 and 30.38 degrees in Figures 3(a) and 3(b), respectively. 
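Given such a singularity function, the MICR itself can be estimated by sweeping rays of constant azimuth outward from the home position (θ = φ = 0) and recording the first tilt at which the determinant changes sign; the MICR is the smallest such tilt over all azimuths. A possible sketch follows (Python/numpy, with the same `det_fun` placeholder as above; the sign-change test only approximates the first singular tilt to the grid resolution).

```python
import numpy as np

def micr(det_fun, Z, n_phi=360, n_theta=721, theta_max=np.pi):
    """Maximum Inscribed Circle Radius (rad): smallest singular tilt over all azimuths,
    sweeping outwards from the home position theta = 0."""
    phis = np.linspace(-np.pi, np.pi, n_phi, endpoint=False)
    thetas = np.linspace(0.0, theta_max, n_theta)
    worst = theta_max
    for phi in phis:
        vals = det_fun(thetas, np.full_like(thetas, phi), Z)
        sign_change = np.nonzero(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
        if sign_change.size:
            worst = min(worst, thetas[sign_change[0]])
    return worst

# MICR in degrees for one altitude, with det_fun as defined by the elimination procedure above:
# print(np.degrees(micr(det_fun, Z=1.0)))
```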
The MICR plots give useful information on the design choice of 3-RPS or 3-SPR parallel manipulators.
Fig. 4 MICR vs. Z/h1 for the 3-RPS manipulator: (a) Operation mode 1, (b) Operation mode 2.
Fig. 5 MICR vs. Z/h1 for the 3-SPR manipulator: (a) Operation mode 1, (b) Operation mode 2.
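Design charts such as Figures 4 and 5 can then be generated by repeating the MICR computation over a range of altitudes and platform-to-base ratios. The sketch below is only an outline of that sweep: it reuses the `micr` helper from the previous sketch and assumes a hypothetical `det_fun_factory` that returns the singularity determinant for a given pair (h1, h2); the 160-degree cap mirrors the plotting limit used above.

```python
import numpy as np

def micr_chart(det_fun_factory, ratios, z_over_h1, h1=1.0, cap_deg=160.0):
    """Tabulate MICR (deg) versus Z/h1 for several platform-to-base ratios h2:h1."""
    table = {}
    for ratio in ratios:
        det_fun = det_fun_factory(h1=h1, h2=ratio * h1)     # one determinant per design
        row = [min(np.degrees(micr(det_fun, Z=z * h1)), cap_deg) for z in z_over_h1]
        table[ratio] = row
    return table

# Example sweep mirroring Figures 4 and 5: eight ratios, Z/h1 from 0 to 4.
# chart = micr_chart(det_fun_factory, ratios=np.linspace(0.5, 4.0, 8),
#                    z_over_h1=np.linspace(0.1, 4.0, 40))
```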
20,967
[ "1307880", "16879", "10659" ]
[ "111023", "473973", "481388" ]
01758038
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01758038/file/ARK2016_Gagliardini_Gouttefarde_Caro_DynamicFeasibleWorkspace.pdf
Lorenzo Gagliardini email: lorenzo.gagliardini@irt-jules-verne.fr Marc Gouttefarde email: marc.gouttefarde@lirmm.fr S Caro Determination of a Dynamic Feasible Workspace for Cable-Driven Parallel Robots Keywords: Cable-Driven Parallel Robots, Workspace Analysis, Dynamic Feasible Workspace come L'archive ouverte pluridisciplinaire Introduction Several industries, e.g. the naval and renewable energy industries, are facing the necessity to manufacture novel products of large dimensions and complex shapes. In order to ease the manufacturing of such products, the IRT Jules Verne promoted the investigation of new technologies. In this context, the CAROCA project aims at investigating the performance of Cable Driven Parallel Robots (CDPRs) to manufacture large products in cluttered industrial environments [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Gagliardini | A reconfigurable cable-driven parallel robot for sandblasting and painting of large structures[END_REF]. CDPRs are a particular class of parallel robots whose moving platform is connected to the robot fixed base frame by a number of cables as illustrated in Fig. 1. CDPRs have several advantages such as a high payload-to-weight ratio, a potentially very large workspace, and possibly reconfiguration capabilities. The equilibrium of the moving platform of a CDPR is classically investigated by analyzing the CDPR workspace. In serial and rigid-link parallel robots, the workspace is commonly defined as the set of end-effector poses where a number of kinematic constraints are satisfied. In CDPRs, the workspace is usually defined as the set of poses where the CDPR satisfies one or more conditions including the static or the dynamic equilibrium of the moving platform, with the additional constraint of non-negative cable tensions. Several workspaces and equilibrium conditions have been studied in the literature. The first investigations focused on the static equilibrium and the Wrench Closure Workspace (WCW) of the moving platform, e.g. [START_REF] Fattah | Workspace and design analysis of cable-suspended planar parallel robots[END_REF][START_REF] Gouttefarde | Analysis of the wrench-closure workspace of planar parallel cable-driven mechanisms[END_REF][START_REF] Roberts | On the inverse kinematics, statics, and fault tolerance of cable-suspended robots[END_REF][START_REF] Stump | Workspaces of cable-actuated parallel manipulators[END_REF][START_REF] Verhoeven | Advances in Robot Kinematics, chap. Estimating the controllable workspace of tendon-based Stewart platforms[END_REF]. Since cables can only pull on the moving platform, a pose belongs to the WCW if and only if any wrench can be applied by means of non-negative cable tensions. Feasible equilibria of the moving platform can also be analyzed using the Wrench Feasible Workspace (WFW) [START_REF] Bosscher | Wrench-feasible workspace generation for cabledriven robots[END_REF][START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Interval-analysis-based determination of the wrenchfeasible workspaceof parallel cable-driven robots[END_REF]. By definition, the WFW is the set of wrench feasible platform poses where a pose is wrench feasible when the cables can balance a given set of external moving platform wrenches while maintaining the cable tensions in between given lower and upper bounds. 
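The wrench-feasibility condition underlying the WFW can be tested pose by pose with a small linear-programming oracle. The sketch below (Python with numpy/scipy, assuming W is the 6 x m wrench matrix already evaluated at the pose of interest) checks whether a given wrench can be balanced with box-bounded cable tensions; it is only a feasibility check, whereas the cited literature uses dedicated methods such as the hyperplane shifting method discussed later in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def wrench_feasible(W, w, tau_min, tau_max):
    """True if the m cables can balance the external wrench w at this pose,
    i.e. there exists tau_min <= tau <= tau_max with  W @ tau + w = 0."""
    m = W.shape[1]
    res = linprog(c=np.zeros(m), A_eq=W, b_eq=-np.asarray(w, dtype=float),
                  bounds=[(tau_min, tau_max)] * m, method="highs")
    return res.success

def pose_in_wfw(W, wrench_vertices, tau_min, tau_max):
    """A pose is wrench feasible when every vertex of the required wrench set is feasible;
    checking the vertices suffices because the set of balanceable wrenches is convex."""
    return all(wrench_feasible(W, w, tau_min, tau_max) for w in wrench_vertices)
```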
The Static Feasible Workspace (SFW) is a special case of the WFW, where the sole wrench induced by the moving platform weight has to be balanced [START_REF] Pusey | Design and workspace analysis of a 6-6 cablesuspended parallel robot[END_REF]. The lower cable tension bound, τ min , is defined in order to prevent the cables from becoming slack. The upper cable tension bound, τ max , is defined in order to prevent the CDPR from being damaged. The dynamic equilibrium of the moving platform can be investigated by means of the Dynamic Feasible Workspace (DFW). By definition, the DFW is the set of dynamic feasible moving platform poses. A pose is said to be dynamic feasible if a prescribed set of moving platform accelerations is feasible, with cable tensions lying in between given lower and upper bounds. The concept of dynamic workspace has already been investigated in [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF] for planar CDPRs. Barrette et al. solved the dynamic equations of a planar CDPR analytically, providing the possibility to compute the boundary of the DFW. This strategy cannot be directly applied to spatial CDPRs due to the complexity of their dynamic model. In 2014, Kozlov studied in [START_REF] Kozlov | A graphical user interface for the design of cable-driven parallel robots[END_REF] the possibility to investigate the DFW by using a tool developed by Guay et al. for the analysis of the WFW [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. However, the dynamic model proposed by Kozlov considers the moving platform as a point mass, neglecting centrifugal and Coriolis forces. This paper deals with a more general definition of the DFW. With respect to the definitions proposed in [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF][START_REF] Kozlov | A graphical user interface for the design of cable-driven parallel robots[END_REF], the DFW considered in the present paper takes into account: (i) The inertia of the moving platform; (ii) The external wrenches applied on the moving platform; (iii) The centrifugal and the Coriolis forces corresponding to a given moving platform twist. The Required Wrench Set (RWS), defined here as the set of wrenches that the cables have to apply on the moving platform in order to satisfy its dynamic equilibrium, is calculated as the sum of these three contributions to the dynamic equilibrium. Then, the corresponding DFW is computed by means of the algorithm presented in [START_REF] Gouttefarde | Advances in Robot Kinematics, chap[END_REF] to analyze the WFW. Dynamic Model The CDPR dynamic model considered in this paper consists of the dynamics of the moving platform. A dynamic model taking into account the dynamics of the winches could also be considered but is not used here due to space limitations. Additionally, assuming that the diameters of the cables and the pulleys are small, the dynamics of the pulleys and the cables is neglected. 
The dynamic equilibrium of the moving platform is described by the following equation Wτ - I p p -C ṗ + w e + w g = 0 ( 1 ) where W is the wrench matrix that maps the cable tension vector τ into a platform wrench, and ṗ = ṫ ω p = ẗ α , (2) where ṫ = [ṫ x , ṫy , ṫz ] T and ẗ = [ẗ x , ẗy , ẗz ] T are the vectors of the moving platform linear velocity and acceleration, respectively, while ω = [ω x , ω y , ω z ] T and α = [α x , α y , α z ] T are the vectors of the moving platform angular velocity and acceleration, respectively. The external wrench w e is a 6-dimensional vector expressed in the fixed reference frame F b and takes the form w e = f T e , m T e T = [ f x , f y , f z , m x , m y , m z ] T (3) f x , f y and f z are the x, y and z components of the external force vector f e . m x , m y and m z are the x, y and z components of the external moment vector m e , respectively. The components of the external wrench w e are assumed to be bounded as follows f min ≤ f x , f y , f z ≤ f max (4) m min ≤ m x , m y , m z ≤ m max (5) According to ( 4) and ( 5), the set [w e ] r , called the Required External Wrench Set (REWS), that the cables have to balance is a hyper-rectangle. The Center of Mass (CoM) of the moving platform, G, may not coincide with the origin of the frame F p attached to the platform. The mass of the platform being denoted by M, the wrench w g due to the gravity acceleration g is defined as follows w g = MI 3 M Ŝp g ( 6 ) where I 3 is the 3 × 3 identity matrix, MS p = R [Mx p , My p , Mz p ] T is the first momentum of the moving platform defined with respect to frame F b . The vector S p = [x p , y p , z p ] T defines the position of G in frame F p . M Ŝp is the skew-symmetric matrix associated to MS p . The matrix I p represents the spatial inertia of the platform I p = MI 3 -M Ŝp M Ŝp I p (7) where I p is the inertia tensor matrix of the moving platform, which can be computed by the Huygens-Steiner theorem from the moving platform inertia tensor, I g , defined with respect to the platform CoM I p = RI g R T - M Ŝp M Ŝp M (8) R is the rotation matrix defining the moving platform orientation and C is the matrix of the centrifugal and Coriolis wrenches, defined as C ṗ = ω ωMS p ωI p ω ( 9 ) where ω is the skew-symmetric matrix associated to ω. 3 Dynamic Feasible Workspace Standard Dynamic Feasible Workspace Studies on the DFW have been realised by Barrette et al. in [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF]. The boundaries of the DFW have been computed for a generic planar CDPR developing the equations of its dynamic model. Since this method cannot be easily extended to spatial CDPRs, Kozlov proposed to use the method described in [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF] in order to compute the DFW of a fully constrained CDPR [START_REF] Kozlov | A graphical user interface for the design of cable-driven parallel robots[END_REF]. The proposed method takes into account the cable tension limits τ min and τ max in checking the feasibility of the dynamic equilibrium of the moving platform for the following bounded sets of accelerations ẗmin ≤ ẗ ≤ ẗmax (10) α min ≤ α ≤ α max (11) where ẗmin , ẗmax , α min , α max are the bounds on the moving platform linear and rotational accelerations. These required platform accelerations define the so-called Required Acceleration Set (RAS), [ p] r . 
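The terms of Eq. (1) that depend only on the pose and twist can be assembled in a few lines. The following sketch (Python/numpy; gravity is assumed along -z0 with g = 9.81 m/s2, and the inputs mirror the notation above: the mass M, S_p expressed in F_p, the inertia tensor I_g, the rotation matrix R and the angular velocity ω) builds the spatial inertia I_p of Eqs. (7)-(8), the gravity wrench w_g of Eq. (6) and the centrifugal/Coriolis wrench C·ṗ of Eq. (9).

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u = np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def platform_dynamics_terms(M, Sp_local, Ig, R, omega, g=np.array([0.0, 0.0, -9.81])):
    """Spatial inertia I_p (Eqs. 7-8), gravity wrench w_g (Eq. 6) and Coriolis/centrifugal
    wrench C*pdot (Eq. 9) of the moving platform, expressed in the base frame."""
    MSp = M * (R @ Sp_local)                              # first moment M*S_p in frame F_b
    MSp_hat = skew(MSp)
    Ip_small = R @ Ig @ R.T - MSp_hat @ MSp_hat / M       # Huygens-Steiner transport (Eq. 8)
    I_p = np.block([[M * np.eye(3), -MSp_hat],
                    [MSp_hat,        Ip_small]])          # spatial inertia (Eq. 7)
    w_g = np.concatenate([M * g, MSp_hat @ g])            # gravity wrench (Eq. 6)
    w_c = np.concatenate([skew(omega) @ skew(omega) @ MSp,
                          skew(omega) @ (Ip_small @ omega)])   # C*pdot (Eq. 9)
    return I_p, w_g, w_c
```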
The RAS can be projected into the wrench space by means of matrix I p , defined in [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF]. The set of wrenches [w d ] r generated by this linear mapping is defined as the Required Dynamic Wrench Set (RDWS). No external wrench is applied to the moving platform. Accordingly, the DFW is defined as follows Definition 1. A moving platform pose is said to be dynamic feasible when the moving platform of the CDPR can reach any acceleration included in [ p] r according to cable tension limits expressed by [τ] a . The Dynamic Feasible Workspace is then the set of dynamic feasible poses, [p] DFW . [ p] DFW = (t, R) ∈ R 3 × SO(3) : ∀ p ∈ [ p] r , ∃τ ∈ [τ] a s.t. Wτ -A p = 0 (12) In the definition above, the set of Admissible Cable Tensions (ACT) is defined as [τ] a = {τ | τ min ≤ τ i ≤ τ max , i = 1, . . . , m} (13) Improved Dynamic Feasible Workspace The DFW described in the previous section has several limitations. The main drawback is associated to the fact that the proposed DFW takes into account neither the external wrenches applied to the moving platform nor its weight. Furthermore, the model used to verify the dynamic equilibrium of the moving platform neglects the Coriolis and the centrifugal wrenches associated to the CDPR dynamic model. At a given moving platform pose, the cable tensions should compensate both the contribution associated to the REWS, [w e ] r , and the RDWS, [w d ] r . The components of the REWS are bounded according to (4) and ( 5) while the components of the RDWS are bounded according to [START_REF] Gouttefarde | Advances in Robot Kinematics, chap[END_REF] and [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. The dynamic equilibrium of the moving platform is described by [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF], where C is related to the Coriolis and centrifugal forces of the moving platform and w g to its weight. These terms depend only on the pose and the twist of the moving platform. For given moving-platform pose and twist, these terms are constant. Therefore, the DFW definition can be modified as follows. Definition 2. A moving platform pose is said to be dynamic feasible when, for a given twist ṗ, the CDPR can balance any external wrench w e included in [w e ] r , while the moving platform can assume any acceleration p included in [ p] r . The Dynamic Feasible Workspace is the set of dynamic feasible poses, [p] DFW . [p] DFW : ∀w e ∈ [w e ] r , ∀ p ∈ [ p] r , ∃τ ∈ [τ] a s.t. Wτ -I p p-C ṗ+w e +w g = 0 (14) In this definition, we may note that the feasibility conditions are expressed according to three wrench space sets. The first set, [w d ] r , can be computed by projecting the vertices of [ p] r into the wrench space. For a 3-dimensional case study (6 DoF case), [ p] r consists of 64 vertices. The second component, [w e ] r , consists of 64 vertices as well. Considering a constant moving platform twist, the last component of the dynamic equilibrium, w c = {C ṗ + w g }, is a constant wrench. The composition of these sets generates a polytope, [w] r , defined as the Required Wrench Set (RWS). [w] r can be computed as the convex hull of the Minkowski sum over [w e ] r , [w d ] r and w c , as illustrated in Fig. 2: [w] r = [w e ] r ⊕ [w d ] r ⊕ w c (15) Thus, Def. 2 can be rewritten as a function of [w] r . Definition 3. 
A moving platform pose is said to be dynamic feasible when the CDPR can balance any wrench w included in [w] r . The Dynamic Feasible Workspace is the set of dynamic feasible poses, [p] DFW . [p] DFW : ∀w ∈ [w] r , ∃τ ∈ [τ] a s.t. Wτ -I p p + w e + w c = 0 (16) The mathematical representation in ( 16) is similar to the one describing the WFW. As a matter of fact, from a geometrical point of view, a moving platform pose will be dynamic feasible if [w] r is fully included in [w] a [w] r ⊆ [w] a (17) Consequently, the dynamic feasibility of a pose can be verified by means of the hyperplane shifting method [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Advances in Robot Kinematics, chap[END_REF][START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. The distances between the facets of the avail- -100 N ≤ f x , f y , f z ≤ 100 N (19) -1 Nm ≤m x , m y , m z ≤ 1 Nm (20) Similarly, the range of accelerations of the moving platform is limited according to the following inequalities: -2 m/s 2 ≤ ẗx , ẗy , ẗz ≤ 2 m/s 2 (21) -0.1 rad/s 2 ≤α x , α y , α z ≤ 0.1 rad/s 2 (22) For the foregoing conditions, the improved DFW of the CDPR covers the 47.96% of its volume. Figure 4(a) illustrates the improved DFW of the CDPR under study. The results have been compared with respect to the dynamic feasibility conditions described by Def. 1. By considering only the weight and the inertia of the moving platform, the DFW covers the 63.27% of the volume occupied by the DFW, as shown in Fig. 4(b). Neglecting the effects of the external wrenches and the Coriolis forces, the volume of the DFW is 32% larger than the the volume of the improved DFW. Similarly, by neglecting the inertia of the CDPR and taking into account only the external wrenches w e , the WFW occupies the 79.25% of the CDPR volume. By taking into account only the weight of the moving platform, the SFW covers 99.32% of the CDPR volume. These results are summarized in Tab. 1. Conclusion This paper introduced an improved dynamic feasible workspace for cable-driven parallel robots. This novel workspace takes into account: (i) The inertia of the moving platform; (ii) The external wrenches applied on the moving platform and (iii) The centrifugal and the Coriolis forces induced by a constant moving platform twist. As an illustrative example, the static, wrench-feasible, dynamic and improved dynamic workspaces of a spatial suspended cable-driven parallel robot, with the dimensions of a prototype developed in the framework of the IRT JV CAROCA project, are traced. It turns out that the IDFW of the CDPR under study is respectively 1.32 times, 1.65 times and 2.07 times smaller than its DFW, WFW and SFW. Fig. 1 1 Fig. 1 Example of a CDPR design created in the framework of the IRT JV CAROCA project. Fig. 2 2 Fig. 2 Computation of the RWS [w] r . Example of a planar CDPR with 3 actuators and 2 translational DoF. Fig. 3 3 Fig.3Layout of the CoGiRo cable-suspended parallel robot[START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF] with the size of the IRT JV CAROCA prototype. Fig. 4 4 Fig. 4 (a) Improved DFW and (b) DFW of the CDPR under study covering 47.96% and 63.27% of its volume, respectively. Table 1 1 Comparison of SFW , W FW , DFW and IDFW of the CDPR under study. 
Workspace type               SFW      WFW      DFW      IDFW
Covered volume of the CDPR   99.32%   79.25%   63.27%   47.95%
Acknowledgements This research work is part of the CAROCA project managed by IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing Technologies for Composite, Metallic and Hybrid Structures). The authors wish to associate the industrial and academic partners of this project, namely, STX, DCNS, AIRBUS and CNRS.
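Returning to the required wrench set of Eq. (15) and the feasibility test of Def. 3, the construction can be outlined as follows (Python/numpy). This is only a structural sketch: the signs with which the acceleration box and the constant wrench w_c = C·ṗ + w_g enter must follow the convention of Eq. (1), and `feasible` stands for any wrench-feasibility oracle, e.g. the LP test sketched earlier for the WFW or the hyperplane shifting method used in the paper. The Minkowski sum is represented here by pairwise vertex sums; a convex hull of these points gives [w]_r as in Fig. 2.

```python
import numpy as np
from itertools import product

def box_vertices(lo, hi):
    """All 2^6 vertices of a 6-dimensional hyper-rectangle with bounds lo, hi."""
    return np.array([v for v in product(*zip(lo, hi))], dtype=float)

def required_wrench_set_vertices(I_p, acc_lo, acc_hi, we_lo, we_hi, w_c):
    """Vertex set generating [w]_r = [w_e]_r (+) [w_d]_r (+) w_c (Eq. 15), before the convex hull."""
    w_d = box_vertices(acc_lo, acc_hi) @ I_p.T            # RDWS: acceleration vertices mapped by I_p
    w_e = box_vertices(we_lo, we_hi)                      # REWS: external wrench vertices
    return np.array([wd + we_ + w_c for wd in w_d for we_ in w_e])

def pose_is_dynamic_feasible(W, rws_vertices, tau_min, tau_max, feasible):
    """Improved-DFW test of Def. 3: every vertex of [w]_r must be balanceable by the cables."""
    return all(feasible(W, w, tau_min, tau_max) for w in rws_vertices)
```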
17,565
[ "923232", "170861", "10659" ]
[ "235335", "388165", "481388" ]
01758077
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://hal.science/hal-01758077/file/ARK2016_Platis_Rasheed_Cardou_Caro.pdf
Angelos Platis Tahir Rasheed Philippe Cardou Stéphane Caro Isotropic Design of the Spherical Wrist of a Cable-Driven Parallel Robot Keywords: Parallel mechanism, cable-driven parallel robot, parallel spherical wrist, wrenches, dexterity Because of their mechanical properties, parallel mechanisms are most appropriate for large payload to weight ratio or high-speed tasks. Cable driven parallel robots (CDPRs) are designed to offer a large translation workspace, and can retain the other advantages of parallel mechanisms. One of the main drawbacks of CD-PRs is their inability to reach wide ranges of end-effector orientations. In order to overcome this problem, we introduce a parallel spherical wrist (PSW) end-effector actuated by cable-driven omni-wheels. In this paper we mainly focus on the description of the proposed design and on the appropriate placement of the omni-wheels on the wrist to maximize the robot dexterity. Introduction Several applications could benefit from CDPRs endowed with large orientation workspaces, such as entertainment and manipulation and storage of large and heavy parts. This component of the workspace is relatively small in existing CDPR designs.To resolve this problem, a parallel spherical wrist (PSW) end-effector is introduced and connected in series with the translational 3-DOF CDPR to provide an unbounded singularity-free orientation workspace. IRCCyN, École Centrale de Nantes, 1 rue de la Noë, 44321, Nantes, France, e-mail: {Angelos.Platis, Tahir.Rasheed}@eleves.ec-nantes.fr Laboratoire de robotique, Département de génie mécanique, Université Laval, Quebec City, QC, Canada. e-mail: pcardou@gmc.ulaval.ca CNRS-IRCCyN, 1 rue de la Noë, 44321, Nantes, France, e-mail: stephane.caro@irccyn.ecnantes.fr 1 This paper focuses on the kinematic design and analyis of a PSW actuated by the cables of a CDPR providing the robot independent translation and orientation workspaces. CDPRs are generally capable of providing a large 3-dofs translation workspace, normally needed four cables, which enable the user to control the point where all of them are concentrated [START_REF] Bahrami | Optimal design of a spatial four cable driven parallel manipulator[END_REF], [START_REF] Hadian | Kinematic isotropic configuration of spatial cable-driven parallel robots[END_REF]. Robots that can provide large orientation workspace have been developed using spherical wrist in the past few years that allows the end-effector to rotate with unlimited rolling, in addition to a limited pitch and yaw movements [START_REF] Bai | Modelling of a spherical robotic wrist with euler parameters[END_REF], [START_REF] Wu | Dynamic modeling and design optimization of a 3-dof spherical parallel manipulator[END_REF]. Eclipse II [START_REF] Kim | Eclipse-ii: a new parallel mechanism enabling continuous 360-degree spinning plus three-axis translational motions[END_REF] is an interesting robot that can provide unbounded 3-dofs translational motions, however its orientation workspace is constrained by structural interference and rotation limits of the spherical joints. Several robots have been developed in the past having decoupled translation and rotational motions. One interesting concept of such a robot is that of the Atlas Motion Platform [START_REF] Hayes | Atlas motion platform: Full-scale prototype[END_REF] developed for simulation applications. 
Another robot with translation motions decoupled from orientation motions can be found in [START_REF] Yime | A novel 6-dof parallel robot with decoupled translation and rotation[END_REF]. The decoupled kinematics are obtained using a triple spherical joint in conjunction with a 3-UPS parallel robot. In order to design a CDPR with a large orientation workspace, we introduce a parallel spherical wrist (PSW) end-effector actuated by cable-driven omni-wheels. In this paper we mainly focus on the description of the proposed design and on the appropriate placement of the omni-wheels on the wrist to maximize the robot dexterity. Manipulator Architecture The end-effector is a sphere supported by actuated omni-wheels as shown in Fig. 1. The wrist contians three passive ball joints at the bottom and three active omniwheels being driven through drums. Each cable makes several loops around each drum. Both ends are connected to two servo-actuated winches, which are fixed to the base. When two servo-actuated winches connected to the same cable turn in the same direction, the cable circulates and drives the drum and its associated omniwheel. When both servo-actuated winches turn in opposite directions, the length of the cable loop changes, and the sphere centre moves. To increase the translation workspace of the CDPR, another cable is attached, which has no participation in the omni-wheels rotation. The overall design of the manipulator is shown in Fig. 2. We have in total three frames. First, the CDPR base frame (F 0 ), which is described by its center O 0 having coordinates x 0 , y 0 , z 0 . Second, the PSW base frame (F 1 ), which has its center O 1 at the geometric center of the sphere and has coordinates x 1 , y 1 , z 1 . Third, the spherical end-effector frame (F 2 ) is attached to the end-effector. Its centre O 2 coincides with that of the PSW base frame (O 2 ≡ O 1 ) and its coordinates are x 2 , y 2 , z 2 . Exit points A i are the cable attachment points that link the cables to the base. All exit points are fixed and expressed in the CDPR reference frame F 0 . Anchor points B i are the platform attachment points. These points are not fixed as they depend to winch #1 to winch #2 actuated omni-wheel passive ball joint drum to winch #7 Fig. 1: Isotropic design of the parallel spherical wrist on the vector P, which is the vector that contains the pose of the moving platform expressed in the CDPR reference frame F 0 . The remaining part of the paper aims at finding the appropriate placement of the omni-wheels on the wrist to maximise the robot dexterity. Kinematic Analysis of the Parallel Spherical Wrist Parameterization To simplify the parameterization of the parallel spherical wrist, some assumptions are made. First, all the omni-wheels are supposed to be normal the sphere. Second, the contact points of the omni-wheels with the sphere lie in the base of an inverted cone where its end is the geometrical center of the sphere parametrized by angle α. x 0 y 0 z 0 O 0 F 0 x 1 y 1 z 1 z 2 x 2 y 2 O 1,2 F 1 F 2 A 1 A 2 A 3 A 4 B 1 B 2 B 3 B 4 1 2 3 4 5 6 7 Fig. 2: Concept idea of the manipulator Third, the three contact points form an equilateral triangle as shown in [START_REF] Hayes | Atlas motion platform: Full-scale prototype[END_REF][START_REF] Hayes | Atlas motion platform generalized kinematic model[END_REF]. Fourth, the angle between the tangent to the sphere and the actuation force produced by the ith actuated omni-wheel is named β i , i = 1, 2, 3, and β 1 = β 2 = β 3 = β . 
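The placement assumptions just listed can be fixed numerically before deriving the Jacobians. The sketch below (Python/numpy) is one plausible construction of the contact points G_i and of the tangent-plane directions used later: n_i is the outward unit normal, s_i is tangent to the circle of contact points and v_i is s_i rotated by β within the tangent plane Π_i. The sign conventions, such as which pole the cone opens towards and the positive sense of β, are assumptions rather than taken from the paper.

```python
import numpy as np

def contact_frames(alpha, beta, R=1.0, n_wheels=3):
    """Contact points G_i on the sphere and the unit vectors (n_i, s_i, v_i) of the tangent plane,
    with the G_i spaced by gamma = 2*pi/3 on a circle at cone angle alpha from the z-axis."""
    frames = []
    for i in range(n_wheels):
        gamma = 2.0 * np.pi * i / n_wheels                   # equilateral spacing of the G_i
        n = np.array([np.sin(alpha) * np.cos(gamma),
                      np.sin(alpha) * np.sin(gamma),
                      np.cos(alpha)])                        # outward unit normal at G_i
        G = R * n
        s = np.array([-np.sin(gamma), np.cos(gamma), 0.0])   # tangent to the circle of contact points
        w = np.cross(n, s)                                   # completes the tangent-plane basis
        v = np.cos(beta) * s + np.sin(beta) * w              # actuation direction, at angle beta from s
        frames.append((G, n, s, v))
    return frames
```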
Figure 3 illustrates the sphere, one actuated omni-wheel and the main design variables of the parallel spherical wrist. Π_i is the plane tangent to the sphere and passing through the contact point G_i between the actuated omni-wheel and the sphere. ω_i denotes the angular velocity vector of the i-th actuated omni-wheel. s_i is a unit vector along the tangent line T that is tangent to the base of the cone and coplanar with plane Π_i. w_i is a unit vector normal to s_i. f_ai depicts the transmission force lying in plane Π_i due to the actuated omni-wheel. α is the angle defining the altitude of the contact points G_i (α ∈ [0, π]). β is the angle between the unit vectors s_i and v_i (β ∈ [-π/2, π/2]). As the contact points G_i are the corners of an equilateral triangle, the angle between the contact point G_1 and the contact points G_2 and G_3 is equal to γ. R is the radius of the sphere. r_i is the radius of the i-th actuated omni-wheel. φ̇_i is the angular velocity of the omni-wheel. u_i, v_i, n_i are unit vectors at point G_i, and i, j, k are unit vectors along x_2, y_2, z_2, respectively. In order to analyze the kinematic performance of the parallel spherical wrist, an equivalent parallel robot (Fig. 4) having six virtual legs is presented, each leg having a spherical, a prismatic and another spherical joint connected in series. Three legs have an actuated prismatic joint (green), whereas the other three legs have a locked prismatic joint (red). Here, the kinematics of the spherical wrist is analyzed with screw theory and the equivalent parallel robot represented in Fig. 4.
Kinematic Modeling
Fig. 4(a) represents the three actuation forces f_ai, i = 1, 2, 3 and the three constraint forces f_ci, i = 1, 2, 3 exerted by the actuated omni-wheels on the sphere. The three constraint forces intersect at the geometric center of the sphere and prevent the latter from translating. The three actuation forces generated by the three actuated omni-wheels allow us to control the three-dof rotational motions of the sphere. Fig. 4(b) depicts a virtual leg corresponding to the effect of the i-th actuated omni-wheel on the sphere. The kinematic model of the PSW is obtained by using the theory of reciprocal screws [START_REF] Ball | A treatise on the theory of screws[END_REF][START_REF] Hunt | Kinematic geometry of mechanisms[END_REF] as follows:
A t = B φ̇ (1)
where t is the sphere twist and φ̇ = [φ̇_1, φ̇_2, φ̇_3]^T is the actuated omni-wheel angular velocity vector. A and B are respectively the forward and inverse kinematic Jacobian matrices of the PSW and take the form:
A = [A_rω  A_rp; 0_{3×3}  I_3] (2)    B = [I_3  0_{3×3}] (3)
A_rω = [R(n_1 × v_1)^T; R(n_2 × v_2)^T; R(n_3 × v_3)^T]  and  A_rp = [v_1^T; v_2^T; v_3^T] (4)
As the contact points on the sphere form an equilateral triangle, γ = 2π/3. As a consequence, the matrices A_rω and A_rp are expressed as functions of the design parameters α and β:
A_rω = (R/2) [ -2CαCβ, -2Sβ, 2SαCβ ; CαCβ + √3Sβ, Sβ - √3CαCβ, 2SαCβ ; CαCβ - √3Sβ, Sβ + √3CαCβ, 2SαCβ ] (5)
A_rp = (1/2) [ -2CαSβ, 2Cβ, 2SαSβ ; CαSβ - √3Cβ, -(√3CαSβ + Cβ), 2SαSβ ; CαSβ + √3Cβ, √3CαSβ - Cβ, 2SαSβ ] (6)
where C and S denote the cosine and sine functions, respectively.
Singularity Analysis
As matrix B cannot be rank deficient, the parallel spherical wrist meets singularities if and only if (iff) matrix A is singular. From Eqs.
( 5) and ( 6), matrix A is singular Ι 0 G 2 G 3 G 1 f a1 f a2 f a3 (a) β = ±π/2 0 G 2 G 3 G 1 f a1 f a2 f a3 (b) α = π/2 and β = 0 det(A) = 3 √ 3 2 R 3 SαCβ (1 -S 2 αC 2 β ) = 0 (7) namely, if α = 0 or π; if β = ±π/2; if α = π/2 and β = 0 or ±π. Figs. 5a and 5b represent two singular configurations of the parallel spherical wrist under study. The three actuation forces f a1 , f a2 and f a3 intersect at point I in Fig. 5a. The PSW reaches a parallel singularity and gains an infinitesimal rotation (uncontrolled motion) about an axis passing through points O and I in such a configuration. The three actuation forces f a1 , f a2 and f a3 are coplanar with plane (X 1 OY 1 ) in Fig. 5b. The PSW reaches a parallel singularity and gains two-dof infinitesimal rotations (uncontrolled motions) about an axes that are coplanar with plane (X 1 OY 1 ) in such a configuration. Kinematically Isotropic Wheel Configurations This section aims at finding a good placement of the actuated omni-wheels on the sphere with regard to the manipulator dexterity. The latter is evaluated by the condition number of reduced Jacobian matrix J ω = rA -1 rω which maps angular velocities of the omni-wheels φ to the required angular velocity of the end-effector ω. From Eqs. ( 5) and ( 6), the condition number κ F (α, β ) of J ω based on the Frobenius norm [START_REF] Angeles | Fundamentals of Robotic Mechanical Systems: Theory, Methods and Algorithms[END_REF] is expressed as follows: Figure 6 depicts the inverse condition number of matrix A based on the Frobenius norm as a function of angles α and β . κ F (α, β ) is a minimum when its partial derivatives with respect to α and β vanish, namely, κ F (α, β ) = 1 3 3S 2 αC 2 β + 1 S 2 αC 2 β (1 -S 2 αC 2 β ) (8) κα (α, β ) = ∂ κ ∂ α = Cα(3S 2 αC 2 β -1)(S 2 αC 2 β + 1) 18S 3 αC 2 β (S 2 αC 2 β -1) 2 κ = 0 (9) κβ (α, β ) = ∂ κ ∂ β = - Sβ (3S 2 αC 2 β -1)(S 2 αC 2 β + 1) 18S 2 αC 3 β (S 2 αC 2 β -1) 2 κ = 0 ( 10 ) and its Hessian matrix is semi-positive definite. As a result, κ F (α, β ) is a minimum and equal to 1 along the hippopede curve, which is shown in Fig. 6 and defined by the following equation: 3S 2 αC 2 β -1 = 0 [START_REF] Yime | A novel 6-dof parallel robot with decoupled translation and rotation[END_REF] This hippopede curve amounts to the isotropic loci of the parallel spherical wrist. Figure 7 illustrates some placements of the actuated omni-wheels on the sphere leading to kinematically isotropic wheel configurations in the parallel spherical wrist. It should be noted that the three singular values of matrix A rω are equal to the ratio between the sphere radius R and the actuated omni-wheel radius r along the hippopede curve, namely, the velocity amplification factors of the PSW are the same and constant along the hippopede curve. If the rotating sphere were to carry a camera, a laser or a jet of some sort, then the reachable orientations would be limited by interferences with the omni-wheels. α = 35.26 • , β = 0 • α = 65 • , β = 50.43 • α = 50 • , β = 41.1 • α = 80 • , β = 54.11 • Fig. 7: Kinematically isotropic wheel configurations in the parallel spherical wrist Therefore, a designer would be interested in choosing a small value of alpha, so as to maximize the field of view of the PSW. As a result, the following values have been assigned to the design parameters α and β : α = 35.26 • (12) β = 0 • (13) in order to come up with a kinematically isotropic wheel configuration in the parallel spherical wrist and a large field of view. 
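The closed-form blocks of Eqs. (5)-(8) are easy to verify numerically. The sketch below (Python/numpy) builds A_rω(α, β), checks the determinant factorization of Eq. (7), and evaluates the Frobenius-norm condition number, which equals that of J_ω = r·A_rω⁻¹ since the condition number is invariant to scaling and inversion; at α = 35.26°, β = 0°, a point of the hippopede curve (11), it returns a value close to 1, i.e. a kinematically isotropic configuration.

```python
import numpy as np

def A_r_omega(alpha, beta, R=1.0):
    """Rotational block of the forward Jacobian, Eq. (5); rows correspond to the three omni-wheels."""
    Ca, Sa, Cb, Sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    s3 = np.sqrt(3.0)
    return (R / 2.0) * np.array([
        [-2 * Ca * Cb,         -2 * Sb,              2 * Sa * Cb],
        [Ca * Cb + s3 * Sb,     Sb - s3 * Ca * Cb,   2 * Sa * Cb],
        [Ca * Cb - s3 * Sb,     Sb + s3 * Ca * Cb,   2 * Sa * Cb]])

def kappa_F(alpha, beta):
    """Frobenius condition number of A_r_omega (equal to that of J_omega = r * inv(A_r_omega))."""
    A = A_r_omega(alpha, beta)
    return np.linalg.norm(A, 'fro') * np.linalg.norm(np.linalg.inv(A), 'fro') / 3.0

alpha, beta = np.radians(35.26), 0.0          # on the hippopede curve 3*sin(alpha)^2*cos(beta)^2 = 1
Sa, Cb = np.sin(alpha), np.cos(beta)
det_formula = 1.5 * np.sqrt(3.0) * Sa * Cb * (1.0 - Sa**2 * Cb**2)      # Eq. (7) with R = 1
print(np.isclose(np.linalg.det(A_r_omega(alpha, beta)), det_formula))   # holds for any (alpha, beta)
print(kappa_F(alpha, beta))                   # ~1.0: kinematically isotropic configuration
```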
The actuated omni-wheels are mounted in pairs in order to ensure a good contact between them and the sphere. A CAD modeling of the final solution is represented in Fig. 1.
Conclusion
This paper presents the novel concept of mounting a parallel spherical wrist in series with a CDPR, while preserving a fully-parallel actuation scheme. As a result, the actuators always remain fixed to the base, thus avoiding the need to carry electric power to the end-effector and minimizing its size, weight and inertia. Another original contribution of this article is the determination of the kinematically isotropic wheel configurations in the parallel spherical wrist. These configurations allow the designer to obtain a very good primary image of the design choices. To our knowledge, these isotropic configurations were never reported before, although several researchers have studied and used omni-wheel-actuated spheres. Future work includes the development of a control scheme to drive the end-effector rotations while accounting for the displacements of its centre, and also making a small-scale prototype of the robot.
Fig. 3: Parameterization of the parallel spherical wrist.
Fig. 4: (a) Actuation and constraint wrenches applied on the end-effector of the spherical wrist; (b) virtual i-th leg with actuated prismatic joint.
Fig. 5: Singular configurations of the parallel spherical wrist.
Fig. 6: Inverse condition number of the forward Jacobian matrix A based on the Frobenius norm as a function of design parameters α and β.
15,554
[ "10659" ]
[ "111023", "111023", "473973", "109505", "481388" ]
01688104
en
[ "spi" ]
2024/03/05 22:32:10
2013
https://hal.science/hal-01688104/file/tagutchou_2013.pdf
J P Tagutchou Dr L Van De Steene F J Escudero Sanz S Salvador Gasification of Wood Char in Single and Mixed Atmospheres of H 2 O and CO 2 Keywords: biomass, gasification, kinetics, mixed atmosphere, reactivity In gasification processes, char-H 2 O and char-CO 2 are the main heterogenous reactions that are responsible for carbon conversion into H 2 and CO. These two reactions are generally looked at independently without considering interactions between them. The objective of this work was to compare kinetics of each reaction alone to kinetics of each reaction in a mixed atmosphere of H 2 O and CO 2 . A char particle was gasified in a macro thermo gravimetry reactor at 900 ı C successively in H 2 O/N 2 , CO 2 /N 2 , and H 2 O/CO 2 /N 2 atmospheres. INTRODUCTION The process of biomass conversion to syngas (H 2 C CO) involves a number of reactions. The first step is drying and devolatilization of the biomass, which leads to the formation of gas (noncondensable species), tar (gaseous condensable species), and a solid residue called char. Gas and tar are generally oxidized to produce H 2 O and CO 2 . The solid residue (the subject of this work) is converted to produce syngas (H 2 C CO) thanks to the following heterogeneous reactions: C C H 2 O ! CO C H 2 ; (1) C C CO 2 ! 2CO; (2) C C O 2 ! CO=CO 2 : (3) Many studies have been conducted on char gasification in reactive H 2 O, CO 2 , or O 2 atmospheres. The reactivity of char during gasification processes depends on the reaction temperature and on the concentration of the reactive gas. Additionally, these heterogeneous reactions are known to be surface reactions, involving a so-called "reactive surface." While the role of temperature and reactive gas partial pressure are relatively well understood, clearly defining and quantifying the reactive surface remains a challenge. The surface consists of active sites located at the surface of pores where the adsorption/desorption of gaseous molecules takes place. The difficulty involved in determining this surface can be explained by a number of physical and chemical phenomena that play an important role in the gasification process: (i) The whole porous surface of the char may not be accessible to the reactive gas, and may itself not be reactive. The pore size distribution directly influences the access of reactive gas molecules to active sites [START_REF] Roberts | A kinetic analysis of coal char gasification reactions at high pressures[END_REF]. It has been a common practice to use the total specific surface area measured using the standard BET test as the reactive surface. However, it has been established that a better indicator is the surface of only pores that are larger than several nm or tens of nm [START_REF] Commandré | The high temperature reaction of carbon with nitric oxide[END_REF]. (ii) As the char is heated to high temperatures, a reorganization of the structure occurs. The concentration of available active sites of carbon decreases and this has a negative impact on the reactivity of the char. This phenomenon is called thermal deactivation. (iii) The minerals present in the char have a catalytic effect on the reaction and help increase the reactivity of the char. Throughout the gasification process, there is a marked increase in the mass fraction of catalytic elements contained in the char with a decrease in the mass of the carbon. 
Due to the complexity of the phenomena and the difficulty to distinguish the influence of each phenomenon on reactivity, a surface function (referred to as SF in this article) is usually introduced in models to describe the gasification of carbon and to globally account for all of the physical phenomenon [START_REF] Sorensen | Determination of reactivity parameters of model carbons, cokes and flame-chars[END_REF][START_REF] Gobel | Dynamic modelling of char gasification in a fixed-bed[END_REF]. While single H 2 O and CO 2 atmospheres have been extensively studied, only a few authors have studied the gasification of a charcoal biomass in mixed atmospheres. Kinetic model classically proposed for the gasification of carbon residues is as follows: d m.t/ dt D R.t/:m.t/: (4) The reactivity of charcoal with a reactant j is often split into intrinsic reactivity r j , which only depends on temperature T and partial pressure p of the reactive gas, and the surface function F: R.t/ D F .X.t//:r j .T:p/: (5) As discussed above, the surface function F depends on many phenomena. In a simplifying approach, many authors express it as a function of the conversion X. METHODOLOGY Using a thermogravimetry (macro-TG) apparatus, gasification of char particles was characterized in three different reactive atmospheres: single H 2 O atmosphere, single CO 2 atmosphere, and a mixed atmosphere containing both CO 2 and H 2 O. Experimental Set-up The macro-TG reactor used in this work is described in detail in [START_REF] Mermoud | Influence of the pyrolysis heating rate on the steam gasification rate of large wood char particles[END_REF] N 2 -at a controlled temperature. The particles are continuously weighed to monitor conversion of the charcoal. The particles were left in the hot furnace swept by nitrogen and maintained until their weight stabilized, attesting to the removal of possible residual volatile matter or re-adsorbed species. The atmosphere then turned into a gasifying atmosphere, marking the beginning of the experiment. Preparation and Characterization of the Samples The material used in this study was charcoal from maritime pine wood chips. Charcoal was produced using a pilot scale screw pyrolysis reactor. The pyrolysis operating conditions were chosen to produce a char with high fixed carbon content, i.e., a temperature of 750 ı C, a 1 h residence time, and 15 kg/h of flow rate in a 200-mm internal diameter electrically heated screw. Based on previous studies, the heating rate in the reactor was estimated to be 50 ı C/min [START_REF] Fassinou | Pyrolysis of Pinus pinaster in a two-stage gasifier: Influence of processing parameters and thermal cracking of tar[END_REF]. After pyrolysis, samples with a controlled particle size were prepared by sieving, and the thickness of particles was subsequently measured using an electronic calliper. Particles with a thickness of 1.5 and 5.5 mm were selected for all the experiments. Table 1 lists the results of proximate and ultimate analysis of the charcoal particles. The amount of fixed carbon was close to 90%, attesting to the high quality of the charcoal. The amount of ash, a potential catalyzer, was 1.4%. GASIFICATION OF CHARCOAL IN SINGLE ATMOSPHERES Operating Conditions All experiments were carried out at a temperature of 900 ı C and at atmospheric total pressure. 
For each gasifying atmosphere, the mole fraction was chosen to cover values encountered in industrial reactors; experiments were performed at 10, 20, and 40% mole fraction for both H2O and CO2. In order to deal with the variability of the composition of biomass chips, each experiment was carried out with three to five particles in the grid basket. Care was taken to ensure there was no interaction between the particles. Each experiment was repeated at least three times. Results and Interpretations From the mass m(t) at any time, the conversion progress X was calculated according to Eq. (6): X(t) = (m0 - m(t)) / (m0 - m_ash), (6) where m0 and m_ash represent, respectively, the initial mass of the char and the mass of ash at the end of the process. Figure 2 shows the conversion progress versus time for all the experiments. For char-H2O experiments, good repeatability was observed. Before 50% conversion, dispersion was small (<5%), while after 50% conversion, it could reach 10%. An average gasification rate was calculated for each experiment at X = 0.5 as 0.5/t (in s^-1), t being the time needed to reach X = 0.5. It was 2.5 times larger in 40% steam than in 10% steam. For char-CO2 experiments, much larger dispersion was observed. It is difficult to give an explanation for this result. The gasification rate was 2.4 times higher in 40% CO2 than in 10% CO2. Moreover, the results revealed a strange evolution in 20% CO2: the reaction was considerably slowed down after 60% conversion. This was also observed by [START_REF] Standish | Gasification of single wood charcoal particles in CO 2[END_REF] during their experiments on gasification of charcoal particles in CO2 at a concentration of 20% CO2. At a given concentration (for instance 40%), steam gasification was on average three times faster than CO2 gasification. Determination of Surface Functions (SF) In practice, the SF can be derived without using a model by plotting R/R50 (where R50 is the reactivity at X = 50%). The reactivity R was obtained by differentiating the X curves. It was not possible to plot the values of the SF when X tends towards 1 because, by the end of the experiment, the decrease in mass was very small, leading to a signal/noise ratio too small to enable correct differentiation of the signal and calculation of R. At the beginning of the experiments, the derivative was also too noisy for accurate determination. Thus, for small values of X ranging from zero to 0.15, F(X) was assumed to be constant and equal to F(X = 0.15). In addition, from a theoretical point of view, F(X) should be determined using intrinsic values of R, i.e., from experiments in which no limitation by heat or mass transfer occurs. In practice, it has been shown in the literature that experiments with larger particles can be used [START_REF] Sorensen | Determination of reactivity parameters of model carbons, cokes and flame-chars[END_REF]. It is shown in Figure 3 that the results obtained for small particles (1.5 mm thickness) were similar to those for larger particles (5.5 mm thickness). All results are plotted as F(X) versus X in Figure 4 for the two reactant gases. For the atmospheres with 10 and 40% CO2, it is interesting to note that good repeatability was obtained for the SF even though the evolution of X over time showed poor repeatability. While the reactivity of the three samples differed, the SF remained the same.
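The data reduction behind these SF plots follows directly from Eq. (6) and the R/R50 normalisation described above. The sketch below is a minimal illustration of that reduction; it uses a synthetic mass-loss curve in place of a measured one, and real signals would additionally require smoothing before differentiation, which is omitted here.

```python
import numpy as np

def reduce_tg_data(t, m, m_ash):
    """From a macro-TG mass signal m(t), compute the conversion X (Eq. 6),
    the reactivity R = -(1/m) dm/dt, and the surface function F = R/R50.
    Measured signals are noisy and should be smoothed before differentiation."""
    m0 = m[0]
    X = (m0 - m) / (m0 - m_ash)                  # Eq. (6)
    R = -np.gradient(m, t) / np.maximum(m, 1e-12)
    R50 = np.interp(0.5, X, R)                   # reactivity at X = 0.5
    return X, R, R / R50                         # SF normalised to 1 at X = 0.5

# Synthetic mass-loss curve standing in for a measurement (1 g char, 1.4 % ash)
t = np.linspace(0.0, 3000.0, 601)
m = 0.014 + 0.986 * np.exp(-8e-4 * t)
X, R, F = reduce_tg_data(t, m, m_ash=0.014)
usable = (X >= 0.15) & (X <= 0.9)                # range exploited in this work
print("mean F over 0.15 <= X <= 0.9:", round(float(F[usable].mean()), 2))
```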
Conversely, in 20% CO2, the repeatability of the test appeared to be good in the X = f(t) plot (Figure 2), but the results led to quite different shapes for the SF after 60% conversion. An average value over the repeatability experiments was then determined and is plotted in Figure 5. From these results, polynomials were derived for F(X), as shown in Table 2. It was clearly observed that the 5th order was the most suitable to fit simultaneously all the experimental results of F(X) in the different atmospheres with the best correlation coefficients. The results show that, except in 20% CO2, the SF are monotonically increasing functions. For this representation, where the SF are normalized to 1 at X = 0.5, the plots indicate a small increase (from 0.6 to 1) when X increases from 0.1 to 0.5, and a very strong increase (to 4 or 5) when X tends towards 0.9. In experiments with 10, 20, and 40% H2O, the SF appeared not to be influenced by the concentration of steam. When CO2 was the gasifying agent, a strong influence of the concentration was observed, confirming the strange behavior observed in Figure 2 in 20% CO2. The function for 10% CO2 was similar to that of H2O (whatever the concentration). A decreasing SF was found with 20% CO2 for X between 0.6 and 0.75. This evolution has never previously been reported in the literature. Referring to the discussion about the phenomena that are taken into account in the SF, it is not possible to attribute this irregular shape to a physical phenomenon. Figure 6 plots several SF from the literature, normalized at X = 0.5 to enable comparison. Expressions such as α-order functions of (1 - X), and polynomial forms commonly used for biomass, were retained. The SF obtained in 10% H2O, which is similar to that obtained in 40% CO2, has been added to the figure. It can be observed that up to 50% conversion, most of the SF published in the literature are similar. At higher conversions, all SF follow an exponential-type function, but differ significantly in their rate of increase. The results of the authors' experiments (10% H2O) are within the range of values reported in the literature. GASIFICATION OF CHARCOAL IN H2O + CO2 ATMOSPHERES To investigate mixed atmospheres, experiments were conducted using 20% H2O with the addition of 10, 20, or 40% CO2. The results of conversion versus time are plotted in Figure 7. For each mixed atmosphere, the average results obtained in the single atmospheres are given as references. Rather good repeatability was observed. It can be seen that adding CO2 to H2O accelerated steam gasification. Indeed, mixing 10, 20, and 40% of CO2 with 20% of H2O increased the rate of gasification by 20, 33, and 57%, respectively, compared to the rate of gasification in 20% H2O alone. This is a new result, since studies on biomass gasification in the literature concluded that steam gasification was inhibited by CO2 [START_REF] Ollero | The CO 2 gasification kinetics of olive residue[END_REF]. In the 20% H2O + 10% CO2 atmosphere, the average gasification rate was 0.745 x 10^-3 s^-1, which is approximately equal to the sum of the gasification rates obtained in the two separate atmospheres: 0.740 x 10^-3 s^-1. This was also the case for the mixed atmosphere 20% H2O + 20% CO2. In the 20% H2O + 40% CO2 atmosphere, the average gasification rate was 1.19 x 10^-3 s^-1, i.e., 20% higher than the sum of the gasification rates obtained in the two single atmospheres.
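The additivity comparison just described can be restated in a few lines of code. The sketch below only reuses the average rates quoted above; the sum of single-atmosphere rates for the 40% CO2 case is back-computed from the quoted +20% deviation and is therefore an assumption of this illustration, not an additional measurement.

```python
# Comparison of measured mixed-atmosphere gasification rates with the sum of the
# single-atmosphere rates, using the average values quoted above (in s^-1).
cases = {
    "20% H2O + 10% CO2": {"measured": 0.745e-3, "sum_single": 0.740e-3},
    "20% H2O + 40% CO2": {"measured": 1.19e-3,  "sum_single": 1.19e-3 / 1.20},
}
for name, c in cases.items():
    deviation = 100.0 * (c["measured"] / c["sum_single"] - 1.0)
    print(f"{name}: measured {c['measured']:.3e} s^-1, "
          f"sum of single atmospheres {c['sum_single']:.3e} s^-1, "
          f"deviation {deviation:+.0f}%")
```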
In other words, cooperation between CO2 and H2O led to unexpected behaviors. A number of considerations can help interpret this result. First, the geometrical structure of the two molecules (polar and non-linear for H2O, linear and apolar for CO2) predestines them to different adsorption mechanisms on potentially different active carbon sites [START_REF] Slasli | Modelling of water adsorption by activated carbons: Effects of microporous structure and oxygen content[END_REF]. The presence of hydrophilic oxygen, such as [-O], at the surface of the char leads to the formation of hydrogen bonds, which could hinder H2O adsorption and favor that of CO2 [START_REF] Stoeckli | The characterization of microporosity in carbons with molecular sieve effects[END_REF]. In the same way, as a non-organic molecule, H2O can only access hydrophobic sites, while CO2, which is an organic molecule, can access both hydrophilic and hydrophobic sites. According to [START_REF] Stoeckli | The characterization of microporosity in carbons with molecular sieve effects[END_REF], due to constriction or molecular sieve effects, CO2 molecules have access to the micropores of materials while those of H2O, which are assumed to be bigger, do not. For one of the previous reasons or for any other reason, CO2 molecules can access internal micropores more easily than H2O molecules, and can therefore open certain pores, making them accessible to H2O molecules. The assumption that H2O and CO2 molecules react with different sites and that no competition occurs is not sufficient to explain the 20% increase in the gasification rate under mixed atmospheres. The pore-opening effect described above can be proposed as an explanation, but a more precise explanation requires further research work. [START_REF] Roberts | Char gasification in mixtures of CO 2 and H 2 O: Competition and inhibition[END_REF] recently concluded that CO2 has an inhibitory effect on H2O gasification, in contradiction to the authors' results. It is believed that the conclusions of [START_REF] Roberts | Char gasification in mixtures of CO 2 and H 2 O: Competition and inhibition[END_REF] are valid in their experimental conditions only, and with a little hindsight may be called into question. Figure 8 gives the plots of the SF obtained with the three mixed atmospheres and for all repeatability tests. Again, the repeatability of the experiments was excellent until X = 0.6; this attests to the good quality of the experiments and confirms that the variations in the SF after 60% conversion are due to specific phenomena. Figure 9 compares all the average SF obtained in mixed atmospheres. These curves show that the SF remains similar when the amount of CO2 is modified from 10 to 40%. Thus, an average 5th-order polynomial expression for the mixed atmosphere is given in Eq. (7): F(X) = 130.14 X^5 - 264.67 X^4 + 192.38 X^3 - 57.90 X^2 + 7.28 X + 0.25. (7) CONCLUSION The gasification of wood char particles in three atmospheres, i.e., H2O, CO2, and H2O/CO2, was experimentally investigated. The formulation adopted makes it possible to split the reactivity R(t) into kinetic parameters, r_j, and all physical aspects, i.e., reactive surface evolution, thermal annealing, and catalytic effects, gathered into a surface function SF, F(X), as follows: dm(t)/dt = -R(t) · m(t), with R(t) = F(X(t)) · r_j(T, p). The repeatability of the derived SF was always very good until X = 0.6, which attests to the good quality of the experiments. For higher values of X, significant dispersion was observed, despite the use of several particles for each experiment.
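For reference, the averaged mixed-atmosphere surface function of Eq. (7) can be evaluated directly; the short sketch below simply tabulates it over the conversion range in which it was derived, and is given for convenience rather than as part of the original data processing.

```python
import numpy as np

# Coefficients of the averaged mixed-atmosphere surface function, Eq. (7),
# ordered from the X^5 term down to the constant term.
EQ7_COEFFS = [130.14, -264.67, 192.38, -57.90, 7.28, 0.25]

def F_mixed(X):
    return np.polyval(EQ7_COEFFS, X)

for X in np.arange(0.1, 0.91, 0.2):
    print(f"X = {X:.1f}  ->  F(X) = {F_mixed(X):.2f}")
# F(0.5) is close to 1 by construction, since the SF is normalised at X = 0.5.
```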
The SF depends on the nature of the reactant gas and, in the case of CO2, on the concentration of the gas. An SF that surprisingly decreased with increasing X in the range 0.6-0.75 was obtained in a CO2 atmosphere in this work. An important result of this article is that the addition of CO2 to a H2O atmosphere led to an acceleration of the gasification kinetics. In a mixture of 20% H2O and 40% CO2, the gasification rate was 20% higher than the sum of the gasification rates in the two single atmospheres.
FIGURE 1 Macro-thermogravimetry experimental apparatus. (1) Electric furnace; (2) Quartz tube; (3) Extractor; (4) Preheater; (5) Evaporator; (6) Water feeding system; (7) Water flow rate; (8) Leakage compensation; (9) Suspension basket; (10) Weighing system; (T i) Regulation thermocouples; (M i) Mass flow meter.
FIGURE 2 Conversion progress versus time during gasification at 900 °C in single atmospheres (10, 20, and 40% H2O and 10, 20, and 40% CO2). (color figure available online)
FIGURE 3 SF for the two cases of 1.5 mm and 5.5 mm particles in steam atmosphere. (color figure available online)
FIGURE 4 SF for each experimental result obtained in a single atmosphere.
FIGURE 5 Average SF obtained in each single atmosphere. (color figure available online)
FIGURE 7 Experimental results obtained in mixed atmospheres (A: 20% H2O and 10% CO2; B: 20% H2O and 20% CO2; and C: 20% H2O and 40% CO2). For each mixed atmosphere, the corresponding average experimental results for single atmospheres are shown as a thick solid line (20% H2O single atmosphere) and as thick dashed lines (CO2 single atmospheres). (color figure available online)
FIGURE 8 SF obtained in different mixed atmospheres for all experimental repeatability tests.
FIGURE 9 Average SF obtained in the different mixed atmospheres.
TABLE 1 Proximate and Ultimate Analysis of Charcoal from Maritime Pine Wood Chips
Proximate analysis (mass %): M 1.8; VM (dry) 4.9; FC (dry) 93.7; Ash (dry) 1.4.
Ultimate analysis (mass %): C 89.8 ±0.3; H 2.2 ±0.3; O 6.1 ±0.3; N 0.1 ±0.1; S 0.01 ±0.005.
M: Moisture content; VM: Volatile matter; FC: Fixed carbon.
19,792
[ "996971", "19516", "17552" ]
[ "11574", "11574", "242220", "242220" ]
01758141
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01758141/file/DesignRCDPRs_Gagliardini_Gouttefarde_Caro_Final_HAL.pdf
Lorenzo Gagliardini email: lorenzo.gagliardini.at.work@gmail.com Marc Gouttefarde email: marc.gouttefarde@lirmm.fr Stéphane Caro email: stephane.caro@ls2n.fr Design of Reconfigurable Cable-Driven Parallel Robots This chapter is dedicated to the design of Reconfigurable Cable-Driven Parallel Robots (RCDPRs) where the locations of the cable exit points on the base frame can be selected from a finite set of possible values. A task-based design strategy for discrete RCDPRs is formulated. By taking into account the working environment, the designer divides the prescribed workspace or trajectory into parts. Each part shall be covered by one configuration of the RCDPR. Placing the cable exit points on a grid of possible locations, numerous CDPR configurations can be generated. All the possible configurations are analysed with respect to a set of constraints in order to determine the parts of the prescribed workspace or trajectory that can be covered. The considered constraints account for cable interferences, cable collisions, and wrench feasibility. The configurations satisfying the constraints are then compared in order to find the combinations of configurations that accomplish the required task while optimising one or several objective function(s). A case study comprising the design of a RCDPR for sandblasting and painting of a three-dimensional tubular structure is finally presented. Cable exit points are reconfigured, switching from one side of the tubular structure to another, until three external sides of the structure are covered. The optimisation includes the minimisation of the number of cable attachment/detachment operations required to switch from one configuration to another one, minimisation of the size of the RCDPR, and the maximisation of the RCDPR stiffness. Introduction Cable-Driven Parallel Robots (CDPRs) form a particular class of parallel robots whose moving platform is connected to a fixed base frame by cables. Hereafter, the connection points between the cables and the base frame will be referred to as exit points. The cables are coiled on motorised winches. Passive pulleys may guide the cables from the winches to the exit points. A central control system coordinates the motors actuating the winches. Thereby, the pose and the motion of the moving platform are controlled by modifying the cable lengths. An example of CDPR is shown in Fig. 1. CDPRs have several advantages such as a relatively low mass of moving parts, a potentially very large workspace due to size scalibility, and reconfiguration capabilities. Therefore, they can be used in several applications, e.g. 
heavy payload handling and airplane painting [START_REF] Albus | The NIST spider, a robot crane[END_REF], cargo handling [START_REF] Holland | Cable array robot for material handling[END_REF], warehouse applications [START_REF] Hassan | Analysis of large-workspace cable-actuated manipulator for warehousing applications[END_REF], large-scale assembly and handling operations [START_REF] Pott | Large-scale assembly of solar power plants with parallel cable robots[END_REF][START_REF] Williams | Contour-crafting-cartesian-cable robot system concepts: Workspace and stiffness comparisons[END_REF], and fast pick-and-place operations [START_REF] Kamawura | High-speed manipulation by using parallel wire-driven robots[END_REF][START_REF] Maeda | On design of a redundant wire-driven parallel robot WARP manipulator[END_REF][START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF]. Other possible applications include the broadcasting of sporting events, haptic devices [START_REF] Fortin-Coté | An admittance control scheme for haptic interfaces based on cable-driven parallel mechanisms[END_REF][START_REF] Gallina | 3-DOF wire driven planar haptic interface[END_REF][START_REF] Rosati | Design, implementation and clinical test of a wire-based robot for neurorehabilitation[END_REF], support structures for giant telescopes [START_REF] Yao | Dimensional optimization design for the four-cable driven parallel manipulator in FAST[END_REF][START_REF] Yao | A modeling method of the cable driven parallel manipulator for FAST[END_REF], and search and rescue deployable platforms [START_REF] Merlet | Kinematics of the wire-driven parallel robot MARIONET using linear actuators[END_REF][START_REF] Merlet | A portable, modular parallel wire crane for rescue operations[END_REF]. Recent studies have been performed within the framework of an ANR Project CoGiRo [2] where an efficient cable layout has been proposed [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF] and used on a large CDPR prototype called CoGiRo. CDPRs can be used successfully if the tasks to be fulfilled are simple and the working environment is not cluttered. When these conditions are not satisfied, Reconfigurable Cable-Driven Parallel Robots (RCDPRs) may be required to achieve the prescribed goal. In general, several parameters can be reconfigured, as described in Section 2. Moreover, these reconfiguration parameters can be selected in a discrete or a continuous set of possible values. Preliminary studies on RCDPRs were performed in the context of the NIST RoboCrane project [START_REF] Bostelman | Cable-based reconfigurable machines for large scale manufacturing[END_REF]. Izard et al. [START_REF] Izard | A reconfigurable robot for cable-driven parallel robotic research and industrial scenario proofing[END_REF] also studied a family of RCDPRs for industrial applications. Rosati et al. [START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF][START_REF] Zanotto | Sophia-3: A semiadaptive cable-driven rehabilitation device with a tilting working plane[END_REF] and Zhou et al. [START_REF] Zhou | Tension distribution shaping via reconfigurable attachment in planar mobile cable robots[END_REF][START_REF] Zhou | Stiffness modulation exploiting configuration redundancy in mobile cable robots[END_REF] focused their work on planar RCDPRs. Recently, Nguyen et al. 
[START_REF] Nguyen | On the analysis of large-dimension reconfigurable suspended cable-driven parallel robots[END_REF][START_REF] Nguyen | Study of reconfigurable suspended cable-driven parallel robots for airplane maintenance[END_REF] proposed reconfiguration strategies for large-dimension suspended CDPRs mounted on overhead bridge cranes. Contrary to these antecedent studies, this chapter considers discrete reconfigurations where the locations of the cable exit points are selected from a finite set (grid) of possible values. Hereafter, reconfigurations are limited to the cable exit point locations and the class of RCDPRs whose exit points can be placed on a grid of positions is defined as discrete RCDPRs. Figure 2 shows the prototype of a reconfigurable cable-driven parallel robot developed at IRT Jules Verne within the framework of CAROCA project. This prototype is reconfigurable for the purpose of being used for industrial operations in a cluttered environment. Indeed, its pulleys can be displaced onto the robot frame faces such that the collisions between the cables and the environment can be avoided during operation. The prototype has eight cables, can work in both suspended and fully constrained configurations and can carry up to 400 kg payloads. It contains eight motor-geardhead-winch sets. The nominal torque and velocity of each motor are equal to 15.34 Nm and 2200 rpm, respectively. The ratio of the twp-stage gearheads is equal to 40. The diameter of the Huchez TM industrial winches is equal to 120 mm. The CAROCA prototype is also equipped with 6 mm non-rotating steel cables and a B&R control board using Ethernet Powerlink TM communication. To the best of our knowledge, no design strategy has been formulated in the literature for discrete RCDPRs. Hence, Section 4 presents a novel task-based design strategy for discrete RCDPRs. By taking into account the working environment, the designer divides the prescribed workspace or trajectory into n t parts. Each part will be covered by one and only one configuration of the RCDPR. Then, for each configuration, the designer selects a cable layout, parametrising the position of the cable exit points. The grid of locations where the cable exit points can be located is defined by the designer as well. Placing the exit points on the provided set of possible locations, it is possible to generate many CDPR configurations. All the possible configurations are analysed with respect to a set of constraints in order to verify which parts of the prescribed workspace or trajectory can be covered. The configurations satisfying the constraints are compared in order to find the combinations of n t configurations that accomplish the required task and optimise at the same time one or several objective function(s). A set of objective functions, dedicated to RCD-PRs, is provided in Section 4.2. These objective functions aim at maximising the productivity (production cycle time) and reducing the reconfiguration time of the cable exit points. Let us note that if the design strategy introduced in Section 4 does not produce satisfactory results, the more advanced but complex method recently introduced by the authors in [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF] can be considered. In order to analyse the advantages and limitations of the proposed design strategy, a case study is presented in Section 5. It involves the design of an RCDPR for sandblasting and painting of a three-dimensional tubular structure. 
The tools performing these operations are embarked on the RCDPR moving platform, which follows the profile of the tubular structure. Each side of the tubular structure is associated to a single configuration. Cable exit points are reconfigured switching from one side of the tubular structure to another, until three external sides of the structure are sandblasted and painted. The cable exit point locations of the three configurations to be designed are optimised so that the number of cable attachment/detachment operations required to switch from a configuration to another is minimised. The size of the RCDPR is also minimised while its stiffness is maximised along the trajectories to be followed. Classes of RCDPRs CDPRs usually consist of several standard components: A fixed base, a moving platform, a set of m cables connecting the moving platform to the fixed base through a set of pulleys, a set of m winches, gearboxes and actuators, and a set of internal and external sensors. These components are usually dimensioned in such a way that the geometry of the CDPR does not vary during the task. However, by modifying the CDPR geometry, the capabilities of CDPRs can be improved. RCDPRs are then defined as CDPRs whose geometry can be adapted by reconfiguring part of their components. RCDPRs can then be classified according to the components, which are reconfigured and the nature of the reconfigurations. Fig. 3: CableBot designs with cable exit points fixed to a grid (left) and with cable exit points sliding on rails (right). Courtesy of the European FP7 Project CableBot. Reconfigurable Elements and Technological Solutions Part of the components of an RCDPR may be reconfigured in order to improve its performances. The geometry of the RCDPRs is mostly dependent on the locations of the cable exit points, the locations of the cable attachment points on the moving platform, and the number of cables. The locations of the cable exit points A i , i = 1, . . . , m have to be reconfigured to avoid cable collisions when the environment is strongly cluttered. Indeed, modifying the cable exit point locations can increase the RCDPR workspace size. Furthermore, the reconfiguration of cable exit points provides the possibility to modify the layout of the cables and improve the performance of the RCDPR (such as its stiffness). From a technological point of view, the cable exit points A i are displaced by moving the pulleys orienting the cables and guiding them to the moving platform. Pulleys are connected on the base of the RCDPR. They can be displaced by sliding them on linear guides or fixing them on a grid of locations, as proposed in the concepts of Fig. 3. These concepts have been developed in the framework of the European FP7 Project CableBot [7, [START_REF] Nguyen | On the study of large-dimension reconfigurable cable-driven parallel robots[END_REF][START_REF] Blanchet | Contribution à la modélisation de robots à câbles pour leur commande et leur conception[END_REF]. Alternatively, pulleys can be connected to several terrestrial or aerial unmanned vehicles, as proposed in [START_REF] Jiang | The inverse kinematics of cooperative transport with multiple aerial robots[END_REF][START_REF] Manubens | Motion planning for 6D manipulation with aerial towed-cable systems[END_REF][START_REF] Zhou | Analysis framework for cooperating mobile cable robots[END_REF]. The geometry of the RCDPR and the cable layout can be modified as well by displacing the cable anchor points on the moving platform, B i , i = 1, . . . , m. 
Changing the locations of points B i allows the stiffness of the RCDPR as well as its wrench (forces and moments) capabilities to be improved. A modification of the cable anchor points may also result in an increase of the workspace dimensions. The reconfiguration of points B i can be performed by attaching and detaching the cables at different locations on the moving platform. The number m of cables has a major influence on the performance of the RCDPR. Using more cables than DOFs can enlarge the workspace of suspended CDPRs [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF] or yield fully constrained CDPRs where internal forces can reduce vibrations, e.g. [START_REF] Kamawura | High-speed manipulation by using parallel wire-driven robots[END_REF]. However, the larger the number of cables, the higher the risk of collisions. In this case, the reconfiguration can be performed by attaching or detaching one or several cable(s) to/from the moving platform and possibly to/from a new set of exit points. Furthermore, by attaching and detaching one or several cable(s), the architecture of the RCDPR can be modified, permitting both suspended and fully constrained CDPR configurations. Discrete and Continuous Reconfigurations According to the reconfigured components and the associated technology, reconfiguration parameters can be selected over a continuous or discrete domain of values, as summarised in Table 1. Reconfigurations performed over a discrete domain consist of selecting the reconfigurable parameters within a finite set of values. Modifying the number of cables is a typical example of a discrete reconfiguration. Discrete reconfigurations also apply to cable anchor points, when the cables can be installed on the moving platform at a (discrete) number of specific locations, e.g. its corners. Another example of a discrete RCDPR is represented in Fig. 3 (left). In this concept, developed in the framework of the European FP7 Project CableBot, cable exit points are installed on a predefined grid of locations on the ceiling. Discrete reconfigurations are performed off-line, interrupting the task the RCDPR is executing. For this reason, the set-up time for these RCDPRs can be relatively long. On the other hand, RCDPRs with discrete reconfigurations can use the typical control schemes already developed for CDPRs. Furthermore, they do not require motorising the cable exit points, thereby avoiding a large increase of the CDPR cost. Reconfigurations performed over a continuous domain provide the possibility of selecting the geometric parameters over a continuous set of values delimited by upper and lower bounds. A typical example of a continuous RCDPR is represented in Fig. 3 (right), which illustrates another concept developed in the framework of the European FP7 Project CableBot. In this example, the cable exit points slide on rails fixed to the ceiling. Reconfigurations can be performed on-line, by continuously modifying the reconfigurable parameters during the task execution. The main advantages of continuous reconfigurations are the reduced set-up time and the local optimisation of the RCDPR properties. However, modifying the locations of the exit points in real time may require the design of a complex control scheme. Furthermore, the cost of RCDPRs with continuous reconfigurations is significantly higher than the cost of discrete RCDPRs when the movable pulleys are actuated. Nomenclature for RCDPRs Similarly to CDPRs, an RCDPR is mainly composed of a moving platform connected to the base through a set of cables, as illustrated in Fig. 4.
The moving platform is driven by m cables, which are actuated by winches fixed on the base frame of the robot. The cables are routed by means of pulleys to exit points from which they extend toward the moving platform. The main difference between this chapter and previous works on CDPRs is the possibility to displace the cable exit points on a grid of possible locations. As illustrated in Fig. 4, F b , of origin O b and axes x b , y b , z b , denotes a fixed reference frame while F p of origin O p and axes x p , y p and z p , is fixed to the moving platform and thus called the moving platform frame. The anchor points of the ith cable on the platform are denoted as B i,c , where c represents the configuration number. For the c-th configuration, the exit point of the i-th cable is denoted as A i,c , i = 1, . . . , m. The Cartesian coordinates of each point A i,c , with respect to F b , are given by the vector a b i,c while b b i,c is the position vector of point B i,c expressed in F b . Neglecting the cable mass, the vector l b i,c directed along the i-th cable from point B i,c to point A i,c can be written as: l b i,c = a b i,c -t -Rb p i,c i = 1, . . . , m ( 1 ) where t is the moving platform position, i.e. the position vector of O p in F b , and R is the rotation matrix defining the orientation of the moving platform, i.e. the orientation of F p with respect to F b . The length of the i-th cable is then defined by the 2-norm of the cable vector l b i,c , namely, l i,c = l b i,c 2 , i = 1, . . . , m. In order to balance an external wrench (combination of a force and a moment), each cable generates on the moving platform a wrench proportional to its tension τ i = 1, . . . , m. The cables balance the external wrench w e , according to the following equation [START_REF] Roberts | On the inverse kinematics, statics, and fault tolerance of cable-suspended robots[END_REF]: Wτ + w e = 0 (2) The cable tensions are collected into the vector τ = [τ 1 , . . . , τ m ] and multiplied by the wrench matrix W whose columns are composed of the unit wrenches w i exerted by the cables on the platform: W = d b 1,c d b 2,c . . . d b m,c Rb p 1,c × d b 1,c Rb p 2,c × d b 2,c . . . Rb p m,c × d b m,c (3) where d b i,c , i = 1, . . . , m are the unit cable vectors associated with the c-th configuration: d b i,c = l b i,c l i,c , i = 1, . . . , m (4) Design Strategy for RCDPRs Similarly to CDPRs, the design of RCDPRs requires the dimensioning of all its components. In this chapter, the design of RCDPRs focuses on the selection of the cable exit point locations. The other components of the RCDPR are required to be chosen in advance. Design Problem Formulation The RCDPR design strategy proposed in this section consists of ten steps. The design can be formulated as a mono-objective or hierarchical multi-objective optimisation problem. The designer defines a prescribed workspace or moving platform trajectory and divides it into n t parts. Each part should be covered by one and only one configuration. The design variables are the locations of the cable exit points for the n t configurations covering the n t parts of the prescribed workspace or trajectory. The global objective functions investigated in this chapter (Section 4.2) aim to reduce the overall complexity of the RCDPR and the reconfiguration time. The optimisation is performed while verifying a set of user-defined constraints such as those presented in Section 4.3. Step I. Task and Environment. The designer describes the task to be performed. 
He/She specifies the nature of the problem, defining whether the motion of the moving platform is static, quasi-static or dynamic. According to the nature of the problem, the designer defines the external wrenches applied to the moving platform and, possibly, the required moving platform twist and accelerations. The prescribed workspace or trajectory of the moving platform is given. A description of the environment is provided as well, including the possible obstacles encountered during the task execution. Step II. Division of the Prescribed Trajectory. Given the prescribed workspace or moving platform trajectory, the designer divides it into n t parts, assuming that each of them is accessible by one and only one configuration of the RCDPR. The division may be performed by trying to predict the possible collisions between the cables and the working environment. Step III. Constant Design Parameters. The designer defines a set of constant design parameters and their values. The parameters are collected in the constant design parameter vector q. Step IV. Design Variables and Layout Parametrisation. For each part of the prescribed workspace or moving platform trajectory, the designer defines the cable layout of the associated configuration. The cable layout associated with the t-th part of the prescribed workspace or trajectory defines the locations of the cable exit points, parametrised with respect to a set of n t,v design variables, u t,v , v = 1, . . . , n t,v . The design variables are defined as a discrete set of ε t,v values, [u] t,v , v = 1, . . . , n t,v . Step V. RCDPR Configuration Set. For each part of the prescribed trajectory, the possible configurations, which can be generated by combining the values [u] t,v , v = 1, . . . , n t,v of the design variables, are computed. Therefore, n_{t,C} = ∏_{v=1}^{n_{t,v}} ε_{t,v} possible configurations are generated for the t-th part of the prescribed workspace or trajectory. Step VI. Constraint Functions. The user defines a set of n φ constraint functions, φ k , k = 1, . . . , n φ . These functions are applied to all possible configurations associated with the n t parts of the prescribed workspace or trajectory. Step VII. Configuration Analysis. For each portion of the prescribed workspace or trajectory, all the possible configurations generated at Step V are tested with respect to the n φ user-defined constraint functions. The n f,t configurations satisfying the constraints all over the t-th part of the prescribed workspace or trajectory are defined hereafter as feasible configurations. Step VIII. Feasible Configuration Combination. The sets of n t configurations that lead to the achievement of the prescribed task are computed. Each set is composed by selecting one of the n f,t feasible configurations for each part of the prescribed workspace or trajectory. The number of feasible configuration sets generated during this step is equal to n C . Step IX. Objective Functions. The designer defines one or more global objective function(s), V t , t = 1, . . . , n V , where n V is equal to the number of global objective functions taken into account. The global objective functions associated with RCDPRs do not focus solely on a single configuration. They analyse the properties of the combination of n t configurations comprising the RCDPR. If several global objective functions are to be solved simultaneously, the optimisation problem can be classically reduced to a mono-objective optimisation according to: V = ∑_{t=1}^{n_V} µ_t V_t , with µ_t ∈ [0, 1] and ∑_{t=1}^{n_V} µ_t = 1. (5) The weighting factors µ t , t = 1, . . . , n V , are defined according to the priority assigned to each objective function V t , the latter lying between 0 and 1. If several global objective functions have to be solved hierarchically, the designer will rank those functions according to their order of priority, t = 1, . . . , n V , where V 1 has the highest priority and V n V the lowest one. Step X. Discrete Optimisation Algorithm. The design problem is formulated as an optimisation problem and solved by analysing all the n C sets of feasible configurations. The analysis is performed with respect to the global objective functions defined at Step IX. The sets of n t configurations with the best global objective function value are determined. If a hierarchical multi-objective optimisation is required, the following procedure is applied: a. The algorithm analyses the n C sets of feasible configurations with respect to the global objective function which currently has the highest priority, V t (the procedure is initialised with t = 1). b. If only one set of configurations optimises V t , this solution is considered as the optimum. On the contrary, if n C,t multiple solutions optimise V t , the algorithm proceeds to the following step. c. The algorithm analyses the n C,t sets of optimal solutions with respect to the global objective function with the next lower priority, V t+1 . Then, t = t + 1 and the procedure moves back to Step b.
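In practice, the procedure of Step X amounts to a lexicographic filter over the feasible configuration sets. The following sketch illustrates it on generic data; the candidate names and objective values are placeholders introduced for illustration, not results of the case study.

```python
def hierarchical_selection(candidates, objectives, tol=1e-9):
    """Lexicographic filtering used in Step X.

    candidates : list of feasible configuration sets (any hashable description)
    objectives : list of functions ordered by decreasing priority (V1, V2, ...),
                 each returning a value to be minimised for a candidate.
    Returns the candidates that survive every filtering stage."""
    remaining = list(candidates)
    for V in objectives:
        values = [V(c) for c in remaining]
        best = min(values)
        remaining = [c for c, v in zip(remaining, values) if v <= best + tol]
        if len(remaining) == 1:          # a unique optimum ends the procedure
            break
    return remaining

# Placeholder example: three configuration sets scored by (n_e, size, displacement)
scores = {"set A": (16, 5200.0, 1.4), "set B": (16, 5400.0, 1.2), "set C": (18, 4900.0, 1.1)}
objectives = [lambda c, k=k: scores[c][k] for k in range(3)]
print(hierarchical_selection(list(scores), objectives))   # -> ['set A']
```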
Global Objective Functions The design strategy proposed in the previous section aims to optimise the characteristics of the RCDPR. The optimisation may be performed with respect to one or several global objective functions. The objective functions used in this chapter are described hereafter. RCDPR Size The design optimisation problem may aim to minimise the size of the robot, defined as the convex hull of the cable exit points. The Cartesian coordinates of exit point A i,c are defined as a b i,c = [a x i,c , a y i,c , a z i,c ] T . The variables s x , s y and s z denote the lower bounds on the Cartesian coordinates of the cable exit points along the axes x b , y b and z b , respectively: s x = min a x i,c , ∀i = 1, ..., m, c = 1, ..., n t (6) s y = min a y i,c , ∀i = 1, ..., m, c = 1, ..., n t (7) s z = min a z i,c , ∀i = 1, ..., m, c = 1, ..., n t (8) The upper bounds on the Cartesian coordinates of the RCDPR cable exit points, along the axes x b , y b , z b , are denoted by sx , sy and sz , respectively. sx = max a x i,c , ∀i = 1, ..., m, c = 1, ..., n t (9) sy = max a y i,c , ∀i = 1, ..., m, c = 1, ..., n t (10) sz = max a z i,c , ∀i = 1, ..., m, c = 1, ..., n t (11) Hence, the objective function related to the size of the robot is expressed as follows: V = ( sx -s x )( sy -s y )( sz -s z ) (12) Number of Cable Reconfigurations According to the reconfiguration strategy proposed in this chapter, reconfiguration operations require the displacement of the cable exit points, and consequently attaching/detaching operations of the cables. These operations are time consuming. Hence, an objective can be to minimise the number of reconfigurations, n r , defined as the number of exit point changes to be performed in order to switch from configuration C i to configuration C j . By reducing the number of cable attaching/detaching operations, the RCDPR set up time could be significantly reduced. Number of Configuration Changes During the reconfiguration of the exit points, the task executed by the RCDPR has to be interrupted. These interruptions impact the task execution time. Therefore, it may be necessary to minimise the number of interruptions, n i , in order to improve the effectiveness of the RCDPR. The objective function V = n i associated with this goal measures the number of configuration changes, n i , to be performed during a prescribed task. RCDPR Complexity The higher the number of configuration sets n C allowing to cover the prescribed workspace or trajectory, the more complex the RCDPR. When the RCDPR requires a large number of configurations, the base frame of the CDPR may become complex. In order to minimise the complexity of the RCDPR, an objective can be to minimise the overall number of exit point locations, V = n e , required by the n C configuration sets. Therefore, the optimisation aims to maximise the number of exit point locations shared among two or more configurations. Constraint Functions Any CDPR optimisation problem has to take into account some constraints. Those constraints represent the technical limits or requirements that need to be satisfied. The constraints used in this chapter are described hereafter. Wrench Feasibility Since cables can only pull on the platform, the tensions in the cables must always be non-negative. Moreover, cable tensions must be lower than an upper bound, τ max , which corresponds either to the maximum tension τ max1 the cables (or other me- chanical parts) can bear, or to the maximum tension τ max2 the motors can provide. 
The cable tension bounds can thus be written as: 0 ≤ τ i ≤ τ max , ∀i = 1, . . . , m (13) where τ max = min {τ max1 , τ max2 }. Due to the cable tension bounds, RCDPRs can balance only a bounded set of external wrenches. In this chapter, the set of external wrenches applied to the platform and that the cables have to balance is called the required external wrench set and is denoted [w e ] r . Moreover, the set of of admissible cable tensions is defined as: [τ] = {τ i | 0 ≤ τ i ≤ τ max , i = 1, . . . , m} (14) A pose (position and orientation) of the moving platform is then said to be wrench feasible if the following constraint holds: ∀w e ∈ [w e ] r , ∃τ ∈ [τ] such that Wτ + w e = 0 (15) Eq. ( 15) can be rewritten as follows: Cw e ≤ d, ∀w e ∈ [w e ] r ( 16 ) Methods to compute matrix C and vector d are presented in [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. Cable Lengths Due to technological reasons, cable lengths are bounded between a minimum cable length, l min , and a maximum cable length, l max : l min ≤ l i,c ≤ l max , ∀i = 1, . . . , m (17) The minimum cable lengths are defined so that the RCDPR moving platform is not too close to the base frame. The maximum cable lengths depend on the properties of the winch drums that store the cables, in particular their lengths and their diameters. Cable Interferences A second constraint is related to the possible collisions between cables. If two or more cables collide, the geometric and static models of the CDPR are not valid anymore and the cables can be damaged or their lifetime severely reduced. In order to verify that cables do not interfere, it is sufficient to determine the distances between them. Modeling the cables as linear segments, the distance d cc i, j between the i-th cable and the j-th cable can be computed, e.g. by means of the method presented in [START_REF] Lumelsky | On fast computation of distance between line segments[END_REF]. There is no interference if the distance is larger than the diameter of the cables, φ c : d cc i, j ≥ φ c ∀i, j = 1, . . . , m, i = j ( 18 ) The number of possible cable interferences to be verified is equal to C m 2 = m! 2!(m-2)! . Note that, depending on the way the cables are routed from the winches to the moving platform, possible interferences of the cable segments between the winches and the pulleys may have to be considered. Collisions between the Cables and the Environment Industrial environments may be cluttered. Collisions between the environment and the cables of the CDPR should be avoided. In general, for fast collision detection, the environment objects (obstacles) are enclosed in bounding volumes such as spheres and cylinders. When more complex shapes have to be considered, their surfaces are approximated with polygonal meshes. Thus, collision analysis can be performed by computing the distances between the edges of those polygons and the cables, e.g. by using [START_REF] Lumelsky | On fast computation of distance between line segments[END_REF]. Many other methods may be used, e.g., those described in [START_REF] Blanchet | Contribution à la modélisation de robots à câbles pour leur commande et leur conception[END_REF]. In the case study presented in Section 5, a tubular structure is considered. 
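Both the cable-cable condition of Eq. (18) and the cable-tube condition stated in the next paragraph reduce to a minimum distance between two straight line segments. A minimal, self-contained sketch of such a distance computation is given below; it assumes straight-line cable models and uses illustrative coordinates only, and it is not the implementation used in the case study.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments [p1,q1] and [p2,q2] (3D numpy arrays)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    # Parameter of the closest point on segment 1 (clamped to [0, 1])
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    # Corresponding parameter on segment 2, re-clamped, with s recomputed if needed
    t = (b * s + f) / e if e > 1e-12 else 0.0
    if t < 0.0 or t > 1.0:
        t = np.clip(t, 0.0, 1.0)
        s = np.clip((b * t - c) / a, 0.0, 1.0) if a > 1e-12 else 0.0
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

# Cable modelled as a straight segment from exit point A_i to anchor point B_i,
# tube modelled by its axis; coordinates below are illustrative only.
cable = (np.array([0.0, 0.0, 5.0]), np.array([2.0, 1.0, 1.5]))
tube_axis = (np.array([-1.0, 2.0, 2.0]), np.array([4.0, 2.0, 2.0]))
phi_c, phi_s = 0.006, 0.8            # 6 mm cable, 0.8 m tube diameter
d = segment_distance(*cable, *tube_axis)
print(f"distance = {d:.3f} m, collision-free: {d >= (phi_c + phi_s) / 2}")
```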
The ith cable and the k-th structure tube will not collide if the distance between the cable and the axis (straight line segment) of the structure tube is larger than the sum of the cable radius φ c /2 and the tube radius φ s /2, i.e.: d cs i,k ≥ (φ c + φ s ) 2 ∀i = 1, . . . , m, ∀k = 1, . . . , n st ( 19 ) where n st denotes the number of tubes composing the structure. Pose Infinitesimal Displacement Due to the Cable Elasticity Cables are not perfectly rigid body. Under load, they are notably subjected to elongations that may induce some moving platform displacements. In order to quantify the stiffness of the CDPR, an elasto-static model may be used: δ w e = Kδ p = K δ t δ r ( 20 ) where δ w e is the infinitesimal change in the external wrench applied to the platform, δ p is the infinitesimal displacement screw of the moving platform and K is the stiffness matrix whose computation is explained in [START_REF] Behzadipour | Stiffness of cable-based parallel manipulators with application to stability analysis[END_REF]. δ t = [δt x , δt y , δt z ] T is the variation in the moving platform position and δ r = [δ r x , δ r y , δ r z ] T is the vector of the infinitesimal (sufficiently small) rotations of the moving platform around the axes x b , y b and z b . The pose variation should be bounded by the positioning error threshold vector, δ t = [δt x,c , δt y,c , δt z,c ], where δt x,c , δt y,c and δt z,c are the bounds on the positioning errors along the axes x b , y b and x b , and the orientation error threshold vector, δ φ = [δ γ c , δ β c , δ α c ], where δ γ c , δ β c and δ α c are the bounds on the platform orientation errors about the axes x b , y b and z b , i.e.: -[δt x,c , δt y,c , δt z,c ] ≤ [δt x , δt y , δt z ] ≤ [δt x,c , δt y,c , δt z,c ] (21) -[δ γ c , δ β c , δ α c ] ≤ [δ γ, δ β , δ α] ≤ [δ γ c , δ β c , δ α c ] (22) 5 Case Study: Design of a RCDPRs for Sandblasting and Painting of a Large Tubular Structure Problem Description The necessity to improve the production rate of large tubular structures has incited companies to investigate new technologies. These technologies should be able to reduce manufacturing time associated with the assembly of the structure parts or the treatment of their surfaces. Painting and sandblasting operations over wide tubular structures can be realised by means of RCDPRs, as illustrated in the present case study. Task and Environment The tubular structure selected for the given case study is 20 m long, with a cross section of 10 m x 10 m. The number of tubes to be painted is equal to twenty. Their diameter, φ s , is equal to 0.8 m. The sandblasting and painting operations are realised indoor. The structure lies horizontally in order to reduce the dimensions of the painting workshop. The whole system can be described with respect to a fixed reference frame, F b , of origin O b and axes x b , y b , z b , as illustrated in Fig. 6. Sandblasting and painting tools are embarked on the RCDPR moving platform. The Center of Mass (CoM) of the platform follows the profile of the structure tubes and the tools perform the required operations. The paths to be followed, P 1 , P 2 and P 3 , are represented in Fig. 6. Note that each path P i , i = 1, . . . , 3 is discretised into 38 points P j,i , j = 1, . . . , 38 i = 1, . . . , 3 and that n p denotes the corresponding total number of points. The offset between paths P i , i = 1, . . . , 3 and the structure tubes is equal to 2 m. 
No path will be assigned to the lower external side of the structure, since it is sandblasted and painted from the ground. Division of the Prescribed Workspace In order to avoid collisions between the cables and structure, reconfigurations of the cable exit points are necessary. Each external side of the structure should be painted by only one robot configuration. Three configurations are necessary to work on the outer part of the structure, configuration C i being associated to path P i , i = 1, two and three, in order not to interrupt the painting and sandblasting operations during their execution. Passing from one configuration to another, one or more cables are disconnected from their exit points and connected to other exit points located elsewhere. For each configuration, the locations of the cable exit points are defined as variables of the design problem. In the present case study, the dimensions of the platform as well as the position of the cable anchor points on the platform are fixed. Constant Design parameters The number of cables, m = 8, the cable properties, and the dimensions of the platform are given. Those parameters are the same for the three configurations. The moving platform of the RCDPR analysed in this case study is driven by steel cables. The maximum allowed tension in the cables, τ max , is equal to 34 950 N and we have: 0 < τ i ≤ τ max , ∀i = 1, . . . , 8 (23) Moreover, l p , w p and h p denote the length, width and height of the platform, respectively: l p = 30 cm, w p = 30 cm and h p = 60 cm. The mass of the moving platform is m MP = 60 kg. The design (constant) parameter vector q is expressed as: q = [m, φ c , k s , τ max , l p , w p , h p , m MP ] T (24) Constraint Functions and Configuration Analysis The design problem aims to identify the locations of points A i,c for the configurations C 1 , C 2 and C 3 . At first, in order to identify the set of feasible locations for the exit points A i,c , the three robot configurations are parameterised and analysed separately in the following paragraphs. A set of exit points is feasible if the design constraints are satisfied along the whole path to be followed by the moving platform CoM. The analysed constraints are: wrench feasibility, cable interferences, cable collisions with the structure, and the maximum moving platform infinitesimal displacement due to the cable elasticity. Both suspended and fully constrained eight-cable CDPR architectures are used. In the suspended architecture, gravity plays the role of an additional cable pulling the moving platform downward, thereby keeping the cables under tension. The suspended architecture considered in this work is inspired by the CoGiRo CDPR prototype [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF][START_REF] Lamaury | Dual-space adaptive control of redundantly actuated cable-driven parallel robots[END_REF]. For the non-suspended configuration, note that eight cables is the smallest possible even number of cables that can be used for the platform to be fully constrained by the cables. Collisions between the cables as well as collisions between the cables and structure tubes should be avoided. Since sandblasting and painting operations are performed at low speed, the motion of the CDPR platform can be considered quasistatic. Hence, only the static equilibrium of the robot moving platform will be considered. 
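Checking this static equilibrium along the discretised paths amounts to the wrench-feasibility test of Eq. (15), which can be performed pose by pose by linear programming: a pose is feasible if every vertex of the required wrench set can be balanced with admissible cable tensions. The sketch below is an illustration only: the eight-cable geometry, the platform pose, and the wrench-set vertices are placeholders (not the case-study layouts, whose required wrench set is specified next), and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog
from itertools import product

def wrench_matrix(exit_pts, anchor_pts, t, R=np.eye(3)):
    """Wrench matrix of Eq. (3) for a platform pose (t, R), cables as straight lines."""
    cols = []
    for a, b in zip(exit_pts, anchor_pts):
        l = a - (t + R @ b)                  # cable vector, Eq. (1)
        d = l / np.linalg.norm(l)            # unit cable vector, Eq. (4)
        cols.append(np.hstack((d, np.cross(R @ b, d))))
    return np.array(cols).T                  # 6 x m

def pose_is_wrench_feasible(W, wrench_vertices, tau_max):
    """True if every vertex of the required wrench set can be balanced with
    cable tensions 0 <= tau_i <= tau_max (Eq. 15), tested by linear programming."""
    m = W.shape[1]
    for w_e in wrench_vertices:
        res = linprog(c=np.zeros(m), A_eq=W, b_eq=-np.asarray(w_e, dtype=float),
                      bounds=[(0.0, tau_max)] * m, method="highs")
        if not res.success:
            return False
    return True

# Placeholder 8-cable geometry (not the case-study layout)
exit_pts = [np.array(p) for p in product((-5.0, 5.0), (-5.0, 5.0), (0.0, 6.0))]
anchor_pts = [np.array(p) for p in product((-0.15, 0.15), (-0.15, 0.15), (-0.3, 0.3))]
W = wrench_matrix(exit_pts, anchor_pts, t=np.array([1.0, 0.5, 3.0]))
# Vertices of a box of external wrenches; the vertical force range is shifted
# to include the platform weight (about -589 N for a 60 kg platform).
vertices = list(product((-50, 50), (-50, 50), (-650, -550),
                        (-7.5, 7.5), (-7.5, 7.5), (-7.5, 7.5)))
print("wrench feasible:", pose_is_wrench_feasible(W, vertices, tau_max=34950.0))
```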
The wrench feasibility constraints presented in Section 4.3 are considered such that the required external wrench set [w e ] r is an hyperrectangle defined as: -50 N ≤ f x , f y , f z ≤ 50 N (25) -7.5 Nm ≤m x , m y , m z ≤ 7.5 Nm (26) where w e = [ f x , f y , f z , m x , m y , m z ] T , f x , f y and f z being the force components of w e and m x , m y , and m z being its moment components. Besides, the moving platform infinitesimal displacements, due to the elasticity of the cables, are constrained by: An advantage of this configuration is a large workspace to footprint ratio. The exit points A i,2 have been arranged in a parallelepiped layout. The Cartesian coordinates a i,c are defined as follows: Variables v i , i = 1, . . . , 5 are equivalent for configuration C 2 to variables u i , i = 1, . . . , 5, describing configuration C 1 . The layout of this configuration is illustrated in Fig. 8. The design variables of configuration C 2 are collected into the vector x 2 : -5 cm ≤ δt x , δt y , δt z ≤ 5 cm (27) -0.1 rad ≤ δ r x , δ r y , δ r z ≤ 0.1 rad (28) a b 1,2 = a b 2,2 = [v 1 -v 4 , v 2 -v 5 , v 3 ] T ( 38 ) a b 3,2 = a b 4,2 = [v 1 -v 4 , v 2 + v 5 , v 3 ] T ( 39 ) a b 5,2 = a b 6,2 = [v 1 + v 4 , v 2 + v 5 , v 3 ] T (40) a b 7,2 = a b 8,2 = [v 1 + v 4 , v 2 -v 5 , v 3 ] T (41) x 2 = [v 1 , v 2 , v 3 , v 4 , v 5 ] T (42) Note that this configuration is composed of couples of exit points theoretically connected to the same locations: {A 1,2 , A 2,2 }, {A 3,2 , A 4,2 }, {A 5,2 , A 6,2 }, and {A 7,2 , A 8,2 }. From a technical point of view, in order to avoid any cable interference, the coupled exit points should be separated by a certain distance. For the design problem at hand, this distance has been fixed to v 0 = 5 mm. a b 1,2 = v 1 -v ′ 4 , v 2 -v 5 , v 3 T ( 43 ) a b 2,2 = v 1 -v 4 , v 2 -v ′ 5 , v 3 T ( 44 ) a b 3,2 = v 1 -v 4 , v 2 + v ′ 5 , v 3 T ( 45 ) a b 4,2 = v 1 -v ′ 4 , v 2 + v 5 , v 3 T ( 46 ) a b 5,2 = v 1 + v ′ 4 , v 2 + v 5 , v 3 T ( 47 ) a b 6,2 = v 1 + v 4 , v 2 + v ′ 5 , v 3 T ( 48 ) a b 7,2 = v 1 + v 4 , v 2 -v ′ 5 , v 3 T ( 49 ) a b 8,2 = v 1 + v ′ 4 , v 2 -v 5 , v 3 T ( 50 ) where v ′ 4 = v 4v 0 and v ′ 5 = v 5v 0 The Cartesian coordinates of points B i,2 are defined as: b b 1,2 = 1 2 [l p , -w p , h p ] T , b b 2,2 = 1 2 [-l p , w p , -h p ] T (51) b b 3,2 = 1 2 [-l p , -w p , h p ] T , b b 4,2 = 1 2 [l p , w p , -h p ] T (52) b b 5,2 = 1 2 [-l p , w p , h p ] T , b b 6,2 = 1 2 [l p , -w p , -h p ] T (53) b b 7,2 = 1 2 [l p , w p , h p ] T , b b 8,2 = 1 2 [-l p , -w p , -h p ] T (54) Table 2 describes the lower and upper bounds as well as the number of values considered for the configuration C 2 . Combining these values, 22275 configurations have been generated. Among these configurations, only 5579 configurations are feasible. Configuration C 3 The configuration C 3 follows the path P 3 . This path is symmetric to the path P 1 with respect to the plane y b O b z b . Considering the symmetry of the tubular structure, configuration C 3 is thus selected as being the same as configuration C 1 . The discretised set of design variables chosen for the configuration C 3 is described in Table 2. The design variables for the configuration C 3 are collected into the vector x 3 : x 3 = [w 1 , w 2 , w 3 , w 4 , w 5 ] T ( 55 ) where the variables w i , i = 1, . . . , 5 amount to the variables u i , i = 1, . . . , 5, describing configuration C 1 . 
Therefore, the Cartesian coordinates of the exit points A i,3 are expressed as follows: a b 1,3 = [w 1 + w 4 , w 2 + w 5 , -w 3 ] T a b 2,3 = [w 1 + w 4 , w 2 + w 5 , w 3 ] T (56) Objective Functions and Design Problem Formulation The RCDPR should be as simple as possible so that the minimisation of the total number of cable exit point locations, V 1 = n e , is required. Consequently, the number of exit point locations shared by two or more configurations should be maximised. The size of the robot is also minimised to reduce the size of the sandblasting and painting workshop. Finally, the mean of the moving platform infinitesimal displacement due to cable deformations is minimised. The optimisations are performed hierarchically, by means of the procedure described in Section 4.1 and the objective functions collected in Section 4.2. Hence, the design problem of the CDPR is formulated as follows: minimise          V 1 = n e V 2 = ( sx -s x )( sy -s y )( sz -s z ) V 3 = δ t 2 n p over x 1 , x 2 , x 3 subject to: ∀P m,n , m = 1, . . . , 38 n = 1, . . . , 3                  Cw ≤ d, ∀w ∈ [w e ] r d cc i, j ≥ φ c ∀i, j = 1, . . . , 8, i = j d cs i,k ≥ (φ c + φ s ) 2 ∀i = 1, . . . , 8, ∀k = 1, . . . , 20 -5 cm ≤ δt x , δt y , δt z ≤ 5 cm -0.1 rad ≤ δ r x , δ r y , δ r z ≤ 0.1 rad (60) Once the set of feasible solutions have been obtained for each path P i , a list of RCDPRs with a minimum number of exit points, n c , is extracted from the list of feasible RCDPRs. Finally, the most compact and stiff RCDPRs from the list of RCDPRs with a minimum number of exit points are the desired optimal solutions. Optimisation Results The feasible robot configurations associated with paths P 1 , P 2 and P 3 have been identified. For each path, a configuration is selected, aiming to minimise the total number of exit points required by the RCDPR to complete the task. These optimal solutions have been computed in two phases. At first, the 4576 feasible robot configurations for path P 1 are compared with the 5579 feasible robot configurations for path P 2 looking for the couple of configurations having the minimum total number of exit points. The resulting couple of configurations is then compared to the feasible robot configurations for path P 3 , and the sets of robot configurations that minimise the overall number n e of exit points along the three paths are retained. According to the discrete optimisation analysis, 16516 triplets of configurations minimise this overall number of exit points. A generic CDPR composed of eight cables requires eight exit points A i = 1, . . . , 8 on the base. It is the case for the fully constrained configurations C 1 and C 3 . The suspended CDPR presents four coincident couples of exit points. Hence, in the present case study, the maximum overall number of exit points of the RCDPR is equal to 20. The best results provide a reduction of four points. Regarding the configurations C 1 and C 2 , points A 5,2 and A 7,2 can be coincident with points A 3,1 and A 5,1 , respectively. Alternatively, points A 5,2 and A 7,2 can be coincident with points A 1,1 and A 7,1 . As far as configurations C 2 and C 3 are concerned, points A 1,2 and A 3,2 can be coincident with points A 8,3 and A 2,3 , respectively. Likewise, points A 1,2 and A 3,2 can be coincident with points A 4,3 and A 6,3 , respectively. The total volume of the robot has been computed for the 16516 triplets of configurations minimising the overall number of exit points. 
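The two quantities driving this ranking, the overall number of exit points (V1 = n_e) and the bounding-box volume of Eq. (12) (V2), can be evaluated with a few lines of code. The sketch below uses placeholder exit-point coordinates, not the case-study layouts, and counts coincident points once across configurations.

```python
import numpy as np

def overall_exit_points(configurations, tol=1e-6):
    """Number of distinct exit-point locations used by a set of configurations
    (objective V1 = n_e): coincident points are counted only once."""
    pts = np.vstack(configurations)
    distinct = []
    for p in pts:
        if all(np.linalg.norm(p - q) > tol for q in distinct):
            distinct.append(p)
    return len(distinct)

def bounding_box_volume(configurations):
    """Size objective of Eq. (12): volume of the axis-aligned box enclosing
    all exit points of all configurations."""
    pts = np.vstack(configurations)
    span = pts.max(axis=0) - pts.min(axis=0)
    return float(np.prod(span))

# Placeholder coordinates standing in for a triplet (C1, C2, C3) of configurations
C1 = np.array([[-7.0, -12.0, 0.0], [-7.0, 12.0, 0.0], [-7.0, -12.0, 8.0], [-7.0, 12.0, 8.0]])
C2 = np.array([[-7.0, -12.0, 8.0], [ 7.0, -12.0, 8.0], [-7.0, 12.0, 8.0], [ 7.0, 12.0, 8.0]])
C3 = np.array([[ 7.0, -12.0, 0.0], [ 7.0, 12.0, 0.0], [ 7.0, -12.0, 8.0], [ 7.0, 12.0, 8.0]])
print("n_e =", overall_exit_points([C1, C2, C3]))
print("volume =", bounding_box_volume([C1, C2, C3]), "m^3")
```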
Ninety-six RCDPRs amongst the 16516 triplets of configurations have the smallest size, this minimum size being equal to 5104 m³. The selection of the best solutions was then refined through the third optimisation criterion, based on the robot stiffness. Twenty solutions provided a minimum mean of the moving-platform displacement equal to 1.392 mm. An optimal solution is illustrated in Fig. 9 (Optimal Reconfigurable Cable-Driven Parallel Robot). The corresponding optimal design parameters are given in Table 3. Figure 10 illustrates the minimum degree of constraint satisfaction s introduced in [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF] and computed along the paths P 1 , P 2 , and P 3 , which were discretised into 388 points. It turns out that the moving platform is in a feasible static equilibrium along all the paths because the minimum degree of constraint satisfaction remains negative. Referring to [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF], the minimum degree of constraint satisfaction can be used to test wrench feasibility since it is negative when a platform pose is wrench feasible. Configurations C 1 and C 3 maintain their degree of satisfaction lower than -400 N. By contrast, the degree of satisfaction of configuration C 2 is often close to 0. The poses where s vanishes are such that two cables of the suspended CDPR of configuration C 2 are slack. The proposed RCDPR design strategy yielded good solutions, but it is time consuming. The whole procedure, performed on an Intel Core i7-3630QM at 2.40 GHz, required 19 h of computation in Matlab 2013a. Therefore, the development of more efficient strategies for the design of RCDPRs will be part of our future work. Moreover, the mass of the cables may have to be taken into account. Conclusions When the task to be accomplished is complicated and the working environment is extremely cluttered, CDPRs may not succeed in the task execution. The problem can be solved by means of RCDPRs. This chapter focused on RCDPRs whose cable exit points on the base frame can be located on a predefined grid of possible positions. A design strategy for such discrete RCDPRs was introduced. This design strategy assumes that the number of configurations needed to complete the task is defined by the designer according to his or her experience. The designer divides the prescribed trajectory or workspace into a set of partitions. Each partition has to be entirely covered by one configuration. The position of the cable exit points, for all the configurations, is computed by means of an optimisation algorithm. The algorithm optimises one or more global objective function(s) while satisfying a set of user-defined constraints. Examples of possible global objective functions include the RCDPR size, the overall number of exit points, and the number of cable reconfigurations. A case study was presented in order to validate the RCDPR design strategy. The RCDPR has to paint and sandblast three of the four external sides of a tubular structure. Each of these three sides is covered by one configuration. The design strategy provided several optimal solutions to the case study, minimising hierarchically the overall number of cable exit points, the size of the RCDPR, and the moving-platform displacements due to the elasticity of the cables. The computation of the optimal solution Fig.
10: Minimum degree of constraint satisfaction [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. The analysis has been performed by discretising the paths P 1 , P 2 , and P 3 into 388 points. required nineteen hours of computation. More complicated tasks may thus require higher computation times. An improvement of the proposed RCDPR design strategy should be investigated in order to reduce this computational effort. Fig. 1 : 1 Fig. 1: Architecture of a CDPR developed in the framework of the IRT Jules Verne CAROCA project. Fig. 2 : 2 Fig. 2: CAROCA prototype: a reconfigurable cable-driven parallel robot working in a cluttered environment (Courtesy of IRT Jules Verne and STX France). Fig. 4 : 4 Fig. 4: Schematic of a RCDPR. The red points represent the possible locations of the cable exit points, where the pulleys can be fixed. Fig. 5 : 5 Fig. 5: Design strategy for RCDPRs. Fig. 6 : 6 Fig. 6: Case study model and prescribed paths P 1 , P 2 and P 3 of the moving platform CoM. Fig. 7 : 7 Fig. 7: Design variables parametrising the configuration C 1 . Fig. 8 : 8 Fig. 8: Design variables parametrising the configuration C 2 . a b 3 , 3 = [w 1 - 331 w 4 , w 2 + w 5 , -w 3 ] T a b 4,3 = [w 1w 4 , w 2 + w 5 , w 3 ] T (57) a b 5,3 = [w 1w 4 , w 2w 5 , -w 3 ] T a b 6,3 = [w 1w 4 , w 2w 5 , w 3 ] T (58) a b 7,3 = [w 1 + w 4 , w 2w 5 , -w 3 ] T a b 8,3 = [w 1 + w 4 , w 2w 5 , w 3 ] T (59) Table 1 : 1 CDPR reconfigurable parameter classification. Reconfigurable Parameter Discrete Domain Continuous Domain Exit Point Locations Yes Yes Platform Anchor Point Locations Yes Yes Cable Number Yes No architecture of the RCDPRs can be modified, permitting both suspended and fully constrained CDPR configurations. n t,v of the n t,v design variables, are computed. Therefore, n t,C = ∏ n t,v v=1 ε t,v possible configurations are generated for the t-th part of the prescribed workspace or trajectory.Step VI. Constraint Functions. The user defines a set of n φ constraint functions, φ k , k =, 1, . . . , n φ . These functions are applied to all possible configurations associated to the n t parts of the prescribed workspace or trajectory.Step VII. Configuration Analysis. For each portion of the prescribed workspace or trajectory, all the possible configurations generated at Step V with respect to the n φ user-defined constraint functions are tested. The n f ,t configurations satisfying the constraints all over the t-th part of the prescribed workspace or trajectory are defined hereafter as feasible configurations. Step VIII. Feasible Configuration Combination. The set of n t configurations that lead to the achievement of the prescribed task are computed. Each set is composed by selecting one of the n f ,t feasible configurations for each part of the prescribed workspace or trajectory. The number of feasible configuration sets generated during this step is equal to n C . Step IX. Objective Functions. The designer defines one or more global objective function(s), V t ,t =, 1, . . . , n V , where n V is equal to the number of global objective functions taken into account. The global objective functions associated with RCDPRs do not focus solely on a single configuration. Table 2 : 2 Design variables associated with configurations C 1 , C 2 and C 3 . 
Variables Lower Bounds Upper Bounds Number of values u 1 5.5 7.5 9 u 2 8.0 12.0 9 C 1 u 3 6 10 5 u 4 0.5 2.5 9 u 5 10 14 5 v 1 -1 1 9 v 2 8.0 12.0 5 C 2 v 3 7 11 9 v 4 5 7.5 11 v 5 10 14 5 w 1 -7.5 -5.5 9 w 2 8.0 12.0 9 C 3 w 3 6 10 5 w 4 0.5 2.5 9 w 5 10 14 5 Table 3 : 3 Design parameters of the selected optimum RCDPR. Conf. var.1 var.2 var.3 var.4 var.5 x 1 6.25 10.0 8.0 1.0 11.0 x 3 0 10.0 8.0 5.25 11.0 x 3 -6.25 10.0 8.0 1.0 11.0 Acknowledgements This research work is part of the CAROCA project managed by IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing Technologies for Composite, Metallic and Hybrid Structures). The authors wish to associate the industrial and academic partners of this project, namely, STX, Naval Group, AIRBUS and CNRS. Configuration C 1 A fully-constrained configuration has been assigned to configuration C 1 . The exit points A i,1 have been arranged in a parallelepiped layout. The edges of the parallelepiped are aligned with the axes of frame F b . This layout can be fully described by means of five variables: u 1 , u 2 and u 3 define the Cartesian coordinates of the parallelepiped center, while u 4 and u 5 denote the half-lengths of the parallelepiped along the axes x b and y b , respectively. Therefore, the Cartesian coordinates of the exit points A i,1 are expressed as follows: The layout of the first robot configuration is described in Fig. 7. The corresponding design variables are collected into the vector x 1 : The Cartesian coordinates of the anchor points B i,1 of the cables on the platform are expressed as: A discretised set of design variables have been considered. The lower and upper bounds as well as the number of values for each variable are given in Table 2. 18225 robot configurations have been generated with those values. It turns out that 4576 configurations satisfy the design constraints along the 38 discretised points of path P 1 . Configuration C 2 A suspended redundantly actuated eight-cable CDPR architecture has been attributed to the configuration C 2 in order to avoid collisions between the cables and the tubular structure. The selected configuration is based on CoGiRo, a suspended CDPR designed and built in the framework of the ANR CoGiRo project [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF][START_REF] Lamaury | Dual-space adaptive control of redundantly actuated cable-driven parallel robots[END_REF].
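To make the discretisation of Table 2 concrete for these configurations, the sketch below enumerates the 9 × 9 × 5 × 9 × 5 = 18225 candidate design vectors of configuration C 1 and builds the corresponding parallelepiped layout of exit points. The z-coordinate convention (z = ±u 3, by analogy with Eqs. (56)-(59) for configuration C 3, which shares the C 1 layout) is an assumption, since the C 1 equations themselves are not reproduced here; the wrench-feasibility and interference checks that reduce this set to 4576 feasible configurations are not shown.

```python
import itertools
import numpy as np

# Discretised design variables of configuration C1 (Table 2):
# (lower bound, upper bound, number of values)
bounds_C1 = {
    "u1": (5.5, 7.5, 9),
    "u2": (8.0, 12.0, 9),
    "u3": (6.0, 10.0, 5),
    "u4": (0.5, 2.5, 9),
    "u5": (10.0, 14.0, 5),
}

grids = [np.linspace(lo, hi, n) for (lo, hi, n) in bounds_C1.values()]
candidates = list(itertools.product(*grids))
assert len(candidates) == 9 * 9 * 5 * 9 * 5   # 18225 generated configurations

def exit_points_C1(u):
    """Parallelepiped layout of the eight exit points A_i,1.

    (u1, u2, u3) locate the layout, u4 and u5 are the half-lengths along x_b
    and y_b; by analogy with Eqs. (56)-(59), x = u1 +/- u4, y = u2 +/- u5 and
    z = +/- u3 (an assumed convention, see the lead-in above).
    """
    u1, u2, u3, u4, u5 = u
    return np.array([[u1 + sx * u4, u2 + sy * u5, sz * u3]
                     for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
                    dtype=float)
```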
54,140
[ "170861", "10659" ]
[ "235335", "388165", "441569", "481388", "473973", "441569" ]
01758178
en
[ "spi" ]
2024/03/05 22:32:10
2018
https://hal.science/hal-01758178/file/Sensitivity%20Analysis%20of%20the%20Elasto-Geometrical%20Model%20of%20Cable-Driven%20Parallel%20Robots%20-%20Cablecon2017.pdf
Sana Baklouti Stéphane Caro Eric Courteille Sensitivity Analysis of the Elasto-Geometrical Model of Cable-Driven Parallel Robots This paper deals with the sensitivity analysis of the elasto-geometrical model of Cable-Driven Parallel Robots (CDPRs) to their geometric and mechanical uncertainties. This sensitivity analysis is crucial in order to come up with a robust model-based control of CDPRs. Here, 62 geometrical and mechanical error sources are considered to investigate their effect onto the static deflection of the movingplatform (MP) under an external load. A reconfigurable CDPR, named ``CAROCA´´, is analyzed as a case of study to highlight the main uncertainties affecting the static deflection of its MP. Introduction In recent years, there has been an increasing number of research works on the subject of Cable-Driven Parallel Robots (CDPRs). The latter are very promising for engineering applications due to peculiar characteristics such as large workspace, simple structure and large payload capacity. For instance, CDPRs have been used in many applications like rehabilitation [START_REF] Merlet | MARIONET, a family of modular wire-driven parallel robots[END_REF], pick-and-place [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF], sandblasting and painting [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF] operations. Many spatial prototypes are equipped with eight cables for six Degrees of Freedom (DOF) such as the CAROCA prototype, which is the subject of this paper. Sana Baklouti Université Bretagne-Loire, INSA-LGCGM-EA 3913, 20, avenue des Buttes de Cöesmes, 35043 Rennes, France, e-mail: sana.baklouti@insa-rennes.fr Stéphane Caro CNRS, Laboratoire des Sciences du Numérique de Nantes, UMR CNRS n6004, 1, rue de la Noë, 44321 Nantes, France, e-mail: stephane.caro@ls2n.fr Eric Courteille Université Bretagne-Loire, INSA-LGCGM-EA 3913, 20, avenue des Buttes de Cöesmes, 35043 Rennes, France, e-mail: eric.courteille@insa-rennes.fr 1 To customize CDPRs to their applications and enhance their performances, it is necessary to model, identify and compensate all the sources of errors that affect their accuracy. Improving accuracy is still possible once the robot is operational through a suitable control scheme. Numerous control schemes were proposed to enhance the CDPRs precision on static tasks or on trajectory tracking [START_REF] Jamshidifar | Adaptive Vibration Control of a Flexible Cable Driven Parallel Robot[END_REF][START_REF] Fang | Motion control of a tendonbased parallel manipulator using optimal tension distribution[END_REF][START_REF] Zi | Dynamic modeling and active control of a cable-suspended parallel robot[END_REF]. The control can be either off-line through external sensing in the feedback signal [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF], or on-line control based on a reference model [START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF]. This paper focuses on the sensitivity analysis of the CDPR MP static deflection to uncertain geometrical and mechanical parameters. As an illustrative example, Fig. 1: CAROCA prototype: a reconfigurable CDPR (Courtesy of IRT Jules Verne, Nantes) a suspended configuration of the reconfigurable CAROCA prototype, shown in Fig. 1, is studied. 
First, the manipulator under study is described. Then, its elastogeometrical model is written while considering cable mass and elasticity in order to express the static deflection of the MP subjected to an external load. An exhaustive list of geometrical and mechanical uncertainties is given. Finally, the sensitivity of the MP static deflection to these uncertainties is analyzed. Parametrization of the CAROCA prototype The reconfigurable CAROCA prototype illustrated in Fig. 1 was developed at IRT Jules Verne for industrial operations in cluttered environment such as painting and sandblasting large structures [START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF][START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF]. This prototype is reconfigurable because its pulleys can be displaced in a discrete manner on its frame. The size of the latter is 7 m long, 4 m wide and 3 m high. The rotation-resistant steel cables Carl Stahl Technocables Ref 1692 of the CAROCA prototype are 4 mm diameter. Each cable consists of 18 strands twisted around a steel core. Each strand is made up of 7 steel wires. The cable breaking force is 10.29 kN. ρ denotes the cable linear mass and E the cable modulus of elasticity. In this section, both sag-introduced and axial stiffness of cables are considered in the elasto-geometrical modeling of CDPR. The inverse elasto-geometrical model and the direct elasto-geometrical model of CDPR are presented. Then, the variations in static deflection due to external loading is defined as a sensitivity index. Inverse Elasto-Geometric Modeling (IEGM) The IEGM of a CDPR aims at calculating the unstrained cable length for a given pose of its MP. If both cable mass and elasticity are considered, the inverse kinematics of the CDPR and its static equilibrium equations should be solved simultaneously. The IEGM is based on geometric closed loop equations, cable sagging relationships and static equilibrium equations. The geometric closed-loop equations take the form: b p = b b i + b l i -b R p p a i , (1) where b R p is the rotation matrix from F b to F p and l i is the cable length vector. The cable sagging relationships between the forces i f i = [ i f xi , 0, i f zi ] applied at the end point A i of the ith cable and the coordinates vector i a i = [ i x Ai , 0, i z Ai ] of the same point resulting from the sagging cable model [START_REF] Irvine | Cable structures[END_REF] are expressed in F i as follows: i x Ai = i f xi L usi ES + | i f xi | ρg [sinh -1 ( i f zi f C i xi ) -sinh -1 ( i f zi -ρgL usi i f xi )], (2a) i z Ai = i f xi L usi ES - ρgL 2 usi 2ES + 1 ρg [ i f xi 2 + i f zi 2 -i f xi 2 + ( i f zi -ρgL usi ) 2 ], (2b) where L usi is the unstrained length of ith cable, g is the acceleration due to gravity, S is the cross sectional area of the cables. The static equilibrium equations of the MP are expressed as: Wt + w ex = 0, (3) where W is the wrench matrix, w ex is the external wrench vector and t is the 8dimensional cable tension vector. Those tensions are computed based on the tension distribution algorithm described in [START_REF] Mikelsons | A real-time capable force calculation algorithm for redundant tendon-based parallel manipulators[END_REF]. Direct elasto-geometrical model (DEGM) The direct elasto-geometrical model (DEGM) aims to determine the pose of the mobile platform for a given set of unstrained cable lengths. 
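For reference, a minimal implementation of the sagging-cable relationships (2a)-(2b) is given below. The grouping of terms in (2a) is partly garbled in the text above, so the standard form of Irvine's elastic catenary is used; the function and argument names are illustrative. In the complete elasto-geometrical model, such relations for the eight cables are combined with the platform equilibrium (3) and solved simultaneously.

```python
import numpy as np

def sagging_cable_endpoint(fx, fz, L_us, rho, E, S, g=9.81):
    """End-point coordinates (x_A, z_A) of an elastic sagging cable in the
    cable frame F_i, Eqs. (2a)-(2b) written in Irvine's standard form.

    fx, fz : force components applied at the cable end point A_i [N]
    L_us   : unstrained cable length [m]
    rho    : cable linear mass [kg/m]
    E, S   : modulus of elasticity [Pa] and metallic cross-section [m^2]
    """
    ES = E * S
    xA = (fx * L_us / ES
          + abs(fx) / (rho * g)
          * (np.arcsinh(fz / fx) - np.arcsinh((fz - rho * g * L_us) / fx)))
    zA = (fz * L_us / ES
          - rho * g * L_us**2 / (2.0 * ES)
          + (np.sqrt(fx**2 + fz**2)
             - np.sqrt(fx**2 + (fz - rho * g * L_us)**2)) / (rho * g))
    return xA, zA
```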
The constraints of the DEGM are the same as the IEGM, i.e, Eq. ( 1) to Eq. ( 3). If the effect of cable weight on the static cable profile is non-negligible, the direct kinematic model of CDPRs will be coupled with the static equilibrium of the MP. For a 6 DOFs CDPR with 8 driving cables, there are 22 equations and 22 unknowns. In this paper, the non-linear Matlab function ``lsqnonlin´´is used to solve the DEGM. Static deflection If the compliant displacement of the MP under the external load is small, the static deflection of the MP can be calculated by its static Cartesian stiffness matrix [START_REF] Carbone | Stiffness analysis and experimental validation of robotic systems[END_REF]. However, once the cable mass is considered, the sag-introduced stiffness should be taken into account. Here, the small compliant displacement assumption is no longer valid, mainly for heavy or/and long cables with light mobile platform. Consequently, the static deflection can not be calculated through the Cartesian stiffness matrix. In this paper, the IEGM and DEGM are used to define and calculate the static deflection of the MP under an external load. The CDPR stiffness is characterized by the static deflection of the MP. Note that only the positioning static deflection of the MP is considered in order to avoid the homogenization problem [START_REF] Nguyen | Stiffness Matrix of 6-DOF Cable-Driven Parallel Robots and Its Homogenization[END_REF]. As this paper deals with the sensitivity of the CDPR accuracy to all geometrical and mechanical errors, the elastic deformations of the CDPR is involved. This problem is solved by deriving the static deflection of the CDPR obtained by the subtraction of the poses calculated with and without an external payload. For a desired pose of the MP, the IEGM gives a set of unstrained cable lengths L us . This set is used by the DEGM to calculate first, the pose of the MP under its own weight. Then, the pose of the MP is calculated when an external load (mass addition) is applied. Therefore, the static deflection of the MP is expressed as: dp j,k = p j,k -p j,1 , (4) where p j,1 is the pose of the MP considering only its own weight for the j th pose configuration and p j,k is the pose of the MP for the set of the j th pose and k th load configuration. Error modeling This section aims to define the error model of the elasto-geometrical CDPR model. Two types of errors are considered: geometrical errors and mechanical errors. Geometrical errors The geometrical errors of the CDPR are described by δ b i , the variation in vector b i , δ a i , the variation in vector a i , and δ g, the uncertainty vector of the gravity center position; So, 51 uncertainties. The geometric errors can be divided into base frame geometrical errors and MP geometrical errors and mainly due to manufacturing errors. Base frame geometrical errors The base frame geometrical errors are described by vectors δ b i , (i=1..8). As the point B i is considered as part of its correspondent pulley, it is influenced by the elasticity of the pulley mounting and its assembly tolerance. b i is particularly influenced by pulleys tolerances and reconfigurability impact. Moving-platform geometrical errors The MP geometrical errors are described by vectors δ a i , (i=1..8), and δ g. The gravity center of the MP is often supposed to coincide with its geometrical center P. This hypothesis means that the moments generated by an inaccurate knowledge of the gravity center position or by its potential displacement are neglected. 
The Cartesian coordinate vector of the geometric center G does not change in frame F p , but strongly depends on the real coordinates of exit points A i that are related to uncertainties in mechanical welding of the hooks and in MP assembly. Mechanical errors The mechanical errors of the CDPR are described by the uncertainty in the MP mass (δ m) and the uncertainty on the cables mechanical parameters (δ ρ and δ E). Besides, uncertainties in the cables tension δ t affect the error model. As a result, 11 mechanical error sources are taken into account. End-effector mass As the MP is a mechanically welded structure, there may be some differences between the MP mass and inertia matrix given by the CAD software and the real ones. The MP mass and inertia may also vary in operation In this paper, MP mass uncertainty δ m is about ± 10% the nominal mass. Cables parameters Linear mass: The linear mass ρ of CAROCA cables is equal to 0.1015 kg/m. The uncertainty of this parameter can be calculated from the measurement procedure as: δ ρ = m c δ L + L δ m c L 2 , where m c is the measured cable mass for a cable length L. δ L and δ m c are respectively the measurement errors of the cable length and mass. Modulus of elasticity: This paper uses experimental hysteresis loop to discuss the modulus of elasticity uncertainty. Figure 3 shows the measured hysteresis loop of the 4 mm cable where the unloading path does not correspond to the loading path. The area in the center of the hysteresis loop is the energy dissipated due to internal friction in the cable. It depicts a non-linear correlation in the lower area between load and elongation. Based on experimental data presented in Fig. 3, Table 2 presents the modulus of elasticity of a steel wire cable for different operating margins, when the cable is in loading or unloading phase. This modulus is calculated as follows: E p-q = L c F q% -F p% S(x q -x p ) , ( 5 ) where S is the metallic cross-sectional area, i.e. the value obtained from the sum of the metallic cross-sectional areas of the individual wires in the rope based on their nominal diameters. x p and x q are the elongations at forces equivalent to p% and q% (F p% and F q% ), respectively, of the nominal breaking force of the cable measured during the loading path (Fig. 3). L c is the measured initial cable length. For a given range of loads (Tab. 2), the uncertainty on the modulus of elasticity depends only on the corresponding elongations and tensions measurements. In this case, the absolute uncertainty associated with applied force and resulting elongation measurements from the test bench outputs is estimated to be ± 1 N and ± 0.03 mm, respectively; so, an uncertainty of ± 2 GPa can be applied to the calculation of the modulus of elasticity. According to the International Standard ISO 12076, the modulus of elasticity of a steel wire cable is E 10-30 . However, the CDPR cables do not work always between F 10% and F 30% in real life and the cables can be in loading or unloading phase. The mechanical behavior of cables depends on MP dynamics, which affects the variations in cable elongations and tensions. From Table 2, it is apparent that the elasticity moduli of cables change with the operating point changes. For the same applied force, the modulus of elasticity for loaded and unloaded cables are not the same. While the range of the MP loading is unknown, a large range of uncertainties on the modulus of elasticity should be defined as a function of the cable tensions. 
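A direct transcription of Eq. (5), together with a worst-case propagation of the stated ±1 N and ±0.03 mm measurement uncertainties, is sketched below. The propagation formula is a simple first-order bound and is our own assumption rather than the authors' procedure.

```python
def modulus_from_hysteresis(Lc, S, x_p, x_q, F_p, F_q, dF=1.0, dx=0.03e-3):
    """Secant modulus E_{p-q} from a measured load-elongation curve, Eq. (5),
    with a first-order (worst-case) uncertainty estimate.

    Lc       : measured initial cable length [m]
    S        : metallic cross-sectional area [m^2]
    x_p, x_q : elongations [m] at p% and q% of the cable breaking force
    F_p, F_q : corresponding forces [N]
    dF, dx   : absolute measurement uncertainties on force and elongation
    """
    dxpq = x_q - x_p
    E = Lc * (F_q - F_p) / (S * dxpq)
    # worst-case propagation of the force and elongation uncertainties
    dE = Lc / (S * dxpq) * (2 * dF) + abs(E) / dxpq * (2 * dx)
    return E, dE
```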
Tension distribution Two cases of uncertainties of force determination can be defined depending on the control scheme: The first case is when the control scheme gives a tension set-point to the actuators resulting from the force distribution algorithm. If there is no feedback about the tensions measures, the range of uncertainty is relatively high. Generally, the effort of compensation does not consider dry and viscous friction in cable drum and pulleys. This non-compensation leads to static errors and delay [START_REF] De Wit | Robust adaptive friction compensation[END_REF] that degrade the CDPR control performance. That leads to a large range of uncertainties in tensions. As the benefit of tension distribution algorithm used is less important in case of a suspended configuration of CDPR than the fully-constrained one [START_REF] Lamaury | Contribution a la commande des robots parallles a cbles redondance d'actionnement[END_REF], a range of ± 15 N is defined. The second case is when the tensions are measured. If measurement signals are very noisy, amplitude peaks of the correction signal may lead to a failure of the force distribution. Such a failure may also occur due to variations in the MP and pulleys parameters. Here, the deviation is defined based on the measurement tool precision. However, it remains lower than the deviation of the first case by at least 50%. Sensitivity Analysis Due to the non-linearities of the elasto-geometrical model, explicit sensitivity matrix and coefficients [START_REF] Zi | Error modeling and sensitivity analysis of a hybrid-driven based cable parallel manipulator[END_REF][START_REF] Miermeister | An elastic cable model for cable-driven parallel robots including hysteresis effects[END_REF] cannot be computed. Therefore, the sensitivity of the elastogeometrical model of the CDPR to geometrical and mechanical errors is evaluated statistically. Here, MATLAB has been coupled with modeFRONTIER, a process integration and optimization software platform [17] for the analysis. The RMS (Root Mean Square) of the static deflection of CAROCA MP is studied. The nominal mass of the MP and the additional mass are equal to 180 kg and 50 kg, respectively. Influence of mechanical errors In this section, all the uncertain parameters of the elasto-geometrical CAROCA model are defined with uniformly distributed deviations. The uncertainty range and discretization step are given in Tab. 3. In this basis, 2000 SOBOL quai-randm observations are created. m (kg) ρ (kg/m) E (GPa) a i (m) b i (m) δt i (N) Uncertainty range ± 18 ± 0.01015 ± 18 ± 0.015 ± 0.03 ± 15 Step 0.05 3*10 -5 0.05 0.0006 0.0012 0.1 In this configuration, the operating point of the MP is supposed to be unknown. A large variation range of the modulus of elasticity is considered. The additional mass corresponds to a variation in cable tensions from 574 N to 730 N, which corresponds to a modulus of elasticity of 84.64 GPa. Thus, while the operating point of the MP is unknown, an uncertainty of ± 18 GPa is defined with regard to the measured modulus of elasticity E= 102 GPa. Figure 4a displays the distribution fitting of the static deflection RMS. It shows that the RMS distribution follows a quasi-uniform law whose mean µ 1 is equal to 1.34 mm. The RMS of the static deflection of the MP is bounded between a minimum value RMS min equal to 1.12 mm and a maximum value RMS max equal to 1.63 mm; a variation of 0.51 mm under all uncertainties, which presents 38% of the nominal value of the static deflection. 
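The sampling stage of this statistical sensitivity analysis can be reproduced with SciPy's quasi-Monte Carlo module, as sketched below for the six uncertainty ranges of Table 3 (one representative scalar per geometric vector, whereas the study perturbs every coordinate of a_i and b_i). Evaluating the elasto-geometrical model for each sample is left as a placeholder, since that model is not reduced to code here.

```python
import numpy as np
from scipy.stats import qmc

# Symmetric uncertainty ranges of Table 3
ranges = np.array([
    [-18.0,     18.0],      # delta m   [kg]
    [-0.01015,  0.01015],   # delta rho [kg/m]
    [-18.0,     18.0],      # delta E   [GPa]
    [-0.015,    0.015],     # delta a_i [m]
    [-0.03,     0.03],      # delta b_i [m]
    [-15.0,     15.0],      # delta t_i [N]
])

sampler = qmc.Sobol(d=ranges.shape[0], scramble=False)
unit = sampler.random(2000)   # 2000 quasi-random observations (SciPy warns
                              # that n is not a power of two; harmless here)
samples = qmc.scale(unit, ranges[:, 0], ranges[:, 1])

# Each row of `samples` would then be added to the nominal parameters and fed
# to the elasto-geometrical model (IEGM/DEGM) to obtain one RMS value of the
# MP static deflection, from which the empirical distribution (Fig. 4a) and
# the bounds RMS_min / RMS_max are estimated.
```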
Figure 4b depicts the RMS of the MP static deflection as a function of variations in E and ρ simultaneously, whose values vary respectively from 0.09135 to 0.11165 kg/m and from 84.2 to 120.2 GPa. The static deflection is very sensitive to cables mechanical behavior. The RMS varies from 0.42 mm to 0.67 mm due to the uncertainties of these two parameters only. As a matter of fact, the higher the cable modulus of elasticity, the smaller the RMS of the MP static deflection. Conversely, the smaller the linear mass of the cable, the smaller the RMS of the MP static deflection. Accordingly, the higher the sag-introduced stiffness, the higher the MP static deflection. Besides, the higher the axial stiffness of the cable, the lower the MP static deflection. Figure 4c illustrates the RMS of the MP static deflection as a function of variations in ρ and m, whose value varies from 162 kg to 198 kg. The RMS varies from 0.52 mm to 0.53 mm due to the uncertainties of these two parameters only. The MP mass affects the mechanical behavior of cables: the heavier the MP, the larger the axial stiffness, the smaller the MP static deflection. Therefore, a fine identification of m and ρ is very important to establish a good CDPR model. Comparing to the results plotted in Fig. 4b, it is clear that E affects the RMS of the MP static deflection more than m and ρ. As a conclusion, the integration of cables hysteresis effects on the error model is necessary and improves force algorithms and the identification of the robot geometrical parameters [START_REF] Miermeister | An elastic cable model for cable-driven parallel robots including hysteresis effects[END_REF]. Influence of geometrical errors In this section, the cable tension set-points during MP operation are supposed to be known; so, the modulus of elasticity can be calculated around the operating point and the confidence interval is reduced to ± 2 GPa. The uncertainty range and the discretization step are provided in Tab. 4. Figure 5a displays the distribution fitting of the MP static deflection RMS. It shows that the RMS distribution follows a normal law whose mean µ 2 is equal to 1.32 mm and its standard deviation σ 2 is equal to 0.01 mm. This deviation is relatively small, which allows to say that the calibration through static deflection is not obvious. The RMS of the static deflection of the MP is bounded between a minimum value RMS min equal to 1.28 mm and a maximum value RMS max equal to 1.39 mm; a variation of 0.11 mm under all uncertainties. The modulus of elasticity affects the static compliant of the MP, which imposes to always consider E error while designing a CDPR model. The bar charts plotted in Fig. 5b and Fig. 5c present, respectively, the effects of the uncertainties in a i and b i , (i=1..8), to the static deflection of the CAROCA for symmetric (0 m, 0 m, 1.75 m) and non-symmetric (3.2 m, 1.7 m, 3 m) robot configurations. These effects are determined based on t-student index of each uncertain parameter. This index is a statistical tool that can estimate the relationships between outputs and uncertain inputs. The t-Student test compares the difference between the means of two samples of designs taken randomly in the design space: • M + is the mean of the n + values for an objective S in the upper part of domain of the input variable, • M -is the mean of the n -values for an objective S in the lower part of domain of the input variable. 
The t-Student is defined as t = |M -- M + | V 2 g n - + V 2 g n + , where V g is the general variance [START_REF] Courteille | Design optimization of a deltalike parallel robot through global stiffness performance evaluation[END_REF]. When the MP is in a symmetric configuration, all attachment points have nearly the same effect size. However, when it is located close to points B 2 and B 4 , the effect size of their uncertainties becomes high. Moreover, the effect of the corresponding mobile points (A 2 and A 4 ) increases. It means that the closer the MP to a given point, the higher the effect of the variations in the Cartesian coordinates of the corresponding exit point of the MP onto its static deflection. That can be explained by the fact that when some cables are longer than others and become slack for a non-symmetric position, the sag effect increases. Consequently, a good identification of geometrical parameters is highly required. In order to minimize these uncertainties, a good calibration leads to a better error model. Conclusion This paper dealt with the sensitivity analysis of the elasto-geometrical model of CDPRs to mechanical and geometrical uncertainties. The CAROCA prototype was used as a case of study. The validity and identifiability of the proposed model are verified for the purpose of CDPR model-based control. That revealed the importance of integrating cables hysteresis effect into the error modeling to enhance the knowledge about cables mechanical behavior, especially when there is no feedback about tension measurement. It appears that the effect of geometrical errors onto the static deflection of the moving-platform is significant too. Some calibration [START_REF] Dit Sandretto | Certified calibration of a cable-driven robot using interval contractor programming[END_REF][START_REF] Joshi | Calibration of a 6-DOF cable robot using two inclinometers[END_REF] and self-calibration [START_REF] Miermeister | Auto-calibration method for overconstrained cable-driven parallel robots[END_REF][START_REF] Borgstrom | Nims-pl: A cable-driven robot with self-calibration capabilities[END_REF] approaches were proposed to enhance the CDPR performances. More efficient strategies for CDPR calibration will be performed while considering more sources of errors in a future work. Fig. 2 : 2 Fig. 2: The ith closed-loop of a CDPR 5 A 5 6 0.2 0.15 -0.125 B 7 -3.5 -2 3.5 A 7 0.2 -0.15 -0.125 B 8 3.5 -2 3.5 A 8 -0.2 -0.15 0.125 3 Elasto-geometric modeling Fig. 3 : 3 Fig. 3: Load-elongation diagram of a steel wire cable measured in steady state conditions at the rate of 0.05 mm/s Fig. 4 : 4 Fig. 4: (a) Distribution of the RMS of the MP static deflection (b) Evolution of the RMS under a simultaneous variations of E and ρ (c) Evolution of the RMS under a simultaneous variations of m and ρ Fig. 5 : 5 Fig. 5: (a) Distribution of the RMS of the MP static deflection (b) Effect of uncertainties in a i (c) Effect of uncertainties in b i Table 1 : 1 Cartesian coordinates of anchor points A i (exit points B i , resp.) expressed in F p (in F b , resp.) 
Table 2: Modulus of elasticity while loading or unloading phase (GPa)

Phase        E 1-5   E 5-10   E 5-20   E 5-30   E 10-15   E 10-20   E 10-30   E 20-30
Loading      72.5    83.2     92.7     97.2     94.8      98.3      102.2     104.9
Unloading    59.1    82.3     96.2     106.5    100.1     105.1     115       126.8

Table 3: Uncertainties and steps used to design the error model

Parameter          m (kg)   ρ (kg/m)    E (GPa)   a_i (m)   b_i (m)   δt_i (N)
Uncertainty range  ± 18     ± 0.01015   ± 18      ± 0.015   ± 0.03    ± 15
Step               0.05     3×10^-5     0.05      0.0006    0.0012    0.1

Table 4: Uncertainties and steps used to design the error model

Parameter          m (kg)   ρ (kg/m)    E (GPa)   a_i (m)   b_i (m)   δt_i (N)
Uncertainty range  ± 18     ± 0.01015   ± 2       ± 0.015   ± 0.03    ± 15
Step               0.05     3×10^-5     0.05      0.0006    0.0012    0.1

(Fig. 5a axis residue: number of observations vs. static deflection RMS (mm).)
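The effect-size index used for the bar charts of Figs. 5b and 5c can be computed from the sampled designs as sketched below. Splitting the input domain at its mid-point and using the sample variance of the objective for V_g are assumptions, since the exact definition of the general variance follows [START_REF] Courteille | Design optimization of a deltalike parallel robot through global stiffness performance evaluation[END_REF].

```python
import numpy as np

def t_student_index(x, s):
    """Effect-size (t-Student) index of one uncertain parameter.

    x : (N,) sampled values of the uncertain input parameter
    s : (N,) corresponding values of the objective (e.g. MP static deflection)
    Compares the mean of s over designs whose x lies in the lower half of its
    domain (M-) with the mean over the upper half (M+).
    """
    mid = 0.5 * (x.min() + x.max())
    low, up = s[x <= mid], s[x > mid]
    m_minus, m_plus = low.mean(), up.mean()
    v_g = s.var(ddof=1)            # stand-in for the general variance V_g
    return abs(m_minus - m_plus) / np.sqrt(v_g / low.size + v_g / up.size)
```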
24,960
[ "173154", "10659", "173925" ]
[ "25157", "481388", "25157" ]
01758205
en
[ "info" ]
2024/03/05 22:32:10
2018
https://inria.hal.science/hal-01758205/file/2018_tro_mani.pdf
A Direct Dense Visual Servoing Approach using Photometric Moments Manikandan Bakthavatchalam, Omar Tahri and Franc ¸ois Chaumette Abstract-In this paper, visual servoing based on photometric moments is advocated. A direct approach is chosen by which the extraction of geometric primitives, visual tracking and image matching steps of a conventional visual servoing pipeline can be bypassed. A vital challenge in photometric methods is the change in the image resulting from the appearance and disappearance of portions of the scene from the camera field of view during the servo. To tackle this issue, a general model for the photometric moments enhanced with spatial weighting is proposed. The interaction matrix for these spatially weighted photometric moments is derived in analytical form. The correctness of the modelling, effectiveness of the proposed strategy in handling the exogenous regions and improved convergence domain are demonstrated with a combination of simulation and experimental results. Index Terms-image moments, photometric moments, dense visual servoing, intensity-based visual servoing I. INTRODUCTION Visual servoing (VS) refers to a wide spectrum of closedloop techniques for the control of actuated systems with visual feedback [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. A task function is defined from a set of selected visual features, based on the currently acquired image I(t) and the reference image I ⇤ learnt from the desired robot pose. In a typical VS pipeline, the image stream is subjected to an ensemble of measurement processes, including one or more image processing, image matching and visual tracking steps, from which the visual features are determined. Based on the nature of the visual features used in the control law, VS methods can be broadly classified into geometric and photometric approaches. The earliest geometric approaches employ as visual features parameters observed in the image of geometric primitives (points, straight lines, ellipses, cylinders) [START_REF] Espiau | A new approach to visual servoing in robotics[END_REF]. These approaches are termed Image-based Visual Servoing (IBVS). In Pose-based Visual Servoing [START_REF] Wilson | Relative end-effector control using cartesian position based visual servoing[END_REF], geometric primitives are used to reconstruct the camera pose which is then used as input for visual servoing. These approaches are thus dependent on the reliable detection, extraction and subsequent tracking of the aforesaid primitives. While PBVS may be affected by instabilities in pose estimation, IBVS designed from image points may be subject to local minima, singularity, inadequate robot trajectory and limited convergence domain, when the six degrees of freedom are controlled and when the image error is large and/or when the robot has a large displacement to achieve to reach the desired pose [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. This is due to the Parts of this work have been presented in [START_REF] Bakthavatchalam | Photometric moments: New promising candidates for visual servoing[END_REF] and [START_REF] Bakthavatchalam | An improved modelling scheme for photometric moments with inclusion of spatial weights for visual servoing with partial appearance/disappearance[END_REF]. Manikandan Bakthavatchalam and Franc ¸ois Chaumette are with Inria, Univ Rennes, CNRS, IRISA, Rennes, France. 
e-mail: Manikandan.Bakthavatchalam@inria.fr, Francois.Chaumette@inria.fr Omar Tahri is with INSA Centre Val de Loire, Université d'Orléans, PRISME EA 2249, Bourges, France. email: omar.tahri@insa-cvl.fr strong non linearities and coupling in the interaction matrix of image points. To handle these issues, geometric moments were introduced for VS in [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]- [START_REF] Tahri | Visual servoing based on shifted moments[END_REF], which allowed obtaining a large convergence domain and adequate robot trajectories, thanks to the reduction of the non linearities and coupling in the interaction matrix of adequate combinations of moments. However, these methods are afflicted by a serious restriction: their dependency on the availability of well-segmented regions or a set of tracked and matched points in the image. Breaking this traditional dependency, the approach proposed in this paper embraces a more general class, known as dense VS, in which the extraction, tracking and matching of set of points or well-segmented regions is not necessary. In another suite of geometric methods, an homography and a projective homography are respectively used as visual features in [START_REF] Benhimane | Homography-based 2d visual tracking and servoing[END_REF] and [START_REF] Silveira | Direct visual servoing: Vision-based estimation and control using only nonmetric information[END_REF], [START_REF] Silveira | On intensity-based nonmetric visual servoing[END_REF]. These quantities are estimated by solving a geometric or photo-geometric image registration problem, carried out with non-linear iterative methods. However, these methods require a perfect matching of the template considered in the initial and desired images, which strongly limits their practical relevance. The second type of methods adopted the photometric approach by avoiding explicit geometric extraction and resorting instead to use the image intensities. A learning-based approach was proposed in [START_REF] Nayar | Subspace methods for robot vision[END_REF], where the intensities were transformed using Principal Component Analysis to a reduced dimensional subspace. But it is prohibitive to scale this approach to multiple degrees of freedom [START_REF] Deguchi | A direct interpretation of dynamic images with camera and object motions for vision guided robot control[END_REF]. The set of intensities in the image were directly used as visual features in [START_REF] Collewet | Photometric visual servoing[END_REF] but the high nonlinearity between the feature space and the state space limits the convergence domain of this method and does not allow obtaining adequate robot trajectories. This direct approach was later extended to omnidirectional cameras [START_REF] Caron | Photometric visual servoing for omnidirectional cameras[END_REF] and to depth map [START_REF] Teulière | A dense and direct approach to visual servoing using depth maps[END_REF]. In this work, instead of using directly the raw luminance of all the pixels, we investigate the usage of visual features based on photometric moments. 
As it has been shown in [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] that considering geometric moments (built from a set of image points) provides a better behavior than considering directly a set of image points, we will show that considering photometric moments (built from the luminance of the pixels in the image) provides a better behavior than considering directly the luminance of the pixels. These moments are a specific case of the Kernel-based formulation in [START_REF] Kallem | Kernelbased visual servoing[END_REF] which synthesized controllers only for 3D translations and rotation around the optic axis. Furthermore, the analytical form of the interaction matrix of the features proposed in [START_REF] Kallem | Kernelbased visual servoing[END_REF] has not been determined, which makes impossible the theoretical sta-bility analysis of the corresponding control scheme. Different from [START_REF] Kallem | Kernelbased visual servoing[END_REF], the interaction matrix is developed in closed-form in this paper, and most importantly taking into account all the six degrees of freedom, which is the first main contribution of this work. It is shown that this is more general as well as consistent with the current state-of-the-art. Furthermore, an important practical (and theoretical) issue that affects photometric methods stem from the changes in the image due to the appearance of new portions of the scene or the disappearance of previously viewed portions from the camera field-of-view (FOV). This means that the set of measurements varies along the robot trajectory, with a potential large discrepancy between the initial and desired images, leading to an inconsistency between the set of luminances I(t) in the current image and the set I ⇤ in the desired image, and thus also for the photometric moments computed in the current and desired images. In practice, such unmodelled disturbances influence the system behaviour and may result in failure of the control law. Another original contribution of this work is an effective solution proposed to this challenging problem by means of a spatial weighting scheme. In particular, we determine a weighting function so that a closed-form expression of the interaction matrix can be determined. The main contributions of this paper lie in the modelling issues related to considering photometric moments as inputs of visual servoing and in the study of the improvements it brings with respect to the pure luminance method. The control scheme we have used to validate these contributions is a classical and basic kinematic controller [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. Let us note that more advanced control schemes, such as dynamic controllers [START_REF] Mahony | A port-Hamiltonian approach to imagebased visual servo control for dynamic systems[END_REF]- [START_REF] Wang | Adaptive visual tracking for robotic systems without imagespace velocity measurement[END_REF], could be designed from these new visual features. The sequel of the paper is organized as follows: in Section II, the modelling aspects of photometric moments and the associated weighting strategy are discussed in depth. In Section III, the visual features adopted and the control aspects are discussed. Sections IV and V are devoted to simulations and experimental results. Finally, the conclusions drawn are presented in Section VI. II. 
MODELLING Generalizing the classical definition of image moments, we define a weighted photometric moment of order (p + q) as: m pq = Z Z ⇡ x p y q w (x) I (x, t) dx dy (1) where x = (x, y) is a spatial point on the image plane ⇡ where the intensity I(x, t) is measured at time t and w(x) is a weight attributed to that measurement. By linking the variations of these moments to the camera velocity v c , the interaction matrix of the photometric moments can be obtained. ṁpq = L mpq v c (2) where L mpq = ⇥ L vx mpq L vy mpq L vz mpq L !x mpq L !y mpq L !z mpq ⇤ . Each L v/! mpq 2 R is a scalar with the superscripted v denoting translational velocity and ! the rotational velocity along or around the axis x, y or z axis of the camera frame. Taking the derivative of the photometric moments in (1), we have ṁpq = Z Z ⇡ x p y q w(x) İ(x, y) dx dy (3) The first step is thus to model the variations in the intensity İ(x, y) that appear in (3). In [START_REF] Collewet | Photometric visual servoing[END_REF] which aimed to use raw luminance directly as visual feature, the intensity variations were modelled using the Phong illumination model [START_REF] Phong | Illumination for computer generated pictures[END_REF] resulting in an interaction matrix with parts corresponding to the ambient and diffuse terms. In practice, use of light reflection models requires cumbersome measurements for correct instantiation of the models. Besides, a perfect model should take into account the type of light source, attenuation model and different possible configurations between the light sources, the vision sensor and the target object used in the scene. Since VS is robust to modelling errors, adding such premature complexity to the models can be avoided. Instead, this paper adopts a simpler and more practical approach by using the classical brightness constancy assumption [START_REF] Horn | Determining optical flow[END_REF] to model the intensity variations, as done in [START_REF] Collewet | Visual servoing set free from image processing[END_REF]. This assumption considers that the intensity of a moving point x = (x, y) remains unchanged between successively acquired images. This is encapsulated in the following well-known equation I(x + x, t + t) = I(x, t) (4) where x is the infinitesimal displacement undergone by the image point after an infinitesimal increment in time t. A first order Taylor expansion of (4) around x leads to rI > ẋ + İ = 0 (5) known as the classical optic flow constraint equation (OFCE), where rI > = h @I @x @I @y i = ⇥ I x I y ⇤ is the spatial gradient at the image point x. Further, the relationship linking the variations in the coordinates of a point in the image with the spatial motions of a camera is well established [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]: ẋ = L x v c where L x =  1 Z 0 x Z xy (1 + x 2 ) y 0 1 Z y Z 1 + y 2 xy x (6) In general, the depth of the scene points can be considered as a polynomial surface expressed as a function of the image point coordinates [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]. 1 Z = X p 0,q 0,p+qn A pq x p y q (7) where n is the degree of the polynomial with n = 1 for a planar scene. Equation ( 7) is a general form with the only assumption that the depth is continuous. In this work however, for simplifying the analytical forms presented, only planar scenes have been considered in the modelling 1 . 
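As an illustration of definition (1), a discrete approximation of the weighted photometric moments over a grayscale image can be computed as below. The pixel-to-metric conversion is deliberately simplified to one scale and one offset per axis, and the names are illustrative; with w = 1 everywhere one recovers the unweighted moments of Section II-A.

```python
import numpy as np

def photometric_moment(I, p, q, w=None, px=1.0, py=1.0, x0=0.0, y0=0.0):
    """Weighted photometric moment m_pq of Eq. (1), as a discrete sum over the
    pixels of a grayscale image I (float array scaled to [0, 1]).

    (px, py, x0, y0) convert pixel indices to image-plane coordinates
    (simplified intrinsics); w is an optional weighting image of the same
    shape as I.
    """
    h, wth = I.shape
    u, v = np.meshgrid(np.arange(wth), np.arange(h))
    x = (u - x0) * px             # image-plane coordinates of the pixels
    y = (v - y0) * py
    if w is None:
        w = np.ones_like(I)
    return np.sum((x ** p) * (y ** q) * w * I) * px * py
```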
We will see in Section V-D that this simplification is not crucial by considering non planar environments. Therefore, with n = 1, (7) becomes 1 Z = Ax + By + C (8) where A(= A 10 ), B(= A 01 ), C(= A 00 ) are scalar parameters that describe the configuration of the plane in the camera frame. From (5), we can write: İ(x, y) = rI > ẋ (9) By plugging ( 8) and ( 6) in [START_REF] Benhimane | Homography-based 2d visual tracking and servoing[END_REF], we obtain İ(x, y) = L I v c = rI > L x v c ( 10 ) where L I = rI > L x is given by: L > I = 2 Substituting ( 10) into (3), we see that ṁpq = Z Z ⇡ x p y q w(x) L I v c dx dy (12) By comparing with (2), we can then identify and write down the interaction matrix of the photometric moments as L mpq = Z Z ⇡ x p y q w(x) L I dx dy (13) Direct substitution of (11) into the above equation gives us L vx mpq = Z Z ⇡ x p y q w(x)I x (Ax + By + C) dx dy L vy mpq = Z Z ⇡ x p y q w(x)I y (Ax + By + C) dx dy L vz mpq = Z Z ⇡ x p y q w(x)( xI x yI y )(Ax + By + C) dx dy L !x mpq = Z Z ⇡ x p y q w(x)( xyI x (1 + y 2 )I y ) dx dy L !y mpq = Z Z ⇡ x p y q w(x)((1 + x 2 )I x + xyI y ) dx dy L !z mpq = Z Z ⇡ x p y q w(x)(xI y yI x ) dx dy We see that the interaction matrix consists of a set of integrodifferential equations. For convenience and fluidity in the ensuing developments, the following compact notation is introduced. m rx pq = Z Z ⇡ x p y q w(x) I x dx dy (14a) m ry pq = Z Z ⇡ x p y q w(x) I y dx dy (14b) Each component of the interaction matrix in ( 13) can be easily re-arranged and expressed in terms of the above compact notation as follows: L vx mpq = A m rx p+1,q + B m rx p,q+1 + C m rx p,q L vy mpq = A m ry p+1,q + B m ry p,q+1 + C m ry p,q L vz mpq = A m rx p+2,q B m rx p+1,q+1 C m rx p+1,q A m ry p+1,q+1 B m ry p,q+2 C m ry p,q+1 L !x mpq = m rx p+1,q+1 m ry p,q m ry p,q+2 L !y mpq = m rx p,q + m rx p+2,q + m ry p+1,q+1 L !z mpq = m rx p,q+1 + m ry p+1,q (15) The terms m rx pq and m ry pq have to be evaluated to arrive at the interaction matrix. This in turn would require the computation of the image gradient terms I x and I y , an image processing step performed using derivative filters, which might introduce an imprecision in the computed values. In the following, it is shown that a clever application of the Green's theorem can help subvert the image gradients computation. The Green's theorem helps to compute the integral of a function defined over a subdomain ⇡ of R 2 by transforming it into a line (curve/contour) integral over the boundary of ⇡, denoted here as @⇡: Z Z ⇡ ( @Q @x @P @y )dx dy = I @⇡ P dx + I @⇡ Qdy (16) With suitable choices of functions P and Q, we aim to transform the terms m rx pq and m ry pq . To compute m rx pq , we let Q = x p y q w(x) I(x) and P = 0. 
We have @P @y = 0 and @Q @x = px p 1 y q w(x)I(x)+x p y q @w @x I(x)+x p y q w(x)I x [START_REF] Kallem | Kernelbased visual servoing[END_REF] Substituting this back into [START_REF] Teulière | A dense and direct approach to visual servoing using depth maps[END_REF], we can write Z Z ⇡ h p x p 1 y q w(x)I(x) + x p y q @w @x I(x) + x p y q w(x) I x i dxdy = I @⇡ x p y q w(x) I(x)dy (18) Recalling our compact notation in (14a) and rearranging [START_REF] Mahony | A port-Hamiltonian approach to imagebased visual servo control for dynamic systems[END_REF], we obtain m rx pq = Z Z ⇡ ⇣ p x p 1 y q w(x)I(x) + x p y q @w @x I(x) ⌘ dx dy + I @⇡ x p y q w(x)I(x)dy Applying (1) to the first term in the RHS, we have m rx pq = p m p 1,q Z Z ⇡ x p y q @w @x I(x) dx dy + I @⇡ x p y q w(x) I(x) dy In the same manner, the computation of the term m ry pq is again simplified by employing the Green's theorem with P = x p y q w(x) I(x) and Q = 0. m ry pq = q m p,q 1 Z Z ⇡ x p y q @w(x, y) @y I(x) dx dy I @⇡ x p y q w(x) I(x) dx (20) The results ( 19) and ( 20) are generic, meaning there are no explicit conditions on the weighting except that the function is differentiable. Clearly, depending on the nature of the weighting chosen for the measured intensities in (1), different analytical results can be obtained. In the following, two variants of the interaction matrix are developed corresponding to two different choices for the spatial weighting. A. Uniformly Weighted Photometric Moments (UWPM) First, the interaction matrix is established by attributing the same importance to all the measured intensities on the image plane. These moments are obtained by simply fixing w(x, t) = 1, 8 x 2 ⇡ leading to @w @x = @w @y = 0. Subsequently, ( 19) and ( 20) get reduced to 8 < : m rx pq = p m p 1,q + H @⇡ x p y q I(x, y) dy m ry pq = q m p,q 1 H @⇡ x p y q I(x, y) dx (21) The second terms in m rx pq and m ry pq are contour integrals along @⇡. These terms represent the contribution of information that enter and leave the image due to camera motion. They could be evaluated directly but for obtaining simple closedform expressions, the conditions under which they vanish are studied. Let us denote I @⇡ = H @⇡ x p y q I(x, y) dy. The limits y = y m and y = y M are introduced at the top and bottom of the image respectively (see Fig 1a). Since y(= y M ) is constant along C1 and y(= y m ) is constant along C3, it is sufficient to integrate along C2 and C4. Along C 2 , y varies from y M to y m while x remains constant at x M . Along C 4 , y varies from y m to y M while x remains constant at x m . Introducing these limits, we get I @⇡ = x p M ym Z y M y q I(x M , y)dy + x p m y M Z ym y q I(x m , y)dy If I(x M , y) = I(x m , y) = I, 8y, then we have I @⇡ = (x p M x p m ) I ym Z y M y q dy Since we want I @⇡ = 0, the only solution is to have I = 0, that is when the acquired image is surrounded by a uniformly colored black2 background. This assumption, named information persistence (IP) was already implicitly done in [START_REF] Kallem | Kernelbased visual servoing[END_REF] [START_REF] Swensen | Empirical characterization of convergence properties for kernel-based visual servoing[END_REF]. It does not need not be strictly enforced. In fact, mild violations of the IP assumption were deliberately introduced in experiments (refer IV-B) and this was quite acceptable in most cases, as evidenced by our results. 
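In practice, the IP assumption underlying (22) can be checked directly on the acquired image, as in the small sketch below; the intensity threshold is an illustrative value, not one prescribed by the paper.

```python
import numpy as np

def ip_assumption_holds(I, tol=1e-2):
    """Check the information-persistence (IP) assumption discussed above:
    the contour terms of Eq. (21) vanish when the scene is surrounded by a
    (near-)black background, i.e. when the border pixels are close to zero.

    I   : grayscale image, float array scaled to [0, 1]
    tol : maximum admissible border intensity (illustrative threshold)
    """
    border = np.concatenate((I[0, :], I[-1, :], I[:, 0], I[:, -1]))
    return float(border.max()) <= tol
```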
This assumption gets naturally eliminated when appropriate weighting functions are introduced in the moments formulation as shown in II-B. Substituting ( 22) into (15), we get the final closed form expression for the interaction matrix. L vx mpq = A(p + 1)m pq Bpm p 1,q+1 Cpm p 1,q L vy mpq = Aqm p+1,q 1 B(q + 1)m p,q Cqm p,q 1 L vz mpq = A (p + q + 3) m p+1,q + B(p + q + 3) m p,q+1 + C(p + q + 2) m pq L !x mpq = q m p,q 1 + (p + q + 3) m p,q+1 L !y mpq = p m p 1,q (p + q + 3) m p+1,q L !z mpq = p m p 1,q+1 q m p+1,q 1 (23) The interaction matrix in ( 23) has a form which is exactly identical to those developed earlier for the geometric moments [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]. A consistency with previously developed results is thus observed even though the method used for the modelling developments differ completely from [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]. Consequently, all the useful results available in the state of the art with regards to the developments of visual features [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] [8] are applicable as they are for the proposed photometric moments. Unlike [START_REF] Caron | Photometric visual servoing for omnidirectional cameras[END_REF], the image gradients do not appear anymore in the interaction matrix. Their computation is no longer necessary. The developments presented have led to the elimination of this image processing step required by pure luminance-based visual servoing [START_REF] Collewet | Photometric visual servoing[END_REF]. The computation of the interaction matrix is now reduced to a simple and straight-forward computation of the moments on the image plane. Note also that in order to calculate L mpq , only moments of order upto p + q + 1 are required. In addition, we note that as usual in IBVS, the interaction matrix components corresponding to the rotational degrees of freedom are free from 3D parameters. B. Weighted Photometric Moments (WPM) In order to remove the IP assumption we do not attribute anymore an equal contribution to all the measured intensities (w(x) 6 = 1, 8x 2 @⇡), as was done in Sec II-A. Instead, a lesser importance is attributed to peripheral pixels, on which the appearance and disappearance effects are pronounced. To achieve this, the spatial weighting function is made to attribute maximal importance to the pixels in the area around the image center and smoothly reducing it radially outwards towards 0 at the image periphery. If w(x, y) = 0, 8x 2 @⇡, this still ensures I @⇡ = 0 obviating the need to have any explicit IP assumption anymore. Weighting scheme: The standard logistic function l(x) = 1 1+e x smoothly varies between 0 and 1 and has simple derivatives. It is a standard function that is used in machine learning. However, if used to design w(x), it is straight-forward to check that the interaction matrix cannot be expressed as functions of the weighted photometric moments. To achieve this, we propose to use functions with the general structure: F(x) = K exp p(x) (24) with p(x) = a 0 + a 1 x + 1 2 a 2 x 2 + 1 3 a 3 x 3 + ... + 1 n a n x n . Indeed, functions of this structure possess the interesting property that their derivatives can be expressed in terms of the function itself. It is given by: F 0 (x) = K exp p(x) p 0 (x) = p 0 (x)F(x) with p 0 (x) = a 1 + a 2 x + a 3 x 2 + ... + a n x n 1 . 
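A sketch of how the closed-form interaction matrix (23) can be assembled from precomputed moments is given below. The minus signs, which are not legible in the extracted equations above, are restored according to the classical geometric-moment result of [6], with which (23) is stated to be identical; this restoration is therefore an assumption, and the moment container is an illustrative data structure.

```python
def interaction_matrix_mpq(p, q, m, A, B, C):
    """Row L_mpq of Eq. (23) for the unweighted photometric moments (UWPM),
    returned as [L_vx, L_vy, L_vz, L_wx, L_wy, L_wz].

    `m` holds the moments m[j, k] up to order p+q+1 (dict keyed by (j, k) or
    2-D array), and (A, B, C) are the plane parameters of Eq. (8).
    """
    def mm(j, k):
        # moments with a negative index do not appear and are treated as zero
        return m[j, k] if j >= 0 and k >= 0 else 0.0

    L_vx = -A * (p + 1) * mm(p, q) - B * p * mm(p - 1, q + 1) - C * p * mm(p - 1, q)
    L_vy = -A * q * mm(p + 1, q - 1) - B * (q + 1) * mm(p, q) - C * q * mm(p, q - 1)
    L_vz = (A * (p + q + 3) * mm(p + 1, q) + B * (p + q + 3) * mm(p, q + 1)
            + C * (p + q + 2) * mm(p, q))
    L_wx = q * mm(p, q - 1) + (p + q + 3) * mm(p, q + 1)
    L_wy = -p * mm(p - 1, q) - (p + q + 3) * mm(p + 1, q)
    L_wz = p * mm(p - 1, q + 1) - q * mm(p + 1, q - 1)
    return [L_vx, L_vy, L_vz, L_wx, L_wy, L_wz]
```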
In line with the above arguments, we propose the following custom exponential function (see Fig 1b) w(x, y) = K exp a(x 2 +y 2 ) 2 ( 25 ) where K is the maximum value that w can attain and a can be used to vary the area which receives maximal and minimal weights respectively. This choice allows the interaction matrix to be obtained directly in closed-form as a function of the weighted moments. Therefore, no additional computational overheads are introduced since nothing other than weighted moments upto a specific order are required. In addition, the symmetric function to which the exponential is raised ensures that the spatial weighting does not alter the behaviour of weighted photometric moments to planar rotations. The spatial derivatives of (25) are as follows: 8 > < > : @w @x = 4ax(x 2 + y 2 ) w(x) @w @y = 4ay(x 2 + y 2 ) w(x) (26) Substituting ( 26) into ( 19) and ( 20), we obtain ⇢ m rx pq = p m p 1,q + 4a (m p+3,q + m p+1,q+2 ) m ry pq = q m p,q 1 + 4a (m p,q+3 + m p+2,q+1 ) (27) By combining (27) with the generic form in [START_REF] Caron | Photometric visual servoing for omnidirectional cameras[END_REF], the interaction matrix of photometric moments w L mpq weighted with the radial function ( 25) is obtained. w L mpq = h w L vx mpq w L vy mpq w L vz mpq w L !x mpq w L !y mpq w L !z mpq i (28) with w L vx mpq = L vx mpq + 4 a A (m p+4,q + m p+2,q+2 ) + 4 a B (m p+3,q+1 + m p+1,q+3 ) + 4 a C (m p+3,q + m p+1,q+2 ) w L vy mpq = L vy mpq + 4 a A (m p+3,q+1 + m p+1,q+3 ) + 4 a B (m p,q+4 + m p+2,q+2 ) + 4 a C (m p,q+3 + m p+2,q+1 ) w L vz mpq = L vz mpq 4 a A (m p+5,q + 2m p+3,q+2 + m p+1,q+4 ) 4 a B (m p+4,q+1 + 2m p+2,q+3 + m p,q+5 ) 4 a C (m p+4,q + 2m p+2,q+2 + m p,q+4 ) w L !x mpq = L !x 1mpq 4 a(m p+4,q+1 + 2 m p+2,q+3 + m p,q+3 + m p+2,q+1 + m p,q+5 ) w L !y mpq = L !y 1mpq + 4 a(m p+3,q + m p+1,q+2 + m p+5,q + 2 m p+3,q+2 + m p+1,q+4 ) w L !z mpq = L !z mpq = pm p 1,q+1 qm p+1,q 1 We note that the interaction matrix can be expressed as a matrix sum w L mpq = L mpq + 4aL w (29) where L mpq has the same form as [START_REF] Collewet | Visual servoing set free from image processing[END_REF]. We note however that the moments are now computed using the weighting function in [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]. The matrix L w is tied directly to the weighting function. Of course if a = 0 which means w(x) = 1, 8x 2 ⇡, we find w L mpq = L mpq . To compute L mpq , moments of order upto (p + q + 1) are required whereas L w is a function of moments m tu , where t + u  p + q + 5. This is in fact a resultant of the term (x 2 + y 2 ) 2 to which the exponential is raised (see [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]). On observation of the last component of w L mpq , we see that it does not contain any new terms when compared to [START_REF] Collewet | Visual servoing set free from image processing[END_REF]. That is, the weighting function has not induced any extra terms, thus retaining the invariance of the classical moment invariants to optic axis rotations. This outcome was of course desired from the symmetry of the weighting function. On the other hand, if we consider the other five components, additional terms are contributed by the weighting function. 
As a result, moment polynomials developed from the classical moments [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] will not be invariant to translational motions when used with WPM. Thus, there is a need to develop new invariants for use with WPM such that they would retain their invariance to translations. This is an open problem that is not dealt with in this paper. Finally and as usual, the components of the interaction matrix corresponding to the rotational motions are still free from any 3D parameters. Weighted photometric moments allow visual servoing on scenes prone to appearance and disappearance effects. Moreover, the interaction matrix has been developed in closed-form in order to facilitate detailed stability and robustness analyses. The above developments would be near-identical for other weighting function choices of the form given by ( 24) [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]. III. VISUAL FEATURES AND CONTROL SCHEME The photometric moments are image-based measurements m(t) = (m 00 (t), m 10 (t), m 01 (t), ...) obtained from the image I(t). To control n ( 6) degrees of freedom of the robot, a large set of k (> n) individual photometric moments could be used as input s to the control scheme: s = m(t). However, this would lead to redundant features, for which it is well known that, at best, only the local asymptotic stability can be demonstrated [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. That is why we prefer to use the same strategy as in [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]- [START_REF] Tahri | Visual servoing based on shifted moments[END_REF], that is, from the set of available measurements m(t), we design a set of n visual features s = s(m(t)) so that L s is of full rank n and has nice decoupling properties. The interaction matrix L s can easily be obtained from the matrices L mpq 2 R 1⇥6 modelled in the previous section. Indeed, we have: L s = @s @m L m ( 30 ) where L m is the matrix obtained by stacking the matrices L mpq . Then, the control scheme with the most basic and classical form has been selected [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]: v c = c L s 1 (s s ⇤ ) (31) where s ⇤ = s(m ⇤ ) and c L s is an estimation or an approximation of L s . Such an approximation or estimation is indeed necessary since, as detailed in the previous section, the translational components of L mpq are function of the 3D parameters A pq describing the depth map of the scene. Classical choices are c L s = L s (s(t), b Z(t)) where Z = (A, B, C) when an estimation of Z is available, c L s = L s (s(t), c Z ⇤ ), or even c L s = L s (s ⇤ , c Z ⇤ ). Another classical choice is to use the mean c L s = 1 2 ⇣ L s (s(t), b Z(t)) + L s (s ⇤ , c Z ⇤ ) ⌘ or c L s = 1 2 ⇣ L s (s(t), c Z ⇤ ) + L s (s ⇤ , c Z ⇤ ) ⌘ since it was shown to be efficient for very large camera displacements [START_REF] Malis | Improving vision-based control using efficient second-order minimization techniques[END_REF]. 
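For illustration, the control law (31) together with the classical choices of the estimated interaction matrix listed above can be sketched as follows (Python, with hypothetical helper names and placeholder matrices; the interaction matrices themselves are assumed to come from the moment-based modelling of Section II). The pseudo-inverse is used so that the same code covers the square full-rank case of (31) as well as a redundant feature set.

```python
import numpy as np

def velocity_command(s, s_star, L_current, L_desired, lam=1.0, use_mean=True):
    """One iteration of the IBVS law (31): v_c = -lam * pinv(L_hat) (s - s*),
    with L_hat either the current interaction matrix or the 'mean' choice."""
    L_hat = 0.5 * (L_current + L_desired) if use_mean else L_current
    return -lam * np.linalg.pinv(L_hat) @ (s - s_star)

# Hypothetical 4-feature / 4-dof example (SCARA-type motions); the interaction
# matrices would be built from the photometric moments.
s      = np.array([0.02, -0.01, 1.05, 0.30])
s_star = np.array([0.00,  0.00, 1.00, 0.00])
L_cur  = np.eye(4)
L_des  = np.eye(4)
v_c = velocity_command(s, s_star, L_cur, L_des, lam=1.5)
```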
With such a control scheme, it is well known that the global asymptotic stability (GAS) of the system in the Lyapunov sense is ensured if the following sufficient condition holds [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]: L s c L s 1 > 0 (32) Of course, in case c L s = L s , the system is GAS if L s is never singular, and a perfect decoupled exponential decrease of the error s s ⇤ is obtained. Such a perfect behavior is not obtained as long as c L s 6 = L s , but the error norm will decrease and the system will converge if condition (32) is ensured. This explains the fact that a non planar scene can be considered in practice (see Section V-D), even if the modelling developed in the previous section was limited to the planar case. A. Control of SCARA motions Photometric moments-based visual features can be used to control not only the subset of SE(3) motions considered in [START_REF] Kallem | Kernelbased visual servoing[END_REF] but also full 6 dof motions. In the former case, the robot is configured for SCARA (3T+1R, n = 4) type actuation to control only the planar translation, translation along the optic axis and rotation around the optic axis. The camera velocity is thus reduced to v cr = (v x , v y , v z , ! z ). Similarly to [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF], the following set of 4 visual features is used to control these 4 dofs. s r = (x n , y n , a n , ↵) where x n = x g a n , y n = y g a n , a n = Z ⇤ q m ⇤ From the simple relations between s r and m pq , (p + q < 3), it is quite simple to determine the analytical form of the interaction matrix L sr using (30). When the target is parallel to the image plane (A = B = 0), the following sparse matrix is obtained for UWPM. L sr = 2 6 6 4 L xn L yn L an L ↵ 3 7 7 5 = 2 6 6 4 1 0 0 y n 0 1 0 x n 0 0 1 0 0 0 0 1 3 7 7 5 (35) Let us note that the current value of the depth does not appear anywhere in L sr and only the desired value Z ⇤ intervenes indirectly through x n and y n , and thus in L sr . This nice property and the sparsity in (35) justify the choice of s r . Following the line of analysis at the start of this section, we infer that the control law using d L sr = L sr (s r (t), Z ⇤ ) is GAS since L sr is always of full rank 4 and L sr d L sr 1 = I 4 when c Z ⇤ = Z ⇤ . Let us now consider the more general case where c Z ⇤ 6 = Z ⇤ . From (35), it is straight-forward to obtain L s c L s 1 = 2 6 6 4 1 0 0 Y 0 1 0 X 0 0 1 0 0 0 0 1 3 7 7 5 (36) where Y = ( b Z ⇤ Z ⇤ 1)y n and X = (1 b Z ⇤ Z ⇤ )x n . The eigen values of the symmetric part of the above matrix product are given by = {1, 1, 1 ± p X 2 +Y 2 2 }. For (32) to hold, all eigen values have to be positive, that is, p X 2 +Y 2 2 < 1 , X 2 + Y 2 < 4. Back-substitution of X and Y yields the following bounds for system stability: 1 2 p x 2 n + y 2 n < b Z ⇤ Z < 1 + 2 p x 2 n + y 2 n ( 37 ) which are easily ensured in practice since x n and y n are small (0.01 typically). Let us now consider the case where c L s = I 4 , which is a coarse approximation. In that case, we obtain L s c L s 1 = 2 6 6 4 1 0 0 y n 0 1 0 x n 0 0 1 0 0 0 0 1 3 7 7 5 (38) Then, proceeding as previously leads to the following condition for GAS x 2 n + y 2 n < 4 (39) which, once again, is always ensured in practice. Note that these satisfactory theoretical results have not been reported previously and are an original contribution of this work. 
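The sufficient condition (32) and the resulting bounds (37) and (39) can also be checked numerically. The sketch below (the values of X and Y are arbitrary test cases) verifies that, for a product of the form (36) or (38), positivity of the eigenvalues of the symmetric part of L_s L̂_s⁻¹ is equivalent to X² + Y² < 4.

```python
import numpy as np

def condition_32_holds(L_s, L_s_hat):
    """Sufficient condition (32): the symmetric part of L_s @ inv(L_s_hat)
    must be positive definite."""
    M = L_s @ np.linalg.inv(L_s_hat)
    eig = np.linalg.eigvalsh(0.5 * (M + M.T))
    return bool(np.all(eig > 0.0))

def scara_product(X, Y):
    """Product of the form (36)/(38): identity except for the last column."""
    M = np.eye(4)
    M[0, 3], M[1, 3] = Y, X
    return M

# Eigenvalues of the symmetric part are {1, 1, 1 +/- sqrt(X^2 + Y^2)/2},
# hence (32) reduces to X^2 + Y^2 < 4, which gives (37) and (39).
for X, Y in [(0.01, 0.02), (1.5, 1.4), (2.0, 2.0)]:
    assert condition_32_holds(scara_product(X, Y), np.eye(4)) == (X**2 + Y**2 < 4)
```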
Unfortunately, exhibiting similar conditions for the WPM case is not so easy since the first three columns of L sr are not as simple as (35) due to the loss of invariance property of WPM. B. 6 dof control To control all the 6 dof, two more features in addition to (33) are required. In moments-based VS methods, these features are chosen as ratios of moment polynomials which are invariant to 2D translations, planar rotation and scale. In [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF], [START_REF] Tahri | Visual servoing based on shifted moments[END_REF], several moment invariants-based visual features have been introduced. In principle, all these previous results could be adopted for use with the photometric moments proposed in this work. Certainly, an exhaustive exploration of all these choices is impractical. Based on several simulations and experimental convergence trials (see [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]), the following visual feature introduced in [START_REF] Tahri | Visual servoing based on shifted moments[END_REF] was selected: r = 1 / 2 (40) with ⇢ 1 = 3μ 30 μ12 + μ2 30 + 3μ 03 μ21 + μ2 03 2 = μ30 μ12 + μ2 21 μ03 μ21 + μ2 12 ( 41 ) where μpq is the shifted moment of order p + q with respect to shift point x sh (x sh , y sh ) defined by [START_REF] Tahri | Visual servoing based on shifted moments[END_REF]: μpq = Z Z (x x g + x sh ) p (y y g + y sh ) q w(x)I(x) dx dy To sum up, the shifted moments in (42) are computed with respect to P 1 and P 2 , resulting in two different sets of shifted moments. Then, the feature in (40) is computed employing these two sets of moments to derive two corresponding visual features r P1 and r P2 . Therefore, the following set of visual features for controlling the 6 dof is obtained: s = (x n , y n , a n , r P1 , r P2 , ↵) (44) The interaction matrix developments of r P1 and r P2 are provided in Appendix A. When UWPM are used, the interaction matrix L s exhibits the following sparse structure when the sensor and target planes are parallel. The matrix E is non-singular if its left 2 ⇥ 2 submatrix has a non-zero determinant. When the interaction matrix is computed with moments from shift points (P 1 6 = P 2 ) as described above, this condition is effortlessly ensured. As a result, the interaction matrix L || s is non-singular [START_REF] Tahri | Visual servoing based on shifted moments[END_REF]. On the other hand, when the features are built from WPM, the sparsity in (45) cannot be achieved anymore. This is because L s has a more complex form, except for its last column which remains exactly the same (since behaviour with respect to optic axis rotations is not altered). Nevertheless, the obtained results were quite satisfactory for a variety of scenes and camera displacements, as shown in the next section. L || s =  I 3 D 0 3 E ( IV. VALIDATION RESULTS FOR UWPM A. Modelling Validation and Comparison to Pure Luminance In this section, simulation results of 6 dof positioning tasks are presented to demonstrate the correctness of the modelling of the UWPM proposed in Sec.II-A and to compare their behavior to pure luminance. The initial and desired images are shown in Figs. 3a and3b respectively. The background is empty without appearance or disappearance of scene portions in the camera view. The initial pose is chosen far away from the desired one such that the image overlap is small. 
The displacements required for convergence are a translation of t = [1.0m, 1.0m, 1.0m] and a rotation of R = [25 , 10 , 55 ]. The control law in (31) is used with c L s = L s (s(t), Z(t)). This control law is expected to result in a pure exponential decrease of the errors to 0. In simulation, the depths Z(t) are readily available from ground truth and need not be estimated. A gain of = 1.0 was used for this experiment. As seen from Fig 3c, a perfect exponential decrease of the errors is indeed obtained as expected. Furthermore, the camera traces a straight-forward path to the goal pose as shown in Fig 3d . This demonstrates the validity of the modelling steps and the design of the visual features. Let us note that no image processing (image matching or visual tracking) were used with the photometric moments in the reported experiments. Comparison to pure luminance: Then, the same control law configuration was tested using pure luminance directly as visual feature, that is using v B. Experimental Results with UWPM Experiments were performed at video rate on a Viper850 6 dof robot. Unlike in Sec IV-A, mild violations of the IP assumption are deliberately allowed. The photometric moments are tested first on SCARA-type motions and then with 6 dof. 1) SCARA motions: For this experiment, the features in (33) are used with their current interaction matrix b L s = c L s (s(t), c Z ⇤ ), with c Z ⇤ = (0, 0, 1/ Ẑ⇤ ), Ẑ⇤ roughly approximated with depth value at the desired pose. A gain of = 1.5 was used. The desired image is shown in Figure 5b. The initial pose is chosen such that the image in 5a is observed by the camera. The target is placed such that very small portions of its corners are slightly outside the field of view (see Fig 5a). Furthermore, the background is not perfectly black, thereby non-zero. It can be observed from Fig 6c that the decrease in errors is highly satisfactory while we recall that only the interaction matrix at the desired configuration and approximate depth were employed. The generated velocity profiles are also smooth as shown in Fig. 6d. Clearly, the camera spatial trajectory is close to a geodesic as shown in Figure IV-B2. Further, an accuracy of [ 0.56mm, 0.08mm, 0.14mm] in translation and [ 0.01 , 0.04 , 0.03 ] in rotation was obtained. The above experimental results showed results with UWPM where there are only mild violations of the IP assumption. Next, we show results on more general scenes with WPM where this restrictive assumption (black background) has been eliminated. V. VALIDATION RESULTS FOR WPM For all the experiments presented in this section, the parameter K = 1 is fixed, so maximum weight a pixel can have is 1. Then, a is chosen with a simple heuristic, that 40% of the image pixels will be assigned a weight greater than 0.5 and around 90% a weight greater than 0.01. This is straightforward to compute from the definition of w(x, y). For an image resolution of 640 ⇥ 480 for example, with K = 1, a = 650 satisfies the above simple heuristic. The surface of w(x, y) with these parameters is depicted in Fig. 1b. Let us note that the tuning of these parameters is not crucial. In our case, changing a by ±200 does not introduce any drastic changes in the results. A. Validation of WPM In this section, the modelling of WPM is validated using 6 dof positioning tasks in simulation. No specific backgrounds are considered anymore since the WPM designed in Section II-B are equipped to handle such scenarios. 
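Before turning to these validation experiments, the parameter heuristic described above can be made concrete. The sketch below assumes, only for illustration, that the 640×480 pixel grid is mapped to normalized coordinates spanning roughly ±0.4 × ±0.3 (as in Fig. 1b); with a different pixel-to-metric mapping the computed fractions, and hence the value of a satisfying the heuristic, will differ.

```python
import numpy as np

K = 1.0
xs = np.linspace(-0.4, 0.4, 640)     # assumed normalized image-plane extent
ys = np.linspace(-0.3, 0.3, 480)
X, Y = np.meshgrid(xs, ys)

def weight_fractions(a):
    """Fraction of pixels with w > 0.5 and with w > 0.01 for a given a."""
    w = K * np.exp(-a * (X**2 + Y**2)**2)
    return float((w > 0.5).mean()), float((w > 0.01).mean())

# Scan candidate values of a and inspect how close they come to the
# 40% / 90% heuristic for the chosen coordinate mapping.
for a in (200, 400, 650, 1000):
    f_half, f_small = weight_fractions(a)
    print(a, round(f_half, 3), round(f_small, 3))
```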
Comparisons to both the pure luminance feature and to moments without the weighting strategy are made. The image learnt from the desired pose is shown in Fig 7b . In the image acquired from the initial robot pose (see Fig 7a ), a large subset of pixels not present in the desired image have appeared. In fact, there is no clear distinction of which pixels constitute the background. These scenarios are more representative of camera-mounted robotic arms interacting with real world objects. For the control, the set of visual features (44) is adopted with the current interaction matrix L s (s(t), Z(t)). The depths are not estimated but available from the ground truth data. A gain of = 1.5 was used for all the experiments. The resulting behaviour is very satisfactory. The errors in the visual features decrease exponentially as shown in Figures 7c and7d. This confirms the correctness of the modelling steps used to obtain the interaction matrix of WPM. Naturally, the successful results also imply the correctness of the visual features obtained from the weighted moments. Comparison with UWPM: For the comparison, the same experiment is repeated with the same control law but without the weighting strategy. In this case, the errors appear to decrease initially (see Figs 8a and8b). However, after about 25 iterations the system diverges (see Fig 8c) and the servo is stopped after few iterations. As expected, the system in this case is clearly affected by the appearance and disappearance of parts of the scene. Comparison to pure luminance: Next, we also compared the WPM with the pure luminance feature. Also in this case, the effect of the extraneous regions is severe and the control law does not converge to the desired pose. The generated velocities do not regulate the errors satisfactorily (see Fig 8d). The error This can be compared to the case of the WPM where the error norm decreases exponentially as shown in Figure 8e. Also, as mentioned previously, the visual features are redundant and there is no mapping of individual features to the actuated dof. The servoing behaviour depends on the profile of the cost function, which is dependent on all the intensities in the acquired image. The appearance and disappearance of scene portions thus also affects the direct visual servoing method. Thus, we see that the extraneous regions have resulted in the worst case effect namely non-convergence to the desired pose in both the UWPM as well as when using the pure luminance. Next, we discuss results obtained from servoing on a scene different from the one used in this experiment. B. Robustness to large rotations In this simulation, we consider 4 dof and very large displacements such that large scene portions enter and leave the camera field of view (see Figures 9a and9b). A rotation of 100 around the optic axis and translational displacement of c⇤ t c = [5cm, 4cm, 25cm] are required for convergence. For this experiment, the VS control law in (31) with the features in (33) is used with a gain of = 2. For this difficult task, the mean c has been selected in the control scheme. Note that the depths are not updated at each iteration and only approximated using Z ⇤ = 1. This choice was on purpose to show that online depth estimation is not necessary and an approximation of its value at the desired pose is sufficient for convergence. The visual servoing converged to the desired pose with an accuracy of 0.29 in rotation and [ 0.07mm, 0.48mm, 0.61mm] in translation. The control velocities generated are shown in Fig. 
9d and the resulting Cartesian trajectories are shown in Fig. 9e. This experiment demonstrates the robustness of the WPM to very large displacements even when there is appearance and disappearance of huge parts of the image. This affirms also that the convergence properties are improved with the proposed WPM. L s = 1 2 ⇣ L s (s(t), c Z ⇤ ) + L s (s ⇤ , c Z ⇤ ) ⌘ ( C. Empirical Convergence Analysis In this section, we compare through simulations the convergence domain of WPM with pure luminance and UWPM. For this, we considered the 4dof case as in [START_REF] Swensen | Empirical characterization of convergence properties for kernel-based visual servoing[END_REF]. Artificially generated synthetic scenes in which polygonal blocks are sprinkled at the image periphery were employed. As seen from Fig 10, this allows to simulate in varying degrees the appearance and disappearance of scene portions in the camera FOV. For this analysis, the desired pose to be attained is fixed at 1.8m. Positioning tasks starting from 243 different initial poses consisting of 3 sets of 81 poses each, conducted at 3 different depths of 1.8m, 1.9m and 2.0m were considered. In all these initial poses, the camera is subjected to a rotation of 25 around the optic axis while the x and y translations vary from 0.2m to 0.2m. The interaction matrix c L s = L s (s ⇤ , c Z ⇤ ) is chosen in the control scheme, just like in previous works on convergence analysis [START_REF] Collewet | Photometric visual servoing[END_REF] [START_REF] Teulière | A dense and direct approach to visual servoing using depth maps[END_REF]. We consider an experiment to have converged if the task error kek is reduced to less than 1e 10 in a maximum of 300 iterations. In addition to this condition, we also impose that the SSD error defined by e SSD = P x [I(x) I ⇤ (x)] 2 /N pix between the final and learnt images is less than 1.0. This criterion ensures that a non-desired equilibrium point is not considered wrongly as converged. In the reported converged experiments, the final accuracy in pose is less than 1mm for translations and less than 1 for the planar rotation. The UWPM met with failure in all the cases. No segmentation or thresholding is employed and the servo is subjected to appearance and disappearance effects at the image periphery. A dismal performance resulted as expected without the weighting strategy since the model is not equipped to handle the energy inflow and outflow at respect to UWPM, the same set of experiments was repeated using a dense texture (see Fig. 11), where the WPM yield a better result than non-weighted moments. The non-weighted moments have converged on an average only in 55% of the cases. Also note that this is different from the synthetic case at 0%, that is they were completely unable to handle the entry and exit of extraneous regions. In comparison, for WPM, only 3 cases failed to converge out of 243 total runs with a very satisfactory convergence rate of 98%. In fact, in the first two sets of experiments, WPM converged for all the generated poses yielding a 100% convergence rate. No convergence to any undesired equilibrium points were observed, thanks to the textured object. The final accuracies for all the converged experiments was less than 1mm in translation and less then 1 in rotation. 
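The two acceptance criteria used in this convergence analysis translate directly into a small test routine; the sketch below uses hypothetical array names and omits the servo loop itself.

```python
import numpy as np

MAX_ITER = 300
TASK_TOL = 1e-10
SSD_TOL  = 1.0

def ssd_error(I_final, I_star):
    """e_SSD = sum_x (I(x) - I*(x))^2 / N_pix between final and learnt images."""
    diff = I_final.astype(float) - I_star.astype(float)
    return float(np.sum(diff ** 2)) / I_star.size

def has_converged(task_error_history, I_final, I_star):
    """Converged if ||e|| < 1e-10 is reached within 300 iterations AND the
    final SSD error is below 1.0 (rejects undesired equilibria)."""
    reached = any(e < TASK_TOL for e in task_error_history[:MAX_ITER])
    return reached and ssd_error(I_final, I_star) < SSD_TOL
```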
Based on the clear improvements in convergence rate, we conclude that WPM are effective as a solution to the problem of extraneous image regions and result in a larger convergence domain in comparison to classical nonweighted moments. We have finally to note that for larger lateral displacements, all methods fail since initial and desired images do not share sufficient common information. D. Robustness to non planar environments In this section, visual servoing with WPM is demonstrated on a non planar scene with the Viper850 robot by considering 4 dof as previously. A realistic scenario is emulated by placing five 3D objects of varying shape, size and color in the scene as shown in c L s = 1 2 ⇣ L s (s(t), c Z ⇤ ) + L s (s ⇤ , c Z ⇤ ) ⌘ has been selected in the control scheme. The depth distributions in the scene are not estimated nor known apriori. An approximation c Z ⇤ = (0, 0, 1/ Ẑ⇤ ) with Ẑ⇤ = 0.5m was used. A gain of = 0.4 was employed. The control law generates camera velocities that decrease exponentially (see Fig 12d), which causes a satisfactory decrease in the feature errors (see Fig 12c). The average accuracy in positioning in translations is 0.6mm while the rotational accuracy is 0.15 . The camera spatial trajectory is satisfactory as seen from Fig. 12e. The simplification [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] of planar scene introduced in the modelling (see Section II) is therefore a reasonable tradeoff of complexity, even if it is not possible to demonstrate that the sufficient stability condition (32) is ensured since c L s 6 = L s . This demonstrates the robustness of visual servoing with respect to (moderate) modelling approximations. E. 6 dof experimental results Several 6dof positioning experiments were conducted on the ViPER 850 robot. A representative one is presented below while the others can be consulted in [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]. For this experiment, the desired robot pose is such that the camera is at 0.5m in a frontoparallel configuration in front of the target. The image learnt from this pose is shown in Fig 13b. L s as the mean of the desired and current interaction matrices. No pose estimation is performed and the depth is approximated roughly as 0.5m. The appearance of new scene portions in the camera view from the left side of the image does not affect the convergence of the visual servo. This influx of information is handled gracefully thanks to the improved modelling used by the WPM. The error in features related to control of rotational motions is very satisfactory (see Fig13e). On the other hand, from the error decrease in features related to control of translational motions in Figure 13d, it can be seen that the error in feature a n is noisy. This feature is based on the area moment m 00 directly related to the quantity of pixels in the image. Since the lighting conditions are not controlled, this might sometimes contribute to some noise in the features. It is also to be noted that when the interaction matrix is updated at each iteration (for the mean configuration in this case), this noise in the features sometimes make the velocities noisy as well (see Figure 13f). However, this noise does not affect the satisfactory convergence as evidenced by our results. A satisfactory Cartesian behaviour was obtained as shown in Fig 13g . 
The final accuracy in translations is [ 0.05mm, 1.1mm, 0.08mm] and for the rotations is [0.18 , 0.006 , 0.019 ]. Let us finally note that a superior strategy would be to use the photometric moments during the beginning of the servo and to switch over to the pure luminance feature near convergence (when the error norm is below a certain lower bound). This strategy would ensure both enhanced convergence domain thanks to photometric moments and excellent accuracies at convergence thanks to luminance feature. Let us finally note that it is possible to use a normalized intensity level in order to be robust to global lighting variations. Such a normalization can be easily obtained by computing in a first step the smallest and highest values observed in the image. This simple strategy does not modify any modelling step presented in this paper as long as the parts of the scene corresponding to these extremal values do not leave the image (or new portions with higher or smaller intensities do not enter in the camera field of view), which would thus allow obtaining exactly the same results in that case. On the other hand, if the extremal values do not correspond to the same parts of the scene, the induced perturbations may cause the failure of the servoing. VI. CONCLUSION This paper proposed a novel visual servoing scheme based on photometric moments, which capture the image intensities in the form of image moments. The analytical form of the interaction matrix has been derived for these new features. Visual servoing is demonstrated on scenes which do not contain a discrete set of points or monotone segmented objects. Most importantly, the proposed enhanced model takes into account the effect of the scene portions which appear and disappear from the camera field of view during the visual servoing. Existing results based on moment invariants are then exploited to obtain visual features from the photometric moments. The control using these visual features is performant for large SCARA motions (where the images acquired during the servo have very less overlap with the desired image), with a large convergence domain in comparison to both the pure luminance feature and to features based on nonweighted moments. The proposed approach can also be used with non planar environments. This paper thus brings notable improvements over the pure luminance feature and existing moments-based VS methods. The 6 dof control using weighted photometric moments yielded satisfactory results for small displacements to be realized. The control can be rendered suitable for large displacements if the alteration in invariance properties induced by the weighting function can be prevented. So, an important future direction of work would be about the formulation of alternate weighting strategies that preserve the invariance properties as in the non-weighted moments. This is an open and challenging problem that, once solved, would ease a complete theoretical stability and robustness analysis. Also, it is to be noted that the method will certainly fail when the shared portions between the initial and desired images are too low. Another distinction with respect to geometric approaches is that the performance depends on the image contents and hence large uniform portions with poorly texture scenes might pose issues for the servoing. Despite these obvious shortcomings, we believe that direct approaches will become more commonplace and lead to highly performant visual servoing methods. APPENDIX A. 
Interaction matrix of r P1 and r P2 In (42), on expanding the terms (x x g + x sh ) p and (y y g + y sh ) q using the binomial theorem, the shifted moments can be expressed in terms of the centred moments: Fig. 1 . 1 Fig. 1. a) Evaluation of contour integrals in the interaction matrix developments, b) Custom exponential function w(x, y) = exp 650(x 2 +y 2 ) 2 in the domain 0.4  x  0.4 and 0.3  y  0.3. Gradual reduction in importance from maximum (dark red) in the centre outwards to minimum (blue) at the edges With the same line of reasoning, the contour integral in m ry pq also vanishes. Then (21) transforms to the following simple form:⇢ m rx pq = p m p 1,q m ry pq = q m p,q 1 2 arctan ⇣ 2µ 11 µ 20 µ 02 ⌘µ 20 = m 20 m 00 x 2 g µ 02 = m 02 m 00 y 2 g µ 11 = 21102202211 00m00 with x g = m 10 /m 00 and y g = m 01 /m 00 the centre of gravity coordinates, Z ⇤ the desired depth and finally ↵ = 1 is made of centred moments given by: m 11 m 00 x g y g Fig. 2 . 2 Fig. 2. Shift points P 1 (xg + x sh1 ) and P 2 (xg + x sh2 ) with respect to which the shifted moments are computed. As shown in Fig 2, one shift point is selected along the major orientation (✓ = ↵) and the second point orthogonal to the previous (✓ = ↵ + ⇡ 2 ) such that we have : P 1 [x g + p m 00 cos(↵), y g + p m 00 sin(↵)] and P 2 [x g + p m 00 cos(↵ + ⇡ 2 ), y g + p m 00 sin(↵ + ⇡ 2 )]. To sum up, the shifted moments in (42) are computed with respect to P 1 and P 2 , resulting in two different sets of shifted moments. Then, the feature in (40) is computed employing these two sets of moments to derive two corresponding visual features r P1 and r P2 . Therefore, the c = c L I + (I I ⇤ ). The velocity profiles generated are shown in Figure 4c. The pure luminance experiment is not successful as it results in an enormous final error e ⇡ 10 9 , as seen from Fig 4a. In direct Fig. 3 .Fig. 4 . 34 Fig. 3. Simulation results with UWPM in perfect conditions Fig. 5 . 5 Fig. 5. Experimental results in SCARA mode actuation pertaining to Section IV-B1. Fig. 6 . 6 Fig. 6. 6 dof experimental results pertaining to Section IV-B2. Fig. 7 . 7 Fig. 7. Simulation V-A : 6 dof VS with WPM pertaining to Section V-A. Fig. 8 . 8 Fig.8. Simulation V-A : 6 dof VS comparison to UWPM and pure luminance (see Fig.7). Fig. 9. 4 dof simulation results under large rotations (see Section V-B). Fig. 10 .Fig. 11 . 1011 Fig. 10. Desired image in (a) and a sampling of different images from the 243 generated initial poses are shown in (b)-(d) Fig 12f. In the initial acquired image (see Fig 12a), 3 out of these 5 objects are not fully visible. The WPM were in fact conceived for use in such scenarios. Rotational displacement of 10 around the optic axis and translations of c⇤ t c = [1.5cm, 1cm, 8cm] are required for convergence. Once again, the mean interaction matrix The initial pose is chosen such that the image in Fig 13a is observed. Let us note that Lauren Bacal present in the left part of the desired image is completely absent from the initial image. The corresponding difference image is shown in Fig 13c. There is no monotone segmented object and the assumption about uniform black background is clearly not valid in this case. Nominal displacements of [ 0.35cm, 1.13cm, 6.67cm] in translation and [0.33 , 1.05 , 12.82 ] in rotation are required for convergence. The control law in (31) with the features in (44) is used, with b Fig. 12 . 12 Fig. 12. 4 dof experimental results with a non planar scene (see Section V-D). Fig. 13. 
WPM 6 dof experimental results (see Section V-E).
µ_{p−k,q−l} = ∫∫_π (x − x_g)^{p−k} (y − y_g)^{q−l} w(x) I(x) dx dy (47)
Differentiating (46) yields the interaction matrix of the shifted moments L_µ̃pq, expressed in terms of L_{x_sh}, L_{y_sh} and of the interaction matrices of the centred moments L_{µ_{p−k,q−l}} (48), with
L_{x_sh} = (1/2) (cos θ / √m_00) L_{m_00} − √m_00 sin θ L_θ
L_{y_sh} = (1/2) (sin θ / √m_00) L_{m_00} + √m_00 cos θ L_θ (49)
where θ = α for shift point P_1 and θ = α + π/2 for shift point P_2. Further, by differentiating (47), we obtain the interaction matrix of the centred moments, with r = p + q − k − l. Knowing (48) and (49), the interaction matrix for any shifted moment of order p + q can be obtained. The next step is to compute L_{I_1} and L_{I_2} by differentiating (41). Finally, the interaction matrix L_r is directly obtained by differentiating (40).
Note that the general analytic form of L_mpq could be obtained with n > 1 for non planar scenes, as was done in [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF] for the geometric moments. The background may equivalently be white, with the intensity I redefined to I_max − I; the rest of the developments then remain identical.
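To make the appendix concrete, the shifted moments of (42) and the two shift points P_1 and P_2 can be computed as follows (a hypothetical discrete implementation along the lines of the moment sum sketched for Section II; the weighting array w and the normalized coordinate grids are assumed to be given).

```python
import numpy as np

def shifted_moment(I, p, q, x, y, w, x_g, y_g, x_sh, y_sh):
    """Discrete version of (42):
    mu~_pq = sum (x - x_g + x_sh)^p (y - y_g + y_sh)^q w(x,y) I(x,y) dx dy."""
    dx = x[0, 1] - x[0, 0]
    dy = y[1, 0] - y[0, 0]
    return np.sum(((x - x_g + x_sh) ** p) * ((y - y_g + y_sh) ** q) * w * I) * dx * dy

def shift_points(m00, alpha):
    """Shift offsets for P1 (theta = alpha) and P2 (theta = alpha + pi/2)."""
    r = np.sqrt(m00)
    p1 = (r * np.cos(alpha),             r * np.sin(alpha))
    p2 = (r * np.cos(alpha + np.pi / 2), r * np.sin(alpha + np.pi / 2))
    return p1, p2

# The two resulting sets of shifted moments feed (41) and then (40) to obtain
# the two visual features r_P1 and r_P2 used for 6 dof control.
```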
58,913
[ "753133", "15722" ]
[ "303079", "525244" ]
01758280
en
[ "info" ]
2024/03/05 22:32:10
2017
https://theses.hal.science/tel-01758280/file/2017IMTA0032_AflatoonianAmin.pdf
Dr Karine Guil An outsourced on-demand service is divided into a customer part and an SP one. The latter exposes to the former APIs which allow requesting the execution of the actions involved in the different steps of the lifecycle. We present an XMPP-based NBI allowing opening up a secured BYOC-enabled API. The asynchronous nature of this protocol together with its integrated security functions, eases the outsourcing of control into a multi-tenant SDN framework. Delegating the control of all or a part of a service introduces some potential valueadded services. Security applications are one of these BYOC-based services that might be provided by an SP. We discuss their feasibility through a BYOC-based Intrusion Prevention System (IPS) service example. v Résumé Au cours des dernières décennies, les fournisseurs de services (SP) ont eu à gérer plusieurs générations de technologies redéfinissant les réseaux et nécessitant de nouveaux modèles économiques. L'équilibre financier d'un SP dépend principalement des capacités de son réseau qui est valorisé par sa fiabilité, sa disponibilité et sa capacité à fournir de nouveaux services. À contrario l'évolution permanente du réseau offre au SP l'opportunité d'innover en matière de nouveaux services tout en réduisant les coûts et en limitant sa dépendance auprès des équipementiers. L'émergence récente du paradigme de la virtualisation modifie profondément les méthodes de gestion des services et conduit à une évolution des services réseau traditionnels vers de nouveaux services réseau à la demande. Ceux-ci permettent aux clients du SP de déployer et de gérer leurs services de manière autonome et optimale grâce à l'ouverture par le SP d'une interface bien définie sur sa plate-forme. Pour offrir cette souplesse de fonctionnement à ses clients en leurs fournissant des capacités réseau à la demande, le SP doit pouvoir s'appuyer sur une plate-forme de gestion permettant un contrôle dynamique et programmable du réseau. Nous montrons dans cette thèse qu'une telle plateforme peut être fournie grâce à la technologie SDN (Software-Defined Networking). Nous proposons une caractérisation préalable de la classe de services réseau à la demande, qui en fixe le périmètre. Les contraintes de gestion les plus faibles que ces services doivent satisfaire sont identifiées et intégrées à un modèle abstrait de leur cycle de vie. Celui-ci détermine deux vues faiblement couplées, l'une spécifique au client et l'autre au SP. Ce cycle de vie est complété par un modèle de données qui précise chacune de ses étapes. L'architecture SDN ne prend pas en charge toutes les étapes du cycle de vie précédent. Nous l'étendons à travers un Framework original permettant la gestion de toutes les étapes identifiées dans le cycle de vie. Ce Framework est organisé autour d'un orchestrateur de services et d'un orchestrateur de ressources communiquant via une interface interne. Sa mise en oeuvre nécessite une encapsulation du contrôleur SDN. L'exemple du VPN MPLS sert de fil conducteur pour illustrer notre approche. Un PoC basé sur le contrôleur OpenDaylight ciblant les parties principales du Framework est proposé. La maitrise par le SP de l'ouverture contrôlée de la face nord du SDN devrait être profitable tant au SP qu'à ses clients. 
Nous proposons de valoriser notre Framework en introduisant un modèle original de contrôle appelé BYOC (Bring Your Own Control) qui formalise, selon différentes modalités, la capacité d'externaliser un service à la demande par la délégation d'une partie de son contrôle à un tiers externe. L'ouverture d'une interface de contrôle offrant un accès de granularité variable à l'infrastructure sous-jacente, nous conduit à prendre vi en compte certaines exigences incontournables telles que le multi-tenancy ou la sécurité, au niveau de l'interface Northbound (NBI) du contrôleur SDN. Un service externalisé à la demande est structurée en une partie client et une partie SP. Cette dernière expose à la partie client des API qui permettent de demander l'exécution des actions induites par les différentes étapes du cycle de vie. Nous présentons un NBI basé sur XMPP permettant l'ouverture d'une API BYOC sécurisée. La nature asynchrone de ce protocole ainsi que ses fonctions de sécurité natives facilitent l'externalisation du contrôle dans un environnement SDN multi-tenant. La délégation du contrôle de tout ou partie d'un service permet d'enrichir certains services d'une valeur ajoutée supplémentaire. Les applications de sécurité font partie des services BYOC pouvant être fournis par un SP. Nous illustrons leur faisabilité par l'exemple du service IPS (système de prévention d'intrusion) décline en BYOC. iii Abstract Over the past decades, Service Providers (SPs) have been crossed through several generations of technologies redefining networks and requiring new business models. The economy of an SP depends on its network which is evaluated by its reliability, availability and ability to deliver new services. The ongoing network transformation brings the opportunity for service innovation while reducing costs and mitigating the locking of suppliers. Digitalization and recent virtualization are changing the service management methods, traditional network services are shifting towards new on-demand network services. These ones allow customers to deploy and manage their services independently and optimally through a well-defined interface opened to the SP's platform. To offer this freedom to its customers and to provide on-demand network capabilities, the SP must be able to rely on a dynamic and programmable network control platform. We argue in this thesis that this platform can be provided by Software-Defined Networking (SDN) technology. Indeed, the SDN controller can be used to provide an interface to service customers where they could on-demand subscribe to new services and modify or retire existing ones. To this end we first characterize the perimeter of this class of new services. We identify the weakest management constraints that such services should meet and we integrate them in an abstract model structuring their lifecycle. This one involves two loosely coupled views, one specific to the customer and the other one to the SP. This double-sided service lifecycle is finally refined with a data model completing each of its steps. The SDN architecture does not support all stages of the previous lifecycle. We extend it through an original Framework allowing the management of all the steps identified in the lifecycle. This Framework is organized around a service orchestrator and a resource orchestrator communicating via an internal interface. Its implementation requires an encapsulation of the SDN controller. The example of the MPLS VPN serves as a guideline to illustrate our approach. 
A PoC based on the OpenDaylight controller targeting the main parts of the Framework is proposed. Providing to the SP the mastering of SDN's openness on its northbound side should largely be profitable to both SP and customers. We therefore propose to value our Framework by introducing a new and original control model called BYOC (Bring Your Own Control) which formalizes, according to various modalities, the capability of outsourcing an on-demand service by the delegation of part of its control to an external third party. Opening a control interface and offering a granular access to the underlying infrastructure leads us to take into account some characteristics, such as multi-tenancy or security, at the Northbound Interface (NBI) level of the SDN controller. Dans cette thèse nous nous intéressons à la gestion des services de télécommunication dans un environnement contrôlé. L'exemple de la gestion d'un service de connectivité (MPLS xxii VPN) enrichi d' un contrôle de la qualité de service (QoS) centralisé, nous sert de fil conducteur pour illustrer notre analyse. Au cours de la dernière décennie, les réseaux MPLS ont évolué et sont devenus critiques pour les fournisseurs de services. MPLS est utilisé à la fois pour une utilisation optimisée des ressources et pour l'établissement de connexions VPN. List of Tables À mesure que la transformation du réseau devient réalité et que la numérisation modifie les méthodes de gestion des services, les services de réseau traditionnels sont progressivement remplacés par les services de réseau à la demande. Les services à la demande permettent aux clients de déployer et de gérer leurs services de manière autonome grâce à l'ouverture par le fournisseur de service d'une interface bien définie sur sa plate-forme. Cette interface permet à différents clients de gérer leurs propres services possédant chacun des fonctionnalités particulières. Pour offrir cette souplesse de fonctionnement à ses clients en leurs fournissant des capacités réseau à la demande, le fournisseur de services doit pouvoir s'appuyer sur une plate-forme de gestion permettant un contrôle dynamique et programmable du réseau. Nous montrons dans cette thèse qu'une telle plate-forme peut être fournie grâce à la technologie SDN (Software-Defined Networking). Un réseau de télécommunications fait appel à différentes technologies fournissant plusieurs types de services. Ces services sont utilisés par plusieurs clients et une mauvaise configuration d'un service client peut avoir des conséquences sur la qualité de service des autres. La position centrale du contrôleur SDN permet à l'opérateur de gérer tous les services et équipements. Cependant la fourniture d'une interface de gestion et de contrôle de service à granularité variable s'appuyant sur ce contrôleur requiert la mise en place d'une couche supplémentaire de gestion des services au-delà du contrôleur et permettant au fournisseur de services de gérer le cycle de vie du service tout en mettant à la disposition de ses clients une interface de gestion de service. Nous présentons dans le cadre de cette thèse un framework basé sur SDN permettant à la fois de gérer le cycle de vie d'un service et d'ouvrir avec une granularité contrôlable l'interface de gestion de services. 
La granularité de cette interface permet de fournir différents -Création de service : L'application spécifie les caractéristiques de service dont elle a besoin, elle négocie le SLA associé qui sera disponible pour une durée limitée et enfin elle demande une nouvelle création de service. -Retrait du service : l'application retire le service à la fin de la durée négociée. Cette étape définit la fin de la durée de vie. Les applications de type 2 tire parti des événements provenant de la NBI pour surveiller le service. Il est à noter que ce service peut être créé par la même application qui surveille le service. Ce type d'application ajoute une étape supplémentaire au cycle de vie du service côté client. Ce cycle de vie contient trois étapes principales : -Création de service. -Surveillance de service : Une fois créé, le service peut être utilisé par le client pour une durée négociée. Pendant ce temps, certains paramètres réseau et de service seront surveillés grâce aux événements et aux notifications envoyées par le SDNC à l'application. -Retrait de service. Dans un cas plus complexe, c'est-à-dire les applications de type 3, une application peut créer le service via la NBI, elle surveille le service via cette interface et, en fonction des événements à venir, elle reconfigure le réseau via le SDNC. Ce type de contrôle ajoute une étape rétroactive au cycle de vie du service côté client. Celui-ci contient quatre étapes principales : -Création de service. -Surveillance de service. -Modification de service : Les événements remontés par les notifications peuvent déclencher un algorithme implémenté dans l'application (implémenté au nord du SDNC), dont la sortie reconfigure les ressources réseau sous-jacentes via le SDNC. -Retrait de service. Un cycle de vie global de service côté client contient toutes les étapes préalables nécessaires pour gérer les trois types d'applications, discutées précédemment. Nous introduisons dans ce modèle une nouvelle étape déclenchée par les opérations côté opérateur : -Création de service. -Surveillance de service. -Modification de service. -Mis à jour de service : La gestion du réseau de l'opérateur peut entraîner la mise à jour du service. Cette mise à jour peut être émise en raison d'un problème survenant lors de l'utilisation du service ou d'une modification de l'infrastructure réseau. Cette mise à jour peut être minime, telle que la modification d'une règle dans l'un des équipements sous-jacents, ou peut avoir un impact sur les étapes précédentes, avec des conséquences sur la création du service et / ou sur la consommation du service. -Retrait de service. Le cycle de vie du service côté opérateur comprend en revanche six étapes principales : xxv -Demande de service : Une fois qu'une demande de création ou de modification de service arrive du portail de service des utilisateurs, le gestionnaire de demandes négocie le SLA et une spécification de service de haut niveau afin de l'implémenter. Il convient de noter qu'avant d'accepter le SLA, l'opérateur doit s'assurer que les ressources existantes peuvent gérer le service demandé au moment où il sera déployé. En cas d'indisponibilité, la demande sera mise en file d'attente. -Décomposition de service, compilation : Le modèle de haut niveau du service demandé est décomposé en plusieurs modèles de service élémentaires qui sont envoyés au compilateur de service. Le compilateur génère un ensemble de configurations de ressources réseau qui composent ce service. 
-Configuration de service : Sur la base du précédent ensemble de configurations de ressources réseau, plusieurs instances de ressources virtuelles correspondantes seront créées, initialisées et réservées. Le service demandé peut ensuite être implémenté sur ces ressources virtuelles créées en déployant des configurations de ressources réseau générées par le compilateur. -Maintenance et surveillance de service : Une fois qu'un service est mis en oeuvre, sa disponibilité, ses performances et sa capacité doivent être maintenues automatiquement. En parallèle, un gestionnaire de journaux de service surveillera tout le cycle de vie du service. -Mise à jour de service : Lors de l'exploitation du service, l'infrastructure réseau peut nécessiter des modifications en raison de problèmes d'exécution ou d'évolution technique, etc. Elle entraîne une mise à jour susceptible d'avoir un impact différent sur le service. La mise à jour peut être transparente pour le service ou peut nécessiter de relancer une partie des premières étapes du cycle de vie du service. -Retrait de service : la configuration du service sera retirée de l'infrastructure dès qu'une demande de retrait arrive au système. Le retrait du service émis par l'exploitant est hors du périmètre de ce travail. Un framework d'approvisionnement de services SDN Les processus de gestion des services peuvent être divisés en deux familles plus génériques : la première gère toutes les étapes exécutants les taches liées au service, depuis la négociation Ces modèles permettent de dériver le type et la taille des ressources nécessaires pour implémenter ce service. Le SO demande la réservation de ressources virtuelles à partir de la couche inférieure et déploie la configuration de service sur les ressources virtuelles via un SDNC. L' "Orchestrateur de ressource" gère les opérations sur les ressources : -Réservation de ressources -Surveillance des ressources Cet orchestrateur, qui gère les ressources physiques, réserve et lance les ressources virtuelles. Il maintient et surveille les états des ressources physiques en utilisant son interface sud. L'architecture interne de SO est composée de cinq modules principaux : -Gestionnaire de demande de service (SCM) : il traite les demandes de service des clients et négocie les spécifications du service. -Gestionnaire de décomposition et compilation de service (SDCM) : il répartit toutes les demandes de service reçues en un ou plusieurs modèles de service élémentaires qui sont des modèles de configuration de ressources. -Gestionnaire de configuration de service (SCM) : il configure les ressources physiques ou virtuelles via le SDNC. -Contrôleur SDN (SDNC) -Gestionnaire de surveillance de service, d'une part, il reçoit les alarmes et notifications à venir de l'orchestrateur inférieur, RO, et d'autre part il communique les notifications de service à l'application externe via la NBI. Bring Your Own Control (BYOC) Conclusion et perspectives Chapter 1 Introduction In this chapter we introduce the context of this thesis followed by the motivation and background of this studies. Then we present our main contributions and we conclude by the structure of this document. Thesis context Over the past two decades, service providers have been crossed through several generations of technologies redefining networks and requiring new business models. The economy of a Service Provider depends on its network which is evaluated by its reliability, availability and ability to deliver services. 
Due to the introduction of new technologies requiring a pervasive network, new and innovative applications and services are increasing the demand for network access [START_REF] Metzger | Future Internet Apps: The Next Wave of Adaptive Service-Oriented Systems?[END_REF]. Service Providers, on the other hand, are looking for a cost-effective solution to meet this growing demand while reducing the network complexity [START_REF] Benson | Unraveling the Complexity of Network Management[END_REF] and costs (i.e. Capital Expenditure (CapEx) and Operating Expenditure (OpEx)), and accelerating service innovation. The network of an Operator is designed on the basis of equipments that are carefully developed, tested and configured. Due to the importance of this network, the operators avoid the risks of modifications made to the network. Hardware elements, protocols and services require several years of standardization before being integrated into the equipment by suppliers. This hardware lock-in reduces the ability of Service Providers to innovate, integrate and develop new services. The network transformation brings the opportunity for service innovation while reducing costs and mitigating the locking of suppliers. Transformation means making it possible to exploit network capabilities through application power. This transformation converts the Operator network from a simple utility to a digital service delivery platform. The latter not only increases the velocity of the service, but also creates new sources of revenue. Recently Software-Defined Networking (SDN) [START_REF] Mckeown | Software-defined networking[END_REF][START_REF] Kim | Improving network management with software defined networking[END_REF] and Network Function Virtualization (NFV) [START_REF] Mijumbi | Network Function Virtualization: State-of-the-Art and Research Challenges[END_REF][START_REF] Han | Network function virtualization: Challenges and opportunities for innovations[END_REF] technologies are proposed to accelerate the transformation of the network. The promise Chapter 1. Introduction of these technologies is to bring more flexibility and agility to the network while creating cost-effective solutions. This will allow Service Providers to become digital businesses. The SDN concept is presented to decouple the control and forwarding functionalities of network devices by putting the first one on a central unit called controller [START_REF] Kreutz | Software-Defined Networking: A Comprehensive Survey[END_REF]. This separation makes it possible to control the network from a central application layer simplifying network control and management tasks. And the programmability of the controller accelerates the Service Providers network transformation. As the network transformation is becoming a reality and the digitalization is changing the service management methods, traditional network services are replacing with on-demand network services. On-demand services allow customers to deploy and manage their services independently through a well-defined interface opened to the Service Providers platform. Motivation and background This interface allows different customers to manage their own services each one possessing special features. For example, to manage a VPN service, a customer might have several types of interactions with the Service Provider platform. For the first case, a customer might request a fully managed VPN interconnecting its sites. 
For this type of service, the customer owns abstract information about the service and provides a simple service request to the Service Provider. The second case is a customer, with a more professional profile, who monitors the service by retrieving some network metrics sent from the Providers platform. And the third type consists of a more dynamic and open service sold to customers wishing to control all or part of their services. For this type of services, based on the metrics retrieved from the Service Providers platform, the customer re-configures the service. Problem statement In order to offer this freedom to its customers and to provide on-demand network capability, the Service Provider must be able to rely on a dynamic and programmable network control platform. We argue that this platform can be provided by SDN technology. Indeed, the SDN Contributions of this thesis As part of this thesis we present an SDN based framework allowing to both manage the lifecycle of a service and open the service management interface with a fine granularity. The granularity of this interface allows to provide different levels of abstraction to the customer, each one allowing to offer part of the capabilities needed by an on-demand service discussed in Section 1.2. The following are the main research contributions of this thesis. -A double-sided service lifecycle and the associated data model We first characterise the applications that might be deployed upon the northbound side of an SDN controller, through their lifecycle. The characterisation rests on a classification of the complexity of the interactions between the outsourced applications and the controller. This leads us to a double-side service lifecycle presenting two articulated points of view: client and operator. The service lifecycle is refined with a data model completing each of its steps. - A Document structure In Chapter 2 we present a state of the art on SDN and NFV technologies. We try to focus our study on SDN control and application layer. We present two classifications of SDN applications. For the first classification we are interested in the functionality of applications and their contribution in the deployment of the controller. And for the second one, we present different types of applications according to the model of the interaction between them and the controller. We discuss in this second classification three types of applications, each one requiring some characteristics at the Northbound Interface (NBI) level. In Chapter 3 we discuss the deployment of a network service in SDN environment. For the first part of this chapter, we present the MPLS networks with a rapid analysis of the control and forwarding planes of these networks in the legacy world. This analysis quickly shows which information is used to configure such a service. This information is, for confidential reasons, managed by the operator most of which is not manageable by the customer. For the second part of this chapter, we analyze the deployment of the MPLS service on the SDN network through the OpenDaylight controller. For this analysis we consider two possibilities: (1) deployment of the service using the third-party applications developed on the controller (the VPN Services project), and (2) deployment of the service using the northern Application Programming Interface (API)s provided by the controller's native functions. 
The results obtained during the second part together with the case study discussed in the first part, accentuate the lack of a service management system in the current controllers. This justifies the presentation of a service management framework providing the service management interfaces and managing the service lifecycle. In order to refine the perimeters of this framework, we firstly discuss a service life cycle studies in Chapter 4. This analysis is carried out on two sides: customer and operator. For the service lifecycle analysis from the client-side perspective, we rely on the classification of applications made in Chapter 2. During this analysis we study the additional steps that each application adds in the lifecycle of a service. And for the analysis of the lifecycle from the operator side view point we study all steps an operator takes during the deployment and management of a service. At the end of this chapter, we discuss the data model allowing to implement each step of the service lifecycle. This data model is based on a two layered approach analyzing a service provisioning system on two layers: service and device. Based on this analysis, we study the data model of each service lifecycle step, helping to define the internal architecture of the service management framework. Document structure Service lifecycle analysis leads us to present, in Chapter 5, the SDN-based service management framework. This framework cuts up all the tasks an operator performs to manage the lifecycle of a service. Through an MPLS VPN service deployment example we detail all of these steps. Part of tasks are carried on the service presented to the client, and part of them on the resources managed by the operator. We organize these two parts into two orchestration systems, called respectively Service Orchestrator and Resource Orchestrator. In order to analyze the framework's capability in service lifecycle management, we take the example of MPLS VPN service update. With this example we show how the basic APIs provided by an SDN controller can be used by the framework to deploy and manage a requested service. The presented framework allows us not only to manage the service life cycle but also to open an NBI to the client. This interface allows us to provide different levels of abstraction used by each of lastly discussed three types of applications. In Chapter 6, we present for the first time the new service model: Bring Your Own Control (BYOC). This new service allows a customer or a third party operator to participate in the service lifecycle. This is the practical case of a type 3 application, where the client configures a service based on the events coming up from the controller. We analyze characteristics of interface allowing to deploy such a BYOC-type service. We present in this chapter the XMPP protocol as a good candidate enabling us to implement this new service model. In Chapter 7, we apply the BYOC model to a network service. For this use case we choose to externalize the control of an IPS. Outsourcing the IPS service control involves implementing the attack detection engine in an external controller, called Guest Controller (GC). In Chapter 8, we point out the main contributions of this thesis and give the research perspectives in relation to BYOC services in SDN/NFV and 5G networks. Chapter 2 Programming the network In this chapter we present, firstly, a state of the art on programmable networks. 
Secondly, we study Software-Defined Networking (SDN) as a technology allowing to control and program network equipment to provide on-demand services. For this analysis we discuss the general architecture of SDN, its layers and its interfaces. Finally, we discuss SDN applications, their different types and the impact that all applications can have on the internal architecture of an SDN controller. Technological context Nowadays Internet whose number of users exceeds 3,7 billions [START_REF]World Internet Usage and Population Statistics[END_REF], is massively used in all human activities from the professional part to the private ones via academical ones, administrative ones, etc. The infrastructure supporting the Internet services rests on various interconnected communication networks managed by network operators. This continuously growing infrastructure evolves very dynamically and becomes quite huge, complex, and sometimes locally ossified. Fundamentals of programmable networks The high performance constraints required for routers in packet switched networks, limit the authorized processing to the sole modification of the packet headers. The strength of this approach is also its weakness because the counterpart of their high performance is their lack of flexibility. The evolution brought by the research on the programmability of the network, has led to the emergence of strong ideas whose relevance can be measured by their intellectual longevity. The seed of the idea of having APIs allowing a flexible management of the network equipments at least goes back to the OpenSig initiative [START_REF] Campbell | A Survey of Programmable Networks[END_REF] which aimed to develop and promote standard programmable interfaces crafted on network devices [START_REF] Biswas | The IEEE P1520 standards initiative for programmable network interfaces[END_REF]. It is one of the first fundamental steps towards the virtualization of networks the main objectives of which consisted in switching from a strongly coupled network, where the hardware and the software are intimately linked to a network where the hardware and the software are decorrelated. It concretely conducts in keeping the data forwarding capability inside the box while outsourcing the control. In a general setting the control part of the processing carried out in routers, roughly consists in organizing in a smart and performant way the local forwarding of each received packet while ensuring a global soundness between all the boxes involved in its path. The outsourcing of the control has been designed according different philosophies. One aesthetically nice but extreme vision known as « Active Networks » recommends that each packet may carry in addition to its own data, the code of the process which will be executed at each crossed Software-Defined Networking (SDN) SDN is presented to change the way networks operate by giving hope to change the current network limitations. It enables simple network data-path programming, allows easier deployment of new protocols and innovative services, opens network virtualization and management by separating the control and data planes [START_REF] Kim | Improving network management with software defined networking[END_REF]. This paradigm is attracting attention by both academia and industry. 
SDN breaks the vertical integration of traditional network devices by decoupling the control and data planes: network devices become simple forwarding devices programmed by a logically centralized application called the controller or network operating system. As the OpenFlow-based SDN community grows, a large variety of OpenFlow-enabled networking hardware and software switches has been brought to market. Hardware devices are produced for a wide range of purposes, from small businesses [START_REF] Hp | zl Switch Series[END_REF][18] to high-end platforms [START_REF] Ferkouss | A 100Gig network processor platform for openflow[END_REF] used for their high switching capacity. Software switches, on the other hand, are mostly OpenFlow-enabled applications, used to provide virtual access points in data centers and to build virtualized infrastructures.

Architecture

SDN Southbound Interface (SBI)

The communication between the Infrastructure layer and the Control layer is ensured through a well-defined API called the Southbound Interface (SBI), which is the element separating the data and control planes. It provides the upper layer with a common interface to manage physical or virtual devices through a mixture of different southbound APIs and control plug-ins. The most accepted and implemented of these southbound APIs is OpenFlow [START_REF] Mckeown | OpenFlow: Enabling Innovation in Campus Networks[END_REF], standardized by the Open Networking Foundation (ONF) [START_REF] Onf | Open Networking Foundation[END_REF].

OpenFlow Protocol

The SDN paradigm started with the idea of separating the forwarding and control layers introduced by the OpenFlow protocol. This protocol enables flow-based programmability of a network device. Indeed, OpenFlow provides the SDN controller with an interface to create, update and delete flow entries, reactively or proactively. (Fig. 2.5 illustrates the setup used in the following discussion: an SDN controller managing Switch 1, Switch 2 and Switch 3, each one a simple forwarder with an OpenFlow control agent, on the path between Host A and Host B.) Upon receiving the first packet, Switch 1 looks up its flow table; if no match for the flow is found, the switch sends an OpenFlow PACKET_IN message to the SDN controller for instructions. Based on this message, the controller builds the instructions for the switch and answers with a flow programming message (and, when needed, a PACKET_OUT message to forward the pending packet); this exchange adds a new entry to the flow table of the switch. Programming a network device using OpenFlow can be done in three ways [START_REF] Salisbury | OpenFlow: Proactive vs Reactive Flows[END_REF]:

-Reactive flow instantiation. When a new flow arrives at the switch, it looks up its flow table and, if no entry matches the flow, the switch sends a PACKET_IN message to the controller. In the previous example, shown in Fig. 2.5, the SDN controller programs Switch 1 in a reactive manner.

-Proactive flow instantiation. In contrast to the first case, a flow can be defined in advance. In this case, when a new flow reaches the switch it matches a predefined entry of the flow table, so there is no need to consult the controller. In our example (Fig. 2.5) the flow programming done for Switches 2 and 3 is proactive. Proactive flow instantiation eliminates the latency introduced by controller interrogation.

-Hybrid flow instantiation. This one is a combination of the two first modes. In our example (Fig. 2.5), for a specific traffic sent by Host A to Host B, the controller programs the related switches using this method.
The Switch 1 is programmed reactively and two other switches (Switch 2 and Switch 3) are programmed proactively. Using hybrid flow instantiation allows to benefit the flexibility of the reactive mode for granular traffics, while saving a low-latency traffic forwarding for the rest of traffic. OpenFlow switch The most recent OpenFlow Switch (1.5.0) has been defined by ONF [START_REF]OpenFlow Switch Specification, Version 1.5.0[END_REF]. -OpenFlow Channel creates a secured channel, over Secure Sockets Layer (SSL), between the switch and a controller. Using this channel, the controller manages the switch via OpenFlow protocol allowing commands and packet to be sent from the controller to the switch. Chapter 2. Programming the network -Flow Table contains a set of flow entries dictating the switch how to process the flow. These entries include match fields, counters and a set of instructions. -Group Table contains a set of group each one having a set of actions. Fig. 2.7 shows an OpenFlow Switch flow table. Each flow table contains three columns: rules, actions and counters [START_REF] Mckoewn | Why can't I innovate in my wiring closet?[END_REF]. The rules column contains header fields used to define a flow. For an incoming packet, the switch looks up the flow table, if a rule matches the header of the packet, the related action of action table will be applied to the packet, and finally the counter value will be updated. There are several possible actions to be taken on a packet (Fig. 2.7). The packet can be forwarded to a switch port, it can be sent to the controller, it can be sent to a group table, it can be modified in some fashions, or it can be dropped. SDN Controller The Control plane, equivalent to the network operating system [START_REF] Gude | NOX: Towards an Operating System for Networks[END_REF], is the intelligent part of this architecture. It controls the network thanks to its centralized perspective of networks state. On one hand, this logically centralized control simplifies the network configuration, management and evolution through the SBI. On the other hand, it gives an abstract and global view of the underlying infrastructure to the applications through the Northbound Interface (NBI). While SDN's interest is quite extending in different environments, such as home networks [START_REF] Yiakoumis | Slicing Home Networks[END_REF], data center network [START_REF] Al-Fares | A Scalable, Commodity Data Center Network Architecture[END_REF], and enterprise networks [START_REF] Casado | Ethane: Taking Control of the Enterprise[END_REF], the number of proposed SDN controller architecture and the implemented functions is also growing up. Despite this large number, most of existing proposals implement several core network functions. These functions are used by upper layers, such as network applications, to build their own logic. Among the various SDN controller implementations, these logical blocks can be classified into: Topology Manager, Device Manager, Stats Manager, Notification Manager and Shortest Path Forwarding. For instance, a controller should be able to provide a network topology model to the upper layer applications. It also should be able to receive, process and forward events by creating alarm notifications or state changes. As mentioned previously, nowadays, numerous commercial and non-commercial communities are developing SDN controllers proposing network applications on top of them. 
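Before surveying these controllers, the reactive flow instantiation and the match/action flow entries described earlier in this section can be made concrete with a minimal controller application. The sketch below targets the Ryu controller (one of the controllers listed next) with OpenFlow 1.3; the choice of output port is an assumption made purely for illustration and is not taken from any cited work.

```python
# Minimal sketch of reactive flow instantiation with the Ryu controller
# (OpenFlow 1.3). The forwarding decision (out_port = 2) is an assumption
# made only for illustration.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet, ethernet


class ReactiveForwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                              # PACKET_IN sent by the switch
        dp = msg.datapath
        parser, ofp = dp.ofproto_parser, dp.ofproto

        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)
        out_port = 2                              # assumed port towards the destination

        # Rule (match) + action: subsequent packets of this flow are handled by
        # the switch itself, without another controller round trip.
        match = parser.OFPMatch(eth_dst=eth.dst)
        actions = [parser.OFPActionOutput(out_port)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))

        # PACKET_OUT: forward the packet that triggered the PACKET_IN.
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=msg.match['in_port'],
                                        actions=actions, data=data))
```

A proactive variant would install the same flow entries when the switch connects, so that no PACKET_IN is ever generated for the covered traffic.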
Controllers such as NOX [START_REF] Gude | NOX: Towards an Operating System for Networks[END_REF], Ryu [29], Trema [30], Floodlight [START_REF]Floodlight OpenFlow Controller[END_REF], OpenDayLight [START_REF]The OpenDaylight SDN Platform[END_REF] and ONOS [START_REF] Berde | ONOS: Towards an Open, Distributed SDN OS[END_REF] are the top five today's controllers. These controllers implement basic network functions such as topology manager, switch manager, etc. and provide the network programmability to applications via NBI. In order to implement a complex network service on a SDN-based network, service providers face a large number of controllers each one implementing a large number of core services based on a dedicated work flow and specific properties. R. Khondoker et al. [START_REF] Khondoker | Feature-based comparison and selection of Software Defined Networking (SDN) controllers[END_REF] tried to solve the problem of selecting the most suitable controller by proposing a decision making template. The decision however requires a deep analysis of each controller and totally depends on the service use case. It is worth to mention that in addition to this miscellaneous controller's world, the NBI abstraction level diversity also emphasizes the challenge. SDN Northbound Interface (NBI) In the SDN ecosystem the NBI is the key. This interface allows applications to be independent of a specific implementation. Unlike the southern interface, where we have some standard proposals (OpenFlow [START_REF] Mckeown | OpenFlow: Enabling Innovation in Campus Networks[END_REF] and NETCONF [START_REF] Enns | Network Configuration Protocol (NETCONF)[END_REF]), the subject of a common and a standard NBI standard is remained open. Since use cases are still in development, it is still immature to define a standardized NBI. Contrary to its equivalent in the south (SBI), the NBI is a software ecosystem, it means that the standardization of this interface requires more maturity and a well standardized SDN framework. In application ecosystems, implementation is usually the leading engine, while standards emerge later [START_REF] Guis | The SDN Gold Rush To The Northbound API[END_REF]. Open and standard interfaces are essential to promote application portability and interoperability across different control platforms. As illustrated in Table 2.1, existing controllers such as Floodlight, Trema, NOX, ONOS, and OpenDaylight propose and define their own APIs in the north [START_REF] Salisbury | The Northbound API-A Big Little Problem[END_REF]. However, each of them has its own specific definitions. . The experience gained in developing various controllers will certainly be the basis for a common application-level interface. SDN Controller If we consider the SDN controller as a platform allowing to develop applications on a resource pool, a north API can be compared to the Portable Operating System Interface (POSIX) standard in operating systems [START_REF] Josey | POSIX -Austin Joint Working Group[END_REF]. This interface provides generic functions hiding the operational details of the computer hardware. These ordinary functions allow a software to manipulate this hardware by ignoring their technical details. Today, programming languages such as Procera [START_REF] Voellmy | Procera: A Language for Highlevel Reactive Network Control[END_REF] and Frenetic [START_REF] Foster | Frenetic: A Network Programming Language[END_REF] are proposed to follow this logic by providing an abstraction layer on controller functions. 
The yanc project [START_REF] Monaco | Applying Operating System Principles to SDN Controller Design[END_REF] also offers an abstraction layer simplifying the development of SDN applications. This layer allows programmers to interact with lower-level devices and subsystems through the traditional file system. It may be concluded that it is unlikely that a single northern interface will emerge as a winner because the requirements for different network applications are quite different. For example, APIs for security applications may be different from routing ones. In parallel with its SDN development work, the ONF has begun a vertical solution in its North Bound Interface Working Group (NBI -WG) to present standardized northbound APIs [START_REF] Menezes | North Bound Interface Working Group (NBI-WG) Charter[END_REF]. This work is still ongoing. SDN Applications Analysis SDN Applications At the toppest part of the SDN architecture, the Application layer programs the network behavior through the NBI offered by the SDN controller. Existing SDN applications implement a large variety of network functionalities from simple one, such as load balancing and routing, to more complex one, such as mobility management in wireless networks. This wide variety of applications is one of the major reasons to raise up the adoption of SDN into current networks. Regardless of this variety most SDN applications can be grouped mainly in five categories [START_REF] Hu | A Survey on Software-Defined Network and Open-Flow: From Concept to Implementation[END_REF], including (I) traffic engineering, (II) mobility and wireless, (III) measurement and monitoring, (IV) security and dependability, and (V) data center networking. Traffic engineering The first group of SDN application consists of proposals that monitor the traffic through the SDN Controller (SDNC) and provide the load balancing and energy consumption optimization. Load balancing as one of the first proposed SDN applications [START_REF] Nikhil | Aster*x: Load-Balancing Web Traffic over Wide-Area Networks[END_REF] covers a big range of network management tasks, from redirecting clients requests traffic to simplifying the network services placement. For instance, the work [START_REF] Wang | OpenFlow-based Server Load Balancing Gone Wild[END_REF] proposes the use of wilcard-based for aggregating a group of clients requests based on their Internet Protocol (IP) prefixes. In the [START_REF] Handigol | Plug-n-Serve: Load-Balancing Web Traffic using OpenFlow[END_REF] also the network application is used to distribute the network traffic among the available servers based on the network load and computing capacity of servers. The ability of network load monitoring through the SBI introduces applications such as energy consumption optimization and traffic optimization. The information received from the SBI can be used by specialized optimization algorithms to aim up to 50% of economization of network energy consumption [START_REF] Heller | ElasticTree: Saving Energy in Data Center Networks[END_REF] by dynamically scale in/out of the links and devices. This capacity can be leveraged to provision dynamic and scalable of services, such as Virtual Private Network (VPN) [START_REF] Scharf | Dynamic VPN Optimization by ALTO Guidance[END_REF], and increase network efficiency by optimizing rules placement [START_REF] Nguyen | Optimizing Rules Placement in OpenFlow Networks: Trading Routing for Better Efficiency[END_REF]. 
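The wildcard-based aggregation idea can be sketched in a few lines: the client address space is partitioned into sub-prefixes and each sub-prefix is mapped to one server, so that a single rule covers a whole group of clients. The prefix, server addresses and rule format below are illustrative assumptions, not values taken from the cited works.

```python
# Illustrative sketch of wildcard-based client aggregation for load balancing:
# the aggregate client prefix is split into sub-prefixes, each mapped to one
# replica server. Prefixes, server IPs and the rule format are assumptions
# made for this example only.
import ipaddress

clients = ipaddress.ip_network("198.51.100.0/24")    # aggregate client prefix
servers = ["10.0.0.10", "10.0.0.11"]                  # replica servers

rules = []
for subnet, server in zip(clients.subnets(prefixlen_diff=1), servers):
    rules.append({
        "match":  {"ipv4_src": str(subnet)},          # one wildcard per group of clients
        "action": {"set_ipv4_dst": server, "output": "server_port"},
    })

for rule in rules:
    print(rule)
```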
Mobility and wireless The programmability of the stack layers of wireless networks [START_REF] Bansal | OpenRadio: A Programmable Wireless Dataplane[END_REF], and decoupling the wireless protocol definition from the hardware, introduce new wireless features, such as creation of on-demand Wireless Access Point (WAP) [START_REF] Vestin | CloudMAC: Towards Software Defined WLANs[END_REF], load balancing [START_REF] Gudipati | SoftRAN: Software Defined Radio Access Network[END_REF], seamless mobility [START_REF] Dely | OpenFlow for Wireless Mesh Networks[END_REF] and Quality of Service (QoS) [START_REF] Li | Toward Software-Defined Cellular Networks[END_REF] management. These traditionally hard to implement features are implemented by the help of the well-defined logics presented from the SDN controller. The decoupling of the wireless hardware from its protocol definition provides a software abstraction that allows sharing Media Access Control (MAC) layers in order to provide programmable wireless networks [START_REF] Bansal | OpenRadio: A Programmable Wireless Dataplane[END_REF]. Measurement and monitoring The detailed visibility provided by centralized logic of the SDN controller, permits to introduce the applications that supply network parameters and statistics for other networking services [START_REF] Sundaresan | Broadband Internet Performance: A View from the Gateway[END_REF][START_REF] Kim | Improving network management with software defined networking[END_REF]. These measurement methods can also be used to improve features of the SDN controller, such as overload reduction. Security The capability of SDN controller in collecting network data and statistics, and allowing applications to actively program the infrastructure layer, introduce works that propose to improve the network security using SDN. In this type of applications, the SDN controller is the network policy enforcement point [START_REF] Casado | SANE: A Protection Architecture for Enterprise Networks[END_REF] through which malicious traffic are blocked before entering a specific area of the network. In the same category of applications, the work [START_REF] Braga | Lightweight DDoS flooding attack detection using NOX/OpenFlow[END_REF] uses SDN to actively detect and prevent Distributed Denial of Service (DDoS) attacks. Intuitive classification of SDN applications As described previously, SDN applications can be analyzed in different categories. In 2.5.1 we categorized the SDN applications based on the functionality they add to the SDN controller. In this section we analyze these applications based on their contribution on the network control life cycle. SDN applications consist of modules implemented at the top of a SDNC which, thanks to the NBI, configure network resources through the SDNC. This configuration might control the network behavior to offer a network service. Applications which configure the network through a SDNC can be classified in three types. The Fig. 2.8 presents this classification. The first type concerns an application configuring a network service which once initialized and running will not be modified anymore. A "simple site interconnection" through MultiProtocol Label Switching (MPLS), can be a good example for this service. This type of services requires a one direction up-down NBI which can be implemented with a RESTful solution. The second one concerns an application which, firstly, configures a service and, secondly, monitors it during the service life. 
One example of this model is a network monitoring application which monitors the network via the SDNC in order to generate QoS reports. For example, to assure the QoS of an MPLS network controlled by the SDNC, this application might calculate the traffic latency between two network endpoints thanks to metrics received from the SDNC. This model requires a bottom-up communication model at the NBI level so that real-time events can be sent from the controller to the application. Finally, the third type of coordination concerns an application resting on, and usually including, the two previous types while adding specific control treatments executed in the application layer. In this case the application configures the service (type one), listens to network real-time events (type two), and calculates some specific network configurations in order to re-configure the underlying network accordingly (type one).

Impact of SDN Applications on Controller design

The variety of SDN applications developed on top of the SDN controller may modify the internal architecture of the controller and its core functions, described in 2.4.4. In this section we analyze some of these applications and their contribution to an SDN controller core architecture. The Aster*x [START_REF] Nikhil | Aster*x: Load-Balancing Web Traffic over Wide-Area Networks[END_REF] and Plug-n-Serve [START_REF] Handigol | Plug-n-Serve: Load-Balancing Web Traffic using OpenFlow[END_REF] projects propose HTTP load-balancing applications that rely on functional units implemented in the SDN controller: a "Flow Manager", together with units that track the state of the network and the load of the HTTP servers and report it to the Flow Manager. This load-balancing application thus adds two complementary modules inside the controller, alongside the core functions. In [START_REF] Wang | OpenFlow-based Server Load Balancing Gone Wild[END_REF] the authors implemented a series of load-balancing modules in a NOX controller that partition client traffic between multiple servers. The partitioning algorithm implemented in the controller receives clients' Transmission Control Protocol (TCP) connection requests arriving at the Load Balancer Switch, and balances the load over the servers by generating wildcard rules. The load-balancing application proposed in this work is implemented inside the controller, in addition to the controller's other core modules. Adjusting the set of active network devices in order to save data center energy consumption is another type of SDN application. ElasticTree [START_REF] Heller | ElasticTree: Saving Energy in Data Center Networks[END_REF], as one of these applications, proposes a "network-wide power manager" increasing the network performance and fault tolerance while minimizing its power consumption. This system implements three main modules: "Optimize", "Power control" and "Routing". The optimizer uses the topology and the traffic matrix to find the minimum-power network subset and communicates the resulting set of active components to both the power control and routing modules. Power control toggles the power states of elements. The routing module chooses paths for all flows and pushes routes into the network. In ElasticTree these modules are implemented as a NOX application inside the controller. The application pulls network statistics (flow and port counters), sends them to the Optimizer module, and based on the calculated subset it adjusts flow routes and port status via the OpenFlow protocol.
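Before detailing how elements are toggled, the closed loop itself, which is also the general pattern of the third application type introduced in Section 2.5.2, can be summarised by the hedged sketch below: statistics are pulled from the controller (type two), an application-side decision is computed, and the network is re-programmed through the NBI (type one). Every URL, credential, counter name and threshold is a hypothetical placeholder, since each controller exposes its own NBI.

```python
# Sketch of a type-3 control loop: monitor via the controller NBI, decide in
# the application layer, then re-configure the network. All endpoints, field
# names and thresholds are hypothetical.
import time
import requests

NBI = "http://controller.example.net:8181"         # hypothetical controller NBI
AUTH = ("admin", "admin")

def port_load(node, port):
    """Type-2 step: pull port statistics exposed by the controller."""
    r = requests.get(f"{NBI}/stats/ports/{node}/{port}", auth=AUTH)
    r.raise_for_status()
    return r.json()["tx_bytes_per_s"]               # hypothetical counter name

def push_flow(flow):
    """Type-1 step: (re)program the data path through the NBI."""
    requests.put(f"{NBI}/flows/{flow['id']}", json=flow, auth=AUTH).raise_for_status()

while True:
    if port_load("openflow:1", 2) > 0.8e9:          # uplink close to saturation
        # Type-3 step: application-side logic decides on an alternate path and
        # re-configures the underlying network accordingly.
        push_flow({"id": "web-traffic", "out_port": 3, "priority": 20})
    time.sleep(10)
```

Polling intervals and thresholds are design choices of the application layer; the controller itself only exposes the statistics.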
In order to toggle the elements, such as active ports, linecards, or entire switches different solutions, such as Simple Network Management Protocol (SNMP) or power over OpenFlow can be used. In SDN architecture the network topology is one of the information provided to applications. The work [START_REF] Gurbani | Abstracting network state in Software Defined Networks (SDN) for rendezvous services[END_REF] proposes the Application-Layer Traffic Optimization (ALTO) protocol as a topology manager component of this architectures. In this work authors propose this protocol to provide an abstract view of the network to the applications which, based on Chapter 2. Programming the network this informations, can optimize their decision related to service rendezvous. ALTO protocol provides network topology by hiding its internal details or policies. The integration of ALTO protocol to the SDN architecture introduces an ALTO server inside the SDN controller through which the controller abstracts the information concerning the routing costs between network nodes. This information will be sent to SDN applications in the form of ALTO maps. These maps are used in different types of applications, such as: data centers, Content Distribution Network (CDN)s, and peer-to-peer applications. Network Function Virtualization, an approach to service orchestration Network Function Virtualization (NFV) is an approach to virtualize and orchestrate network functions, traditionally carried out on dedicated hardware, on Commercial Off-The-Shelf (COTS) hardware platform. This is an important aspect of the SDN particularly studied by service providers, who see here a solution to better adjust the investment according to the needs of their customers. The main advantage of using NFV to deploy and manage VNFs is that the Time To Market (TTM) of NFV-based service is less than a legacy service, thanks to the standard hardware platform used in this technology. The second advantage of NFV is lower Capital Expenditure (CapEx) while standard hardware platforms are usually cheaper than wholesale hardware used on legacy services. This approach, however, has certain issues. Firstly, in a service operator network, there is no more a single central (data center type) network to manage, but also several networks deployed by different technologies, both physical or virtual. At first glance this seems to be contrary to one of the primary objectives of the SDN: the simplification of network operations. The second problem is the complexity that the diversity of NFV architecture elements brings to the service management system. In order to create and manage a service, several VNFs should be created. These VNFs are configured, each one, by an EMS, the life cycle of which is managed though the Virtual Network Function Manager (VNFM). All VNFs are deployed within an infrastructure managed by the Virtual Infrastructure Manager (VIM). For the sake of simplicity, we don't mention the license management systems proposed by VNF editors to manage the licensing of their products. In order to manage a service all mentioned systems should be managed by the Orchestrator. Chapter 3 SDN-based Outsourcing Of A Network Service In this chapter we present the MPLS networks, its control plan and its data plan. Then, we study the processes and the necessary parameters in order to configure a VPN network. In the second part, we study the deployment of this type of network using SDN. 
For this analysis, we firstly analyze the management of the VPN network with non-openflow controllers, such as OpenContrail. Then, we analyze the deployment of the VPN network with one of the most developed OpenFlow enabled controller: OpenDaylight. Introduction to MPLS networks MPLS [START_REF] Rosen | Multiprotocol Label Switching Architecture, RFC 3031[END_REF] technology supports the separation of traffic flows to create VPNs. It allows the majority of packets to be transferred over Layer 2 rather than Layer 3 of the service provider network. In an MPLS network, the label determines the route that a packet will follow. The label is injected between Layer 2 and Layer 3 headers of the packet. A label is a 32 bits word containing several information: -Label: 20 bits -Time-To-Live (TTL): 8 bits -CoS/EXP: specifies the Class of Service used for the QoS, 3 bits -BoS: determines if the label is the last one in the label stack (if BoS = 1), 1 bit MPLS data plan The path taken by the MPLS packet is called Label Switch Path (LSP). MPLS technology is used by providers to improve their QoS by defining LSPs capable of satisfying Service Level Agreement (SLA) in terms of traffic latency, jitter, packet loss. In general, the MPLS network router is called a Label Switch Router (LSR). -Customer Equipment (CE) is the LAN's gateway from the customer to the core network of the service provider -Provider Equipment (PE) is the entry point to the core network. The PE labels packets, classifies them and sends them to a LSP. Each PE can be an Ingress or an Egress LSR. We discussed earlier the way this device injects or removes the label of the packet. MPLS control plan -P routers are the core routers of an MPLS network that switch MPLS packets. These devices are Transit LSRs, the operation of whom is discussed earlier. Each PE can be connected to one or several client sites (Customer Edge (CE)s), Cf. Fig. 3.3. In order to isolate PE-CE traffics and to separate routing tables within PE, an instance of Virtual Routing and Forwarding (VRF) is instantiated for each site, this instance is associated with the interface of the router connected to the CE. The routes that PE receives from the CE are recorded in the appropriate VRF Routing Table. These routes can be propagated by Exterior BGP (eBGP) [START_REF] Rekhter | A Border Gateway Protocol 4 (BGP-4), RFC 4271[END_REF] or Open Shortest Path First (OSPF) [START_REF] Moy | OSPF Version 2, RFC 1247[END_REF] protocols. The PE distributes the VPN information via Multiprotocol BGP (MP-BGP) [START_REF] Bates | Multiprotocol Extensions for BGP-4, RFC 4760[END_REF] to the other PE within the MPLS network. It also installs the Interior Gateway Protocol (IGP) routes learned from the MPLS backbone in its Global Routing Table. We drive this configuration example by joining one of customer_1 sites (Site D of Figure 2) to his VPN. Assuming that the MP-BGP of PE4 is already configured and the MPLS backbone IGP is already running on this router. To start the configuration, the service provider creates a dedicated VRF, called customer_1. He adds the RD value on this VRF, we use for this example the RD = 65000:100. For allowing that VRF to distribute and learn routes of this VPN, the RT specified to this customer (65000:100) is configured on the VRF. He then associates the physical interface connected to the CE4 with the initiated VRF. A routing protocol (eBGP, OSPF, etc.) is configured between the VRF and the CE4. 
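The configuration step just described can be summarised by a hedged sketch in which the per-customer parameters of the Site D example (VRF name, RD and RT) are gathered in a small device model and rendered into a configuration snippet. The interface name is an assumption, and the CLI syntax is an IOS-style illustration only; real PEs differ by vendor.

```python
# Hedged sketch of the VRF configuration for Site D: parameters are taken from
# the example above, the PE interface is assumed, and the rendered CLI syntax
# is shown purely for illustration.
SITE_D = {
    "vrf_name":     "customer_1",
    "rd":           "65000:100",
    "rt":           "65000:100",
    "pe_interface": "GigabitEthernet0/1",   # assumed PE4-CE4 interface
}

TEMPLATE = """\
ip vrf {vrf_name}
 rd {rd}
 route-target export {rt}
 route-target import {rt}
!
interface {pe_interface}
 ip vrf forwarding {vrf_name}
"""

print(TEMPLATE.format(**SITE_D))
# The PE-CE routing protocol (eBGP, OSPF, ...) is configured separately.
```

This kind of one-to-one mapping from a handful of service parameters to a device configuration prefigures the declarative service-to-device transformation discussed in Chapter 4.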
The routing protocol configured towards CE4 allows PE4 to learn the Site D network prefix, the information that will be used by PE4 to send the MP-BGP update to the other PEs, as discussed earlier. By receiving this update, all sites belonging to this customer become able to communicate with Site D.

MPLS VPN Service Management

In the network of a service provider, the parameters used to configure the MPLS network of a client are neither managed nor configured by this client. In other words, for the sake of security, the customer has no right to configure the PEs connected to its sites or to modify the parameters of its service. For example, if a client A modifies the configuration of its VRF by supplying the RTs used for another VPN (that of client B), it can overlap its VPN with that of client B and insert itself into the network of this client. On the other hand, a client can set the parameters of the elements of its sites, for example the addressing plan of its Local Area Network (LAN), and exchange the parameters of its service, e.g. service classes (Class of Service (CoS)). Table 1 summarizes the parameters of an MPLS VPN service that can be modified by the service provider and by its client.

Table 1: Parameters of an MPLS VPN service and the actor allowed to modify them.

MPLS VPN Parameters       Service Provider   Service Client
LAN IP address            ✗                  ✓
RT                        ✓                  ✗
RD                        ✓                  ✗
Autonomous System (AS)    ✓                  ✗
VRF name                  ✓                  ✗
Routing protocols         ✓                  ✗
VPN Identifier (ID)       ✓                  ✗

SDN-based MPLS

Decoupling the control plane from the forwarding plane of an OpenFlow-based MPLS network makes it possible to centralize all routing and label distribution protocols (i.e. Border Gateway Protocol (BGP), LDP, etc.) in a logically centralized SDNC. In this architecture, forwarding elements implement only the three MPLS actions needed to establish an LSP. However, this architecture is not the only one proposed to deploy SDN-based MPLS. MPLS naturally decouples the service (i.e. IP unicast) from the transport provided by LSPs [START_REF] Szarkowicz | MPLS in the SDN Era[END_REF]. This decoupling is achieved by encoding instructions (i.e. MPLS labels) in packet headers. In [START_REF] Szarkowicz | MPLS in the SDN Era[END_REF] the authors propose to use MPLS as a "key enabler" to deploy SDN. In this work, to achieve data center connectivity, the authors propose to use the OpenContrail controller, which establishes an overlay network between Virtual Machines (VMs) based on BGP/MPLS protocols.

OpenContrail Solution

OpenContrail [START_REF] Singla | Day One: Understanding OpenContrail Architecture[END_REF] is an open source controller built on the BGP and MPLS service architecture. It decouples the overlay network from the underlay, and the control plane from the forwarding plane, by centralizing network policy management. Its nodes include:

-Control nodes, that propagate the low-level model to and from network elements.

-Analytics nodes, that capture real-time data from network elements, abstract it, and present it in a form suitable for applications to consume.

OpenFlow-based MPLS Networks

Among OpenFlow-enabled controllers, OpenDaylight provides a rich set of core modules allowing network elements to be programmed, including:

-Topology Manager: handles information about the network topology. At boot time, it builds the topology of the network based on the notifications coming from the switches. This topology can be updated according to notifications coming from other modules like the Device Manager and the Switch Manager.

-Statistics Manager: sends statistics requests to resources (switches), collects statistics and stores them in a database. This component implements an API to retrieve information like meters, tables, flows, etc.

-Forwarding Rules Manager: manages forwarding rules, resolves conflicts and validates rules.
This module communicates via the SBI with the equipment. It deploys the new rules in switches. -Switch Manager: provides information for nodes (network equipment) and connectors (ports). When the controller discovers the new device, it stores the parameters in this module. The latter provides an API for retrieving information about nodes and discovered links. -Host Tracker: provides information on end devices. This information can be switch type, port type, network address, etc. To retrieve this information, the Host Tracker uses ARP. The database of this module can also be manually enriched via the north API. -Inventory Manager: retrieves the information about the switches and its ports for keeping its database up to date. These modules provide some APIs at the NBI level allowing to program the controller to install flows. Using this API, the modules implemented in application layer are able to control the behavior of each equipment separately. Programming an OpenFlow switch consists of a tuple of two rules: match and action. Using this tuple, for each incoming packet, the controller can decide if the packet should be treated, if so which action should be applied on this packet. This programming capacity allows the appearance of a large API allowing to manipulate almost every packet types, including MPLS packets. OpenDaylight native MPLS API OpenDaylight proposes native APIs to make three MPLS actions, PUSH, POP, and SWAP, each LSR might apply on the packet. Using these APIs, the NBI application may install flows on the Ingress LSR pushing tag on a packet entering MPLS network. It may install flows on Transit LSRs allowing to swap tags along and routing the packet along the LSP. This application may install a flow on Egress LSR to send the packet to its final destination by popping the tag. In order to program the underlying networks behavior via this native API, the application needs to have a detailed perspective of the network and its topology, and a control on specified MPLS labels. Table 3.2 summarizes parameters that an application may control using the OpenDaylight native API. OpenDaylight VPN Service project Apart from OpenDaylight core functions, additional modules can be developed in this controller. In order to deploy a specific service, these modules benefit the information provided As discussed in this example, the VPN Service project and its interfaces are rich enough to deploy a VPN service via OpenDaylight. Nevertheless, in order to create a "sufficient" complex VPN service, the user must manage the information concerning the service, its sites and its equipments. Table 3.3 summarizes the information that a user should manage using this project. As it is shown in this table, the amount of manageable data, information about its BGP routers (local AS number and identifier) information about its BGP neighbor (AS number and IP address) information on VPN (VPN ID, RD and RT) and etc. can quickly increase exponentially. This large amount of information can make the service management more complex and reduce the QoS. It is important to note that the SDN controller of an operator manages a set of services on different network equipment shared between several clients. MPLS VPN Parameters That means that, for the sake of security, most of the listed information will not be made available to the customer. 
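To illustrate what driving the native MPLS actions through OpenDaylight's northbound REST interface can look like, the sketch below installs a "push label" flow on an ingress LSR. The node identifier, table, label value, prefix and, above all, the exact RESTCONF path and JSON schema are assumptions for illustration: they vary between OpenDaylight releases and should be checked against the version in use.

```python
# Hedged sketch: installing a "push MPLS label" flow on an ingress LSR through
# OpenDaylight's RESTCONF NBI. Node id, table, label value, prefix and the
# JSON layout are illustrative and release-dependent.
import requests

ODL = "http://odl.example.net:8181"
URL = (f"{ODL}/restconf/config/opendaylight-inventory:nodes/"
       "node/openflow:1/flow-node-inventory:table/0/flow/push-mpls-1")

flow = {"flow": [{
    "id": "push-mpls-1",
    "table_id": 0,
    "priority": 100,
    "match": {
        "ethernet-match": {"ethernet-type": {"type": 2048}},   # IPv4 traffic
        "ipv4-destination": "10.2.0.0/16"                       # remote-site prefix (assumed)
    },
    "instructions": {"instruction": [{
        "order": 0,
        "apply-actions": {"action": [
            {"order": 0, "push-mpls-action": {"ethernet-type": 34887}},          # 0x8847
            {"order": 1, "set-field": {"protocol-match-fields": {"mpls-label": 18}}},
            {"order": 2, "output-action": {"output-node-connector": "2"}}
        ]}
    }]}
}}

resp = requests.put(URL, json=flow, auth=("admin", "admin"))
resp.raise_for_status()
```

Equivalent POP and SWAP flows would be installed on the egress and transit LSRs, which is precisely the low-level, per-device knowledge that a northbound application must hold when it uses the native API directly.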
Outsourcing problematics Decoupling control plane and data plane of MPLS networks and outsourcing the second one into a controller brings several benefits in terms of service management, service agility and QoS [START_REF] Ali | MPLS-TE and MPLS VPNS with Openflow[END_REF][START_REF] Das | MPLS with a simple OPEN control plane[END_REF]. Centralized control layer offers a service management interface available to the customer. Nevertheless, this outsourcing and openness can create several challenges. The MPLS backbone is a shared environment among the customers of an operator. To deploy a VPN network, as discussed recently, the operator configures a set of devices, situated in the core and the edge of the network. This equipment, mostly provide several services to customers in parallel. The latter ones use the VPN connection as a reliable means of transaction of their confidential data. Outsourcing problematics 35 Outsourcing the control plane to an SDNC brings a lot of visibility on the traffic exchanged within the network. It is through this controller that a customer can create an on-demand service and manage this service dynamically. Tables 3.2 and 3.3 present in detail the information sent from the NBI to deploy a VPN service. These NBIs are proposed by two solutions 3.2.2.2 and 3.2.2.3. The granularity of this information gives to customer more freedom in the creation and management of his service. Moreover, beyond this freedom a customer having access to the NBI not only can modify the parameters of his own service (i.e. VPN) but also it can modify the parameters concerning the services of other customers. In order to control the customers access to the services managed by the controller, while maintaining service management agility, we propose to introduce a service management framework beyond the SDNC. From bottom-up perspective, this framework provides an NBI abstracting all rich SDNC functions and control complexities, discussed in Section 2.5.3. We strengthen this framework by adding the question of the access of the client to managed resources and services. Indeed, this framework must be able to provide a NBI of variable granularity, through which the customer is able to manage all three types of services discussed in Section 2.5.2: -Type-1 applications: The service abstraction model brought by the framework's NBI allows the customers side application to configure a service with minimum of information communicated between the application and the framework. The restricted access provided by the framework prevent unintentional or intentional data leaking and service misconfiguration. -Type-2 applications: On the southern side, internal blocks of the framework receive upcoming network events directly from the resources, or indirectly through the SDNC. On the northern side, these blocks open up an API to applications allowing them to subscribe to some metrics used for monitoring reasons. Based on receiving network events, these metrics are calculated by framework internal blocks and are sent to the appropriate application. -Type-3 applications: The controlled access to SDN based functions assured by the framework provides not only a service management API, but also a service control one, opened to the customers application. The thin granularity control API allows customers to have a low-level access to network resources via the framework. Using this API customers receive upcoming network events sent by devices, based of which they reconfigure the service. 
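As a purely hypothetical illustration of the three granularities such a framework NBI could expose, the sketch below shows one client call per application type; every endpoint, field and metric name is an assumption introduced for the example only.

```python
# Hypothetical sketch of the three NBI granularities the framework could expose
# to customer applications; endpoints, fields and metric names are assumptions.
import requests

FRAMEWORK = "https://sp.example.net/nbi"
TOKEN = {"Authorization": "Bearer <customer-token>"}

# Type 1: a one-shot, highly abstracted service request (no RD/RT, no VRF names).
requests.post(f"{FRAMEWORK}/services/vpn",
              json={"name": "acme-vpn", "sites": ["paris", "lyon", "nice"],
                    "cos": "gold"},
              headers=TOKEN)

# Type 2: subscribe to a service-level metric; the framework computes it from
# network events and pushes it to the customer's callback URL.
requests.post(f"{FRAMEWORK}/services/acme-vpn/subscriptions",
              json={"metric": "site-to-site-latency",
                    "endpoints": ["paris", "lyon"],
                    "callback": "https://client.example.com/metrics"},
              headers=TOKEN)

# Type 3: fine-grained control; on each pushed event the customer-side logic
# may answer with a re-configuration request such as this one.
requests.patch(f"{FRAMEWORK}/services/acme-vpn/sites/nice",
               json={"cos": "silver"}, headers=TOKEN)
```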
In order to provide a framework able to implement mentioned APIs, we need to analyze the service lifecycle in details. This analyze gives rise to all internal blocks of the framework and all steps they may take, from presenting a high-level service and control API to deploying a low-level resource allocation and configuration. Chapter 4 Service lifecycle and Service Data Model In order to propose different level of abstractions on the top of the service providers platform a service orchestrator should be integrated at the top of the SDNC. This system allows third party actor, called user or customer, to participate to all or part of his network service lifecyle. Nowadays, orchestrating an infrastructure based on SDN technology is one of the SDN challenges. This problematic has at our knowledge been once addressed by Tail-F which proposes a partial proprietary solution [START_REF] Chappell | Creating the Programmable Network, The Business case for Netconf/YANG in network devices[END_REF]. In order to reduce the Operation Support System (OSS) cost and also the TTM of services, Tail-F Network Control System (NCS) [START_REF]Tail-f Network Control System (NCS) -Datasheet[END_REF] introduces an abstraction layer on the top of the NBI in order to implement different services, including layer 2 or layer 3 VPN. It addresses an automated chain from the service request, on the one hand, to the device configuration deployment in the network, on the other hand. To transform the informal service model to a formal one this solution uses the YANG data model [START_REF] Bjorklund | YANG -A Data Modeling Language for the Network Configuration Protocol (NETCONF)[END_REF]. The service model is mapped into device configurations as a data model transformation. The proposed work doesn't however cover all management phases of the service lifecycle, specially service monitoring, maintenance, etc. , and also it doesn't study the possibility of opening up a control interface to a third party actor. Due to the proprietary nature of this product it is not possible to precisely analyze its internal structure. We present in this chapter a comprehensive solution to this problematic by identifying a reasonable set of capabilities of the NBI of the SDN together with the associated API. Our first contribution rests on a global analysis of an abstract model of the operator platform articulated to a generic but simple service lifecycle, described in Section 4.1, which takes into account the view of the user together with that of the operator. Tackling the service lifecycle The second part of this chapter, Section 4.2, is dedicated to service data model analysis, where we describe data model(s) used on each service lifecycle phases, both for client side and operator side. Service Lifecycle The ability of managing the lifecycle of a service is essential to implement it in an operator platform. Existing service lifecycle frameworks are oriented on human-driven services. For example, if a client needs to introduce or change an existing service, the operator has to configure the service manually. This manual configuration may take hours or sometimes days. It may therefore significantly affect the operators OpEx. It clearly appears that the operator has to re-think about its service implementation in order to provision dynamically and also to develop on-demand services. There are proposals in order to enhance new ondemand network resource provisioning. 
For instance, the GYESERS project [START_REF] Demchenko | GYESERS Project, Service Delivery Framework and Services Lifecycle Management in on-demand services/resources provisioning[END_REF] proposed a complex service lifecycle model for on-demand service provisioning. This model includes five typical stages, namely service request/SLA negotiation, composition/reservation, deployment/registration and synchronization, operation (monitoring), and decommissioning. The main drawback of this model rests on its inherent complexity. We argue that this complexity may be reduced by splitting the global service lifecycle into two complementary and manageable viewpoints: the client view and the operator view. Each of the two views captures only the information useful for the associated actor. The global view may however be obtained by composing the two partial views. In a fully virtualized network based on SDN, the SDNC is administratively managed by the Service Operator. The latter provides a programmable interface, called the NBI, on top of the SDNC, allowing the OSS and Service Client applications to configure on-demand services. In order to analyze the service lifecycle, and to propose a global model of the service lifecycle in this kind of network, an application classification analysis is necessary. In Section 2.5.2 we made an intuitive classification of SDN applications. This classification allows us to analyze the service lifecycle on both the operator and client sides.

Client side Service Lifecycle

Based on the application classification discussed in Section 2.5.2, we analyze the client-side service lifecycle for the three main application types.

Client side Service Lifecycle managed by Type-1 applications

Type-1 applications consist of applications creating a network service using the NBI. This category neither monitors nor modifies the service based on upcoming network events. The corresponding client-side lifecycle therefore reduces to two phases: service creation and service retirement.

Client side Service Lifecycle managed by Type-2 applications

This category of applications takes advantage of events coming up from the NBI to monitor the service. It is worth noting that this service may be created by the same application which monitors it.

-Service monitoring: Once created, the service may be used by the client for the negotiated duration. During this time some network and service parameters are monitored through the NBI.

Client side Service Lifecycle managed by Type-3 applications

In a more complex case, an application may create the service through the NBI, monitor the service through this interface, and, based on upcoming events, reconfigure the network via the SDNC. This type of control adds a retroactive step to the client-side service lifecycle, illustrated in Fig. 4.

Global Client-side Service Lifecycle

A global client-side service lifecycle is illustrated in Fig. 4.

-Service modification and update: The management of the operator's network may lead to an update of the service. This update can be issued because of a problem occurring during service consumption or a modification of the network infrastructure. It may be minimal, such as modifying a rule in one of the underlying devices, or it may impact the previous steps, with consequences on the service creation and/or on the service consumption.

-Service retirement: see the service retirement phase described in Section 4.1.1.1.

Operator Side Service Lifecycle

The Operator-side service lifecycle is illustrated in Fig. 4.5.
This service lifecycle consists of six main steps: -Service request: Once a service creation or modification request arrives from the users' service portal (through the NBI), the request manager negotiates the SLA and a high level service specification in order to implement it. It is worth noting that before agreeing the SLA the operator should ensure that the existing resources can cope with the requested service at the time it will be deployed. In case of unavailability, the request will be enqueued. -Service configuration: Based on the previous set of network resource configurations, several instances of corresponding virtual resources will be created, initialized and reserved 1 . The requested service can then be implemented on these created virtual resources by deploying network resource configurations generated by the compiler. -Service maintain, monitoring and operation: Once a service is implemented, its availability, performance and capacity should be maintained automatically. In parallel, a service log manager will monitor all service lifecycle. -Service update: During the service exploitation the network infrastructure may necessitate changes due to some execution problems or technical evolution requirements, etc. It leads to update which may impact the service in different way. The update may be transparent to the service or it may require to re-initiate a part of the first steps of the service lifecycle. -Service retirement: the service configuration will be retired from the infrastructure as soon as a retirement request arrives to the system. The service retirement issued by the operator is out of the scope of this work. We argue that this service lifecycle on the provider side is generic enough to manage the three types of applications, discussed in Section 2.5.2. The global view The global service lifecycle is the combination of both service lifecycles explained in Sections -Service Monitoring ↔ Service Maintain, Monitoring and Operating: client-side service monitoring, which is executed during the service consummation, is in parallel with operator-side service maintain, monitoring and operation. -Service Update ↔ Service Update: operator-side service maintain, monitoring and operation phase may lead to the service update phase in the client-side service lifecycle. -Service Retirement ↔ Service Retirement: In the end of the service life, the client-side service retirement phase will be executed in parallel with the operator-side service retirement. Chapter 4. Service lifecycle and Service Data Model Following we describe service model(s) used during each step of operator side service lifecycle, discussed in Section 4.1.2: -Service Request: to negotiate the service with the customer the operator relies on the service layer model. This model is the same as the model used on the client side service lifecycle. For example, for a negotiated VPN service, both Service Request step and client side service lifecycle, will use the same service layer model. An example of this model is discussed in Section 4.2.1 [START_REF] Moberg | A two-layered data model approach for network services[END_REF]. -Service Decomposition and Compilation: this step receives on the one hand, the service layer model and generates, on the other hand, device configuration sets. Comparing to proposed two-layered approach, this phases is equivalent to the intermediate layer transforming data models. 
A service layer model can be a fusion of several service models that for the sake of simplicity are merged into a global model. During the decomposition step this global model is broken down into elementary service models which are used in compilation step. They are finally transformed in sets of device models. The transformation of models can be done through two methods: -Declarative method is a straightforward template that makes a one to one mapping of a source data model of parameters to a destination one. For example a service model describing a VPN can be transformed to device configuration sets by one-to-one mapping of values given within the service model. In this case it is sufficient that the transformer retrieves required values from the first model to construct the device model based on a given template. -Imperative method is defined by an algorithmic expression used to map a data model to a second one. Usually this model contains some dynamic parameters, e.g. an unlimited list of interfaces. An example for this model can be a VPN service model in which each client's site, i.e. CE, has different number of up-links (1..n) connected to different number of PEs (1..m). In this case the transformation is not a simple one-to-one mapping any more, but rather an algorithmic process (here a loop) that creates one device model per service model. Using one of these methods, i.e. declarative or imperative data transformation, has its own advantage or drawback, hardly one of these methods would be superior than the other [START_REF] Pichler | Imperative versus Declarative Process Modeling Languages: An Empirical Investigation[END_REF][START_REF] Fahland | Declarative versus Imperative Process Modeling Languages: The Issue of Understandability[END_REF]. We argue that the choice of the transformation method used on compilation phase rests on the service model, its related device model and the granularity of parameters within each model. -Service configuration: to configure a resource, the device model generated by the transformation method of the previous step (i.e. compilation) is used. If this model is generated into the same data model known by the network element, no transformation method should be used. Otherwise another data transformation action should be done on the device model transforming the original device model to a network element compatible one. It is worth noting that since this transformation is a one-to-one mapping task, the data transformation can be done with the declarative method. -Service maintain, monitoring and operation: since the service maintain and operation process is directly done on network elements, the data model used for this phase 4.2. Service Data Model 47 is device model. Although the service model used for the monitoring task of this phase relies on the nature of the monitored resource. For example, if the service engineers and operators need to monitor the status of a resource they might use a monitoring method such as SNMP, BGP signaling-based monitoring [START_REF] Di Battista | Monitoring the status of MPLS VPN and VPLS based on BGP signaling information[END_REF], etc. the result of which is described in device model. Otherwise, if a service customer needs to monitor its service, e.g. monitoring the latency of two endpoints of a VPN connection, the monitoring information sent from the operator to the customer is transformed to a service data model. This bottom-up transformation can be done by declarative or imperative method. 
-Service update: updating a service consists in updating network element configurations, hence the data model used in this phase is a device data model. Nevertheless, this update may cause a modification of the service model represented to the customer. In this case, at the end of the service update process, a new service model is generated based on the final state of the network elements. This new model is the result of a bottom-up data transformation done through the declarative or imperative methods.
-Service retirement: decommissioning a service is made up of all the tasks done to remove service-related configurations from the network elements, and eventually to remove the resource itself. In order to remove device configurations, device data models are used. However, during the retirement phase the service model is also used. The data model transformation done in this phase depends entirely on the source of the retirement process. Indeed, if the service retirement is requested by the customer, the request arrives from the client side and is described in a service model.
Conclusion
In this chapter we conducted an analysis of the service lifecycle in an SDN-based ecosystem. This analysis led us to two general service lifecycles: client-side and operator-side. On the first side we discussed how an application implementing network services using an SDNC can contribute to the client-side service lifecycle. For this reason, for each application category discussed in Section 2.5.2, we presented a client-side service lifecycle model, discussing the additional steps that each category may add to this model. Finally, a global client-side service lifecycle was presented. This global model contains all the steps needed to deploy each type of application. We also presented a global model concerning the operator-side service lifecycle. It represents the model that an operator may take into account to manage a service from the service negotiation to the service retirement phases. In the second part of this chapter we discussed the data model used by each service lifecycle phase. Through an example we explained in detail the manner in which a data model is transformed from a source model into a destination one. We argue that presenting a service lifecycle model, on one side, allows the implementation of a global SDN orchestration model managed by an operator. On the other side, this model helps us understand the behavior of applications, and in this way it simplifies the specification of the NBI in forthcoming studies. Presenting the data model also describes in detail the behavior of the management system at each service lifecycle step. It also permits the definition of the operational blocks and their relations, allowing the operator-side service lifecycle to be implemented.
An SDN-based Framework For Service Provisioning
In this chapter we present a framework involving a minimal set of functions required to manage any network service conforming to the service lifecycle model presented in the previous chapter.
Orchestrator-based SDN Framework
Service management processes, as illustrated in the previous example, can be divided into two more generic families: the first one managing all the steps executing service-based tasks, from service negotiation to service configuration and service monitoring, and the second one managing all resource-based operations. These two families, which together manage the whole operator-side service lifecycle (discussed in 4.1.2), can be represented as the framework illustrated in Fig. 5.3.
The model is composed of two main orchestration layers:
-Service Orchestrator (SO)
-Resource Orchestrator (RO)
The "Service Orchestrator" is dedicated to the service part operations and conforms to the operator-side service lifecycle (cf. Section 4.1.2). The "Resource Orchestrator" manages the resource part operations:
-Resource Reservation
-Resource Monitoring
Service Orchestrator (SO): This orchestrator receives service orders and initiates the service lifecycle by decomposing complex and high-level service requests into elementary service models. These models make it possible to derive the type and the size of the resources needed to implement that service. The SO requests the virtual resource reservation from the lower layer and deploys the service configuration on the virtual resources through an SDNC.
Resource Orchestrator (RO): This orchestrator, which manages physical resources, reserves and initiates virtual resources. It maintains and monitors the physical resource states using the southbound interface.
Internal structure of the Service Orchestrator
As mentioned in Fig. 5.3, the first orchestrator, SO, contains five main modules:
-SRM
-SDCM
-SCM
The SCM can be considered as the resource driver of the SO. This module is the interface between the orchestrator and the resources. Creating such a module facilitates the processes run at the upper layers of the orchestrator, where the service can be managed independently of the existing technologies, controllers and protocols implementing and controlling the resources. On the one hand, this module communicates with the different resources through its SBI. On the other hand, it exposes a universal resource model to the other SO modules, specifically to the SDCM. Configuring a service by the SCM requires a decomposition into two tasks: creating the resource in a first step (if the resource does not exist, cf. arrow 4 of Fig. 5.6), and configuring that resource in a second step (cf. arrow 5 of Fig. 5.6). In our example, once the PE3 ID and the required configuration are received from the SDCM side, the SCM first fetches the management IP address of PE3 from its database. Second, if the requested vRouter is missing on the PE, it creates a vRouter (cf. arrow 4 of Fig. 5.6). Third, it configures that vRouter to fulfill the requested service. In order to create the required resource (i.e. to create the vRouter on PE3), the SCM sends a resource creation request to the RO (arrow 4 of Fig. 5.6). Once the virtual resource (vRouter) is initiated, the RO acknowledges the creation of the resource by sending the management IP address of that resource to the SCM. All the latter then needs to do is push the generated configuration to that vRouter using its management IP address (arrow 6 of Fig. 5.6). The configuration of the vRouter can be done via different methods. In our example the vRouter is an OpenFlow-enabled device programmable via the NBI of an SDNC. To configure the vRouter, the SCM uses its interface with the SDNC controlling this resource.
SCM - SDN Controller (SDNC) Interface
As we explained, the configuration of part or all of the virtual resources used to fulfill a service can be done through an SDNC. In Section 3.2.2.1 we analyzed the architecture of the OpenDaylight controller, which provides a rich set of modules allowing network elements to be programmed. This controller exposes on its NBI some Representational State Transfer (REST) APIs allowing flows to be programmed thanks to its internal Flow Programmer module.
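As a rough illustration of the kind of call the SCM could issue through this interface, the sketch below programs a flow that pushes an MPLS label on traffic entering a vRouter. It is only an assumption-laden example: the RESTCONF path, the JSON field names, the controller address and the credentials depend on the OpenDaylight version and on how its openflowplugin models flows, so they should be read as indicative rather than as the exact API.

# Sketch: program a "push MPLS label" flow on a vRouter through the
# OpenDaylight RESTCONF NBI. Paths and field names are indicative only.
import json
import requests

ODL = "http://opendaylight:8181"      # assumed controller address
DPID = "openflow:1"                    # DPID reported by the vRouter
FLOW_URL = (ODL + "/restconf/config/opendaylight-inventory:nodes/node/"
            + DPID + "/table/0/flow/push-mpls-to-siteA")

flow = {
    "flow": [{
        "id": "push-mpls-to-siteA",
        "table_id": 0,
        "priority": 100,
        "match": {"in-port": "1"},                          # port facing the CE
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [
                {"order": 0, "push-mpls-action": {"ethernet-type": 34887}},
                {"order": 1, "set-field": {"protocol-match-fields": {"mpls-label": 300}}},
                {"order": 2, "output-action": {"output-node-connector": "2"}}  # port to the P router
            ]}
        }]}
    }]
}

resp = requests.put(FLOW_URL, data=json.dumps(flow),
                    headers={"Content-Type": "application/json"},
                    auth=("admin", "admin"))
resp.raise_for_status()

The paragraphs that follow explain where each of these parameters (DPID, port numbers, MPLS label, next hop) comes from.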
These APIs allow to program the behavior of a switch based on a "match" and "action" tuple. Among all actions done on a received packet, the Flow Programmer allows to push, pop and swap MPLS labels. In order to program the behavior of the initiated vRouter, we propose to use the API provided by OpenDaylight Flow Programmer. The vRouters role is to push MPLS label into packets going out from Site D to other sites (A and C), and to pop the MPLS labels from incoming packets sent from these remote sites. To program each flow OpenDaylight requires the Datapath ID (DPID) of the vRouter, inbound and outbound port numbers, MPLS labels to be pushed, popped and swapped, and the IP address of next hops where the packet should be sent to. In the following we will discuss how these information is managed to be sent to the SDNC. Orchestrator-based SDN Framework 57 DPID: During the resource creation time, the vRouter is programmed to be connected automatically to OpenDaylight. The connection establishment between these two entities is explained in OpenFlow specification, where the vRouter sends its DPID to the controller via OpenFlow features reply. This DPID, known by SCM and SDNC, is further used as the unique ID of this virtual resource. Port numbers: Inbound and outbound port numbers are practically interface numbers of the vRouter created by the SO. To create a virtual resource, the SCM relies on a resource template explaining the interface ordering of that resource. This template describes which interface is used for management purpose, which interface is connected to the CE and which one is connected to the P router inside the MPLS network. This template is registered inside the database of the SCM, and this module uses this template to generate REST requests sent to the SDNC. MPLS labels: MPLS labels are other parameters needed to program the flow inside the vRouter. These labels are generated and managed by SDCM. This module controls the consistency of labels inside a managed MPLS network. Labels are generated in this layer and are sent to the SCM to use in service deployment step. Next hop IP address: When a packet enters to vRouter from the CE side, the MPLS label will be pushed into the packet and it will be sent to the next LSR. Knowing that the MPLS network, including LSRs, is managed and configured by the SO, this one has an updated vision of the topology of this network. The IP address of the P router directly connected to the PE is one of information that can be exported from the topology database of SO managed by SCM. Once the vRouter is created and configured on the PE3, the LSP of the MPLS network also should be updated. At the end of the vRouter creation step, the customer owns three sites, each one connected to a PE hosting a vRouter. The SCM configures on each vRouter (1 and 2) the label that should be pushed to each packet sent to Site D and vice versa (cf. arrows 6, 8, 10 of Fig. 5.6). It configures also the P router connected directly to PE3 to take into account the label used by the vRouter3 (cf. arrow 12 of Fig. 5.6). Service Monitoring Manager (SMM) In parallel to the three main modules explained previously, the SO contains a monitoring system, called SMM, that monitors vertically the functionality of all orchestrators modules from the SRM to the SCM and its SDNC. This module has two interfaces to external part of the orchestrator. 
On the one hand, it receives upcoming alarms and statistics from the lower orchestrator, RO, and on the other hand it communicates the service statistics to the external application via the NBI. Internal architecture of the Resource Orchestrator As it is mentioned in previous sections, 4.1.2 and 5.2.1.3, during the service configuration phase, the RO will be called to initiate resources required to implement that service. In the service configuration step, if a resource is missing, the SCM will request the RO to initiate the resource on the specified location. The initiated resource can be virtual or physical according to the operator politic and/or negotiated service contract. Existing cloud orchestration systems, such as OpenStack platform [START_REF] Sefraoui | OpenStack: Toward an Open-source Solution for Cloud Computing[END_REF], are good candidates to implement a RO. OpenStack is a modular cloud orchestrator that permits providing and managing a large range of virtual resources, from computing resource, using its Nova module, to L2/L3 LAN connection between the resources, using its Neutron module. The flexibility of this platform, the variety of supported hypervisors and its optimized resource management [START_REF] Huanle | An OpenStack-Based Resource Optimization Scheduling Framework[END_REF] can help us to automatically provision virtual resources, including virtual servers or virtual network units. We continue exploiting the proposed framework based on a RO implemented by the help of OpenStack. In order to implement and manage required resources needed to bring up a network service, an interface will be created between the SO and the RO where the SCM can communicate to the underlying OpenStack platform providing the virtual resource pool. This interface provides a resource management abstraction to SO. The resource request will be passed through various internal blocks of the OpenStack, such as Keystone that controls the access to the platform. As our study is mostly focused on service management, in this proposal we don't describe the functionality of each OpenStack module in details. In general, the internal architecture of the required RO is composed of two main modules, one used to provide virtual resources, a composition of Nova, Cinder, Glance and Swift modules of OpenStack, and another one used to monitor these virtual resources, thanks to the Ceilometer module of OpenStack. If the RO faces an issue it will inform the SO which is consuming the resource. The service run-time lifecycle and performance is monitored by the SO. When it faces an upcoming alarm sent by the RO or a service run-time problem occurring on virtual resources, it will either perform some task to resolve the problem autonomously or send an alarm to the service consumer application (service portal). Creating a virtual resource requires a set of information, software and hardware specifications, such as the version of the firmware installed inside that resource, number of physical interfaces, the capacity of its Random-Access Memory (RAM), and its startup configurations like the IP address of the resource. For example to deploy a vRouter, the SO needs a software image which installs the firmware of this vRouter. Like all computing resources, a virtual one also requires some amount of RAM and Hard Disk space to use. In OpenStack world, these requirements are gathered within a Flavor. Fig. 5.7 illustrates a REST call, sent from the SCM to the RO, requesting the creation of the vRouter. 
In this example, the SCM requests the creation of the "vPE1", that is a vRouter, on 5.3. Implementation 59 a resource called "PE1" using an image called "cisco-vrouter" and the flavor "1". curl -X POST -H "X-Auth-Token:\$1" -H "Content-Type: application/json" -d ' { "server": { "name": "vPE1", "imageRef": "cisco_vrouter", "flavorRef": "1", "availability-zone" : "SP::PE1", "key_name" : "OrchKeyPair" } } ' http://resourceorchestrator:8774/v2/admin/servers | python -m json.tool FIGURE 5.7: REST call allowing to reserve a resource Framework interfaces The composition of this framework requires the creation of three interfaces (cf. Fig. 5.3). The first one, the NBI, provides an abstracted service model enriched by some value-added services to the third party application or service portal. The second one, the SBI, interconnects the SO to the resource layer through the SDNC. This interface permits the SCM to configure and control virtual or physical resources. Inter-orchestrator (middle) interfaces, is the third interface that is presented for the first time in this framework. This interface interconnects the SO to the ROs. The modular aspect created by this interface permits to implement a distributed orchestration architecture. This architecture allows one or several SO(s) to control and communicate to one or several RO(s). Implementation In order to describe the internal architecture of the framework, we implement different layers of the Service Orchestrator through the MPLS VPN deployment example. Hardware architecture Fig. 5.8 shows the physical architecture of our implementation. This one is composed mainly by three servers each one implementing one of the main blocks: -Server1 implements the Mininet Platform [START_REF] De Oliveira | Using Mininet for emulation and prototyping Software-Defined Networks[END_REF]. For the sake of simplicity and because of lack of resources, we implement the infrastructure of our implementation based on a Mininet platform. This one implements all resources, routers and hosts, needed to deploy our desired architecture. -Server2 implements OpenDaylight SDN controller [START_REF]The OpenDaylight SDN Platform[END_REF]. For this implementation we use the Carbon version of the OpenDaylight. From its SBI this controller manages resources implemented by the Mininet platform based on OpenFlow protocol. Implementation 61 For this implementation we study the case where all three customer sites are already connected to the core network and the physical connection between CE and PE routers is established. Software architecture Given that our analysis focuses on the architecture of the SO, in this implementation we study the case where the required resource already exists. In this case the deployment of the service relies on the SO and its related SDNC. Fig. 5.10 shows the internal architecture and the class diagram of the implemented SO. The architecture of the orchestrator is based on the object oriented paradigm developed in Python 2.7. In our implementation each SOs layer is developed in a separated package: Service Request Manager (SRM): contains several classes including Service_request, Customer and Service_model. On the one hand it implements a REST API used by the customer, on the other it manages all available services proposed to the customer and the service requested arrived from the customer. For this, it uses two other objects (classes) each one controlling the resources managed in this layer. 
The first one, the Customer class, manages the customer, its subscribed services and the services available to him. The second one, the Service_model, manages the customer-facing service models. This model is used to make a representation of the service to the customer.
The decomposition and compilation logic is implemented in the next package. In a first step, this module retrieves from the Topology module the list of PEs connected to each remote site. Using the integrated Dijkstra engine of the Topology module, it calculates the shortest path to reach the other sites from each PE. Then, using the labels generated by the Label_manager and the device model templates managed by the Flow_manager, it generates a list of device models to be deployed on the underlying network devices to create the required LSP. In our implementation we use a device model database containing all the models needed to create an MPLS network on an OpenFlow-based infrastructure. This database is managed by the Flow_manager module, and each of its entries is a flow template. The negotiated MPLS VPN service model that drives this compilation (Fig. 5.11) begins as follows: { "service_type": "mpls_vpn", "customer_id": "customer_1", "properties": { "ce_list": [ { "ce_id": "ce1", "lan_net": "19...
Conclusion
In this chapter, we proposed an SDN framework derived from the operator-side service lifecycle discussed in 4.1.2. This framework, which is structured in a modular way, encapsulates the SDNC with two orchestrators, SO and RO, dedicated respectively to the management of services and resources. The proposed framework is externally limited by the NBI and SBI and internally clarifies the border between the two orchestrators by identifying an internal interface between them, called the middle interface, which provides a virtual resource abstraction layer on top of the RO. Our approach gives the foundation for a rigorous definition of the SDN architecture. It is important to note the difference between the SO and its complementary system, the RO. The RO provisions, maintains and monitors the physical devices hosting several virtual resources. It has no view of the running configuration of each virtual resource. Unlike the RO, the SO manages the internal behavior of each resource. It is also responsible for interconnecting several virtual resources to conduct a required service.
Bring Your Own Control (BYOC)
We identify the compilation and monitoring tasks, done at the operator-side service lifecycle, as potentially interesting candidates, some parts of which may be delegated to the GC. Such an outsourcing moreover leads to enriching in some ways the APIs described in Fig. 6.2.
Applying the BYOC concept to Type 1 services
Configuring a service in the SO is initiated after the Service compilation phase of the operator-side service lifecycle (cf. 4.1.2). This phase translates the abstracted network models into detailed network configurations thanks to the integrated network topology and statement databases. In order to apply the BYOC concept, all or a part of the service compilation phase may be outsourced to the application side, represented by the GC. For example, the resource configuration set of a requested VPN service, discussed in Section 5.1, can be generated by a GC. This delegation needs an interface between the SDCM and the GC. We suggest enriching the first API with dedicated primitives allowing the GC to proceed with the delegated part of the complete compilation process (cf. the primitive "Outsourced (Service Compilation)" in Fig. 6.3).
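To make this compilation step more concrete, the sketch below shows what a purely hypothetical transformer could look like, whether it runs inside the SDCM or is delegated to a GC. The first function is declarative, a one-to-one mapping driven by a fixed template; the second is imperative, looping over the CE list of the service model to produce one device model per site. All names, fields and values are illustrative assumptions and do not reproduce the thesis implementation.

# Sketch of the declarative and imperative transformation methods applied
# to an MPLS VPN service model. Field names are illustrative assumptions.

SERVICE_MODEL = {
    "service_type": "mpls_vpn",
    "customer_id": "customer_1",
    "properties": {
        "ce_list": [
            {"ce_id": "ce1", "lan_net": "10.1.0.0/24", "pe_id": "pe1"},
            {"ce_id": "ce2", "lan_net": "10.2.0.0/24", "pe_id": "pe2"},
        ]
    },
}

def declarative_transform(service_model, template):
    """One-to-one mapping: copy values from the service model into the
    placeholders of a fixed device-model template."""
    device_model = dict(template)
    device_model["vrf_name"] = service_model["customer_id"]
    device_model["service"] = service_model["service_type"]
    return device_model

def imperative_transform(service_model, label_pool):
    """Algorithmic mapping: one device model per CE/PE attachment, each
    with its own MPLS label taken from a label pool."""
    device_models = []
    for ce in service_model["properties"]["ce_list"]:
        device_models.append({
            "pe_id": ce["pe_id"],
            "customer_route": ce["lan_net"],
            "mpls_label": label_pool.pop(0),   # label chosen by the compiler
        })
    return device_models

if __name__ == "__main__":
    template = {"vrf_name": None, "service": None, "rd": "65000:1"}
    print(declarative_transform(SERVICE_MODEL, template))
    print(imperative_transform(SERVICE_MODEL, label_pool=[300, 301]))

Whether such a routine runs inside the operator's SDCM or is exposed to a GC through the "Outsourced (Service Compilation)" primitive is exactly the design choice that BYOC opens up.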
It is worth pointing out that the compilation process assigned to the GC could be partial because the operator may want to maintain the confidentiality of sensitive information, as for example the topology of its infrastructure. Applying the BYOC concept to Type 2 services In this case the application may configure a service and monitor it via the NBI. This type involves the compilation phase, discussed earlier, and the monitoring one. Outsourcing the monitoring task from the Controller to the GC, thanks to the BYOC concept, requires an asynchronous API that permits to transfer the real-time network events to GC during the monitoring phase. The control application implemented in the GC observes the network state thanks to the real-time events sent from the Controller. A recent work [START_REF] Aflatoonian | An asynchronous push/pull communication solution for Northbound Interface of SDN based on XMPP[END_REF] proposed an XMPP-based push/pull solution to implement an NBI that permits to communicate the networks real-time events to the application for a monitoring purpose. The outsourced monitoring is located in the second API of Fig. 6.2 and could be expressed by refining some existing primitives of the API (cf. "Outsourced (Service Monitoring Req./Resp.)" of Fig. 6.3). Applying the BYOC concept to Type 3 services This type concerns the application that configures a service (Type 1) and monitors it (Type 2), according to which it may modify the network configuration and re-initiate the partial compilation process (Type 1). The second API of Fig. 6.2 should be sufficient to implement such type of service even if it may necessitate non trivial refinements or extensions in order to be able to collect the information needed by GC. The delegation of the control induced by this kind of GC comes exactly from the computation of the new configuration together with its re-injection in the network, through the SDNC, in order to modify it. Northbound Interface permitting the deployment of a BYOC service Requirements for specification of the NBI The GC is connected to the SO through the NBI. This is where the service operator communicates with the service customer and sometimes couples with the client side applications, orchestrators, and GC(s). In order to accomplish these functionalities certain packages should be implemented. These packages maintain two categories of tasks: 1) Service creation, configuration and modification, and 2) Service monitoring and BYOC service control. Synchronous vs Asynchronous interactions The former uses a synchronous interaction that implements a simple request/reply communication that permits the client-side application to send service requests and modifications, while the latter uses an asynchronous interaction where a notification will be pushed to the subscribed service application. The asynchronous nature of this package makes it useful for sending control messages to the GC. From its SBI, the SO tracks the network events sent by resources. Based on the service profile related to the resources, it sends events to concerning modules implemented either inside the SO or within an external GC. Push/Pull paradigm to structure the interactions The communication between the GC and the SO is based on Push-and-Pull (PaP) algorithm [START_REF] Bhide | Adaptive Push-Pull: Disseminating Dynamic Web Data[END_REF] that is basically used for the HTTP browsing reasons. 
In this proposal we adapt this algorithm to determine the communication method of the NBI, which uses a publish/subscribe messaging paradigm: the GC subscribes to the SO. To manage BYOC-type services, Decision Engine (DE) and Service Dispatcher (SD) modules are implemented within the SO. The DE receives the messages sent by the network elements and, based on them, decides whether to treat a message inside the SO or to forward it to the GC. Messages that need to be analyzed within a GC are sent by the DE to the SD. The SD distributes these messages to every GC that has subscribed to the corresponding service. The interaction proceeds as follows:
a: The service customer requests a service from the SRM, through the service portal. In addition to the service request confirmation, the system sends the subscription details about the way that service is managed, internal or BYOC.
b: Using the subscription details, the user connects to the SD unit and subscribes to the relevant service.
c: When a control message, e.g. an OpenFlow PacketIn message, is sent to the DE, the DE creates a notification and sends it to the SD.
d: The SD unit pushes the event to all subscribers of that specific service.
The WebSockets initiative [START_REF] Fette | The WebSocket Protocol[END_REF] could eventually be interesting, but this solution is still under development. As mentioned in the work of Franklin and Zdonik [START_REF] Franklin | Data in Your Face: Push Technology in Perspective[END_REF], push systems are in practice implemented with the help of a periodic pull, which may cause a significant load on the network. Alternative solutions like Asynchronous JavaScript and XML (AJAX) also rely on client-initiated messages. We argue that XMPP [START_REF] Saint | Extensible Messaging and Presence Protocol (XMPP): Core, RFC 3920[END_REF] could be a good candidate: thanks to its maturity and simplicity it may cope with all the previous requirements.
XMPP As An Alternative Solution
XMPP [START_REF] Saint | Extensible Messaging and Presence Protocol (XMPP): Core, RFC 3920[END_REF], also known as Jabber, was originally developed as an Instant Messaging (IM) protocol by the Jabber community. This protocol, formalized by the IETF, uses an XML streaming technology in order to exchange XML elements, called stanzas, between any two entities across the network, each one identified by a unique Jabber ID (JID). The JID format is composed of three elements, "node@domain/resource", where the "node" can be a username, the "domain" is a server and the "resource" can be a device identifier. The XMPP Standards Foundation enlarges the capabilities of this protocol by providing a collection of XMPP Extension Protocols (XEPs) [START_REF]XMPP Extensions[END_REF]; XEP-0072 [START_REF]XEP-0072: SOAP Over XMPP[END_REF], for example, defines methods for transporting SOAP messages over XMPP. Thanks to its flexibility, XMPP is used in a wide range of domains, from simple applications such as instant messaging to larger ones such as remote computing and cloud computing [START_REF] Hornsby | From instant messaging to cloud computing, an XMPP review[END_REF]. The work [START_REF] Wagener | XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services[END_REF] shows how XMPP is a compelling solution for cloud services and how its push mechanism eliminates unnecessary polling. XMPP forms a push mechanism where nodes can receive messages and notifications whenever they occur on the server.
This asynchronous nature eliminates the need for periodic pull messages, and it allows XMPP to support the two main crucial packages listed at the beginning of this section: packages that execute 1) service creation, configuration and modification, and 2) service monitoring and BYOC service control. The NBI security problem is also considered in this proposal. The XMPP specifications describe security functions as core parts of the protocol [START_REF] Saint | Extensible Messaging and Presence Protocol (XMPP): Core, RFC 3920[END_REF] and all XMPP libraries support these functionalities by default. XMPP provides secure communication through an encrypted channel (Transport Layer Security (TLS)) and restricts client access via the Simple Authentication and Security Layer (SASL), which permits XMPP servers to accept only encrypted connections. All this signifies that XMPP is well suited for constructing a secured NBI allowing a BYOC service to be deployed.
NBI Data Model
In order to hide the service implementation complexity, services can be represented as a simple resource model described in a data modeling language. The YANG data modeling language [START_REF] Bjorklund | YANG -A Data Modeling Language for the Network Configuration Protocol (NETCONF)[END_REF] can be a good candidate for this purpose. A YANG data model can be translated into an equivalent XML syntax called YANG Independent Notation (YIN) that, on the one hand, allows the use of a rich set of XML-based tools and, on the other hand, can easily be transported through the XMPP-based NBI.
Simulation results
In order to evaluate this proposal, we assessed the performance of the XMPP NBI implementation in terms of delay and overhead costs by comparing a simple GC using an XMPP-based NBI with the same GC using a RESTful-based one. When a system is meant to be near real-time, the delay is the first parameter to be reduced. In a multi-tenant environment, the system load is the other important parameter to take into account. To measure these parameters and compare them in the XMPP case versus the REST one, we implemented a simple GC that exploits the NBI to monitor packets belonging to a specific service, in our case an HTTP filtering service. The underlying network is simulated thanks to the Mininet [START_REF] De Oliveira | Using Mininet for emulation and prototyping Software-Defined Networks[END_REF] project. We implemented two NBIs, one XMPP-based and one RESTful, which the GC accessed in parallel. We use the term "event" to describe the control messages sent from the SO to the GC. In the XMPP-based NBI case this event is pushed in near real-time, thanks to the XMPP protocol. For the RESTful one, the event messages have to be stored in a temporary memory before the GC pulls them up with REST requests. In this case, to simulate a real-time process and to reduce the delay, REST requests are sent at short time intervals. With the XMPP-based NBI, the event is delivered with a delay of 0.28 ms, and the overhead of this NBI is 530 bytes, which is the size of the XMPP message needed to carry the event. With the RESTful NBI, the GC has to poll periodically to obtain the same information, with a request/response exchange of at least 293 bytes. In order to reduce the delay, the time interval between requests has to be scaled down, and these periodic request/response messages create a huge overhead on the NBI.
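As an illustration of the client side of such an NBI, the sketch below shows how a GC could subscribe to the Service Dispatcher and receive pushed service events over XMPP. It uses the SleekXMPP Python library as one possible choice; the JIDs, credentials and JSON payload convention are assumptions, not the interface actually exposed by the prototype.

# Sketch of a GC receiving pushed service events over an XMPP-based NBI.
# JIDs, credentials and the payload convention are assumptions.
import json
import sleekxmpp

class GuestController(sleekxmpp.ClientXMPP):
    def __init__(self, jid, password, dispatcher_jid):
        super(GuestController, self).__init__(jid, password)
        self.dispatcher_jid = dispatcher_jid
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_event)

    def on_start(self, event):
        self.send_presence()
        self.get_roster()
        # Subscribe to the BYOC service handled by the Service Dispatcher.
        self.send_message(mto=self.dispatcher_jid,
                          mbody=json.dumps({"subscribe": "http_filtering"}))

    def on_event(self, msg):
        if msg["type"] not in ("chat", "normal"):
            return
        event = json.loads(msg["body"])
        # Here the GC would run its own control logic and, if needed,
        # push a new configuration back through the synchronous API.
        print("event received:", event)

if __name__ == "__main__":
    gc = GuestController("gc@operator.example/ips", "secret",
                         "dispatcher@operator.example/sd")
    if gc.connect():
        gc.process(block=True)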
Conclusion
In the first part of this chapter we introduced BYOC as a new concept providing a convenient framework structuring the openness of the SDN on its northbound side. From the lifecycle characterizing the services deployed in an SDN, we derived the parts of services whose control may be delegated by the operator to an external GC through dedicated APIs located in the NBI. We presented the EaYB business model through which the operator monetizes the openness of its SDN platform thanks to the BYOC concept. Several use cases with a potential interest in being implemented with the BYOC concept were briefly presented. In the second part we determined the basic requirements to specify an NBI that tightly couples the SDN framework presented earlier with the GC. We proposed an XMPP-based NBI conforming to the previously discussed requirements and allowing the BYOC service to be deployed. Apart from the numerous advantages of the XMPP-based NBI, its main limitation concerns the transfer of large service descriptions. These are restricted by the "maximum stanza size" value that limits the maximum size of the XMPP message processed and accepted by the server. This value can however be parameterized when deploying the XMPP server.
Conclusions and Future Research
This dissertation set out to investigate the role that SDN plays in various aspects of network service control and management, and to use an SDN-based framework as a service management system. In this final chapter, we review the research contributions of this dissertation, as well as discuss directions for future research.
Contributions
The following are the main research contributions of this dissertation.
-A double-sided service lifecycle and data model (Chapter 4)
At the beginning of this dissertation, SDN-based service management was one of the unanswered questions. We distinguished three types of service customers: the first type is the customer who simply requests and consumes a service. The second type is the customer who monitors his service, and the third one is the customer who, using the management interface, receives some service parameters based on which he reconfigures or updates that service. Based on this analysis, the client-side service lifecycle can be modified; we analyzed all the phases that each service type might add to the service lifecycle. On the other side, the operator-side service lifecycle analysis presents a service lifecycle model representing all the phases an operator should cross to deploy, configure and maintain a service. This double-sided analysis makes it possible to determine the actions that each service customer and operator can take on a service, which is the common object between a customer and an operator. In a second step, we presented the data model of each lifecycle side, based on a double-layered data model approach. In this approach a service can be modeled in two data models, service and device, and an elementary model, called transformation, defines how one of these two models can be transformed into the other one.
In the first part of this chapter we introduced BYOC as a concept allowing the control of all or a part of a service to be delegated, through the NBI, to an external controller, called the "Guest Controller" (GC). The latter might be managed by the same customer requesting and consuming the service or by a third-party operator. Opening a control interface at the top of the SDN platform requires some specifications at the NBI level. In the second part of this chapter we discussed the requirements of the NBI allowing the BYOC API to be opened.
Based on these requirements we proposed the use of XMPP as the protocol allowing to deploy such an API. Future researches The framework and its multilevel service provisioning interface introduced in this dissertation, provides a new service type, called BYOC, to future research. While this work has demonstrated the potential of opening a tuned control access to a service though a dynamic IPS service in Chapter 7, many opportunities for extending the scope of this thesis remain. In this section we discuss some of these opportunities. A detailed study of the theoretical and technical approach of the BYOC Opening up the control interface to a GC by BYOC concept may create some new revenue resources. Indeed, BYOC allows not only to the service customer to implement its personalized control algorithm and fully managing its service, but also it allows the operator to monetize the openness of its SDN-based system. We presented the Earn as You Bring (EaYB) business model allowing the operator to resell a service to a customer controlled by third party GC [START_REF] Aflatoonian | BYOC: Bring Your Own Control a new concept to monetize SDN's openness[END_REF]. Opening the control platform and integrating an external Controller in a service production chain, however, may create some security and complexity problems. One of the fundamental issues concerns the impact of the BYOC concept on the performance of the network Chapter 8. Conclusions and Future Research controller. In fact, externalizing the control engine of a service to a GC may create a significant delay on decision step of the controller, the delay that will have a direct effect on the QoS. The second issue concerns the confidentiality of information available to the GC. By opening its control interface, the operator provides the GC with information that may be confidential. To avoid this type of security problem, a data access control mechanism must take place, through which the operator controls all the data communicated between the controller and the GC while maintaining the flexibility of the BYOC model [START_REF] Jiang | A Secure Multi-Tenant Framework for SDN[END_REF]. The analysis of advantages of BYOC model and the complexity and security issues that BYOC may bring to the service management process can be the subject of a future work. This analysis requires a more sophisticated study of this concept, the potential business model that it can introduce (ex. EaYB), the methods and protocols used to implement the northern interface and to control the access to resources exposed to the GC, and the real impact of this type of services on the performance of services. BYOC as a key enabler to flexible NFV service chaining A NFV SC defines a set of Service Function (SF)s and the order of these SF through which a packet should pass in the downlink and uplink traffic. Chaining network elements to create a service is not a new subject. Indeed, legacy network services are made of several network functions which are hardwired back-to-back. These solutions however remain difficult to deploy and expensive to change. As soon as software-centric networking technologies, such as SDN and NFV brought the promise of programmability and flexibility to the network, the flexible service chaining became one of the academic challenges. The flexible service chaining consists in choosing the relevant SC through the analysis of traffic. 
There are several initiatives trying to propose an architecture for the creation of Service Function Chaining (SFC) [START_REF] Onf | L4-L7 Service Function Chaining Solution Architecture[END_REF][START_REF] Halpern | Service Function Chaining (SFC) Architecture[END_REF][START_REF]ETSI. Network Functions Virtualisation (NFV); Architectural Framework. TS ETSI GS NFV 002[END_REF]. Among these solutions, IETF [START_REF] Halpern | Service Function Chaining (SFC) Architecture[END_REF] and ONF [START_REF] Onf | L4-L7 Service Function Chaining Solution Architecture[END_REF] propose to use a traffic classifier at the ingress point of the SC, allowing traffic flows to be classified based on policies. This classification allows a path ID to be assigned to the flow, which is used to forward the flow on a specific path, called the Service Function Path (SFP).
BYOC as a key concept leading to 5G dynamic network slicing
5th generation (5G) networks need to support new demands from a wide variety of service groups, from e-health to broadcast services [START_REF]5G white paper[END_REF]. In order to cover all these domains, 5G networks need to support diverse requirements in terms of network availability, throughput, capacity and latency [START_REF] Salah | 5g service requirements and operational use cases: Analysis and metis ii vision[END_REF]. In order to deliver services to such wide domains and to answer these various requirements, network slicing has been introduced in 5G networks [START_REF] Ngmn Alliance | Description of network slicing concept[END_REF][START_REF] Galis | Autonomic Slice Networking-Requirements and Reference Model[END_REF][START_REF] Jiang | Network slicing management & prioritization in 5G mobile systems[END_REF]. Network slicing allows operators to establish different capabilities for each service group and to serve multiple tenants in parallel. SDN will play an important role in shifting to dynamic network slicing [START_REF] Ordonez-Lucena | Network Slicing for 5G with SDN/NFV: Concepts, Architectures, and Challenges[END_REF],110,[START_REF] Hakiri | Leveraging SDN for the 5G networks: trends, prospects and challenges[END_REF]. The decoupling of the control and forwarding planes leads to the separation of software from hardware, a concept that allows the infrastructure to be shared between different tenants, each one using one or several slices of the network. In [112], the "dynamic programmability and control" brought by SDN is presented as one of the key principles guiding dynamic network slicing. In this work the authors argue that "the dynamic programming of network slices can be accomplished either by custom programs or within an automation framework driven by analytics and machine learning." Applying the BYOC concept to 5G networks leads to externalizing the control of one or several slices to a GC owned or managed by a customer, an Over The Top (OTT) player, or an OSS. We argue that this openness is totally in line with the dynamic programmability and control principle of 5G networks presented in [112]. The innovative algorithms implemented within the GC controlling the slice of the network empower promising value-added services and business models. However, this externalization creates some management and orchestration issues presented previously in [START_REF] Ordonez-Lucena | Network Slicing for 5G with SDN/NFV: Concepts, Architectures, and Challenges[END_REF].
Résumé
Nous proposons de valoriser notre Framework en introduisant un modèle original de contrôle appelé BYOC (Bring Your Own Control) qui formalise, selon différentes modalités, la capacité d'externaliser un service à la demande par la délégation d'une partie de son contrôle à un tiers externe.
Un service externalisé à la demande est structurée en une partie client et une partie SP. Cette dernière expose à la partie client des API qui permettent de demander l'exécution des actions induites par les différentes étapes du cycle de vie. Nous illustrons notre approche par l'ouverture d'une API BYOC sécurisée basée sur XMPP. La nature asynchrone de ce protocole ainsi que ses fonctions de sécurité natives facilitent l'externalisation du contrôle dans un environnement SDN multi-tenant. Nous illustrons la faisabilité de notre approche par l'exemple du service IPS (système de prévention d'intrusion) décliné en BYOC. Mots clefs : Réseau logiciel programmable, Interface nord, Interface de programmation applicative, Apporter votre propre contrôle, Externalisation / Délégation, Multi-client Abstract Over the past decades, Service Providers (SPs) have been crossed through several generations of technologies redefining networks and requiring new business models. The ongoing network transformation brings the opportunity for service innovation while reducing costs and mitigating the locking of suppliers. Digitalization and recent virtualization are changing the service management methods, traditional network services are shifting towards new on-demand network services. These ones allow customers to deploy and manage their services independently and optimally through a well-defined interface opened to the SP's platform. To offer this freedom to its customers, the SP must be able to rely on a dynamic and programmable network control platform. We argue in this thesis that this platform can be provided by Software-Defined Networking (SDN) technology. We first characterize the perimeter of this class of new services. We identify the weakest management constraints that such services should meet and we integrate them in an abstract model structuring their lifecycle. This one involves two loosely coupled views, one specific to the customer and the other one to the SP. This double-sided service lifecycle is finally refined with a data model completing each of its steps. The SDN architecture does not support all stages of the previous lifecycle. We extend it through an original Framework allowing the management of all the steps identified in the lifecycle. This Framework is organized around a service orchestrator and a resource orchestrator communicating via an internal interface. Its implementation requires an encapsulation of the SDN controller. The example of the MPLS VPN serves as a guideline to illustrate our approach. A PoC based on the OpenDaylight controller targeting the main parts of the Framework is proposed. We propose to value our Framework by introducing a new and original control model called BYOC (Bring Your Own Control) which formalizes, according to various modalities, the capability of outsourcing an ondemand service by the delegation of part of its control to an external third party. An outsourced on-demand service is divided into a customer part and an SP one. The latter exposes to the former APIs which allow requesting the execution of the actions involved in the different steps of the lifecycle. We present an XMPP-based Northbound Interface (NBI) allowing opening up a secured BYOCenabled API. The asynchronous nature of this protocol together with its integrated security functions, eases the outsourcing of control into a multi-tenant SDN framework. We illustrate the feasibility of our approach through a BYOC-based Intrusion Prevention System (IPS) service example. 
Keywords: Sofware Defined Networking, Northbound Interface, API, Bring Your Own Control, Outsourcing, Multi-tenancy 2. 1 SDNIntroduction 1 Controllers and their NBI . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 MPLS VPN configuration parameters . . . . . . . . . . . . . . . . . . . . . . . 3.2 MPLS VPN configuration parameters accessible via OpenDaylight API . . . . 3.3 MPLS VPN configuration parameters accessible via OpenDaylight VPN Service project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 Service lifecycle phases and their related data models and transformation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Au cours des dernières décennies, les fournisseurs de services (SP) ont eu à gérer plusieurs générations de technologies redéfinissant les réseaux et nécessitant de nouveaux modèles économiques. L'équilibre financier d'un SP dépend principalement des capacités de son réseau qui est valorisé par sa fiabilité, sa disponibilité et sa capacité à fournir de nouveaux services. La croissance des demandes d'accès au réseau conduisent les fournisseurs de services à rechercher des solutions rentables pour y répondre tout en réduisant la complexité et le cout du réseau et en accélérant l'innovation de service. Le réseau d'un opérateur est conçu sur la base d'équipements soigneusement développés, testés et configurés. En raison des enjeux critiques liés à ce réseau, les opérateurs limitent autant que faire se peut, sa modifications. Les éléments matériels, les protocoles et les services nécessitent plusieurs années de standardisation avant d'être intégrés dans les équipements par les fournisseurs. Ce verrouillage matériel réduit la capacité des fournisseurs de services à innover, intégrer et développer de nouveaux services. La transformation du réseau offre la possibilité d'innover en matière de service tout en réduisant les coûts et en atténuant les restrictions imposées par les équipementiers. Transformation signifie qu'il est possible d'optimiser l'exploitation des capacités du réseau grâce à la puissance des applications pour finalement donner au réseau du fournisseur de services une dimension de plate-forme de prestation de services numériques. L'émergence récente de la technologie Software Defined Networking (SDN) accompagné du modèle Network Function Virtualisation (NFV) permettent d'envisager l'accélération de la transformation du réseau. La promesse de ces approches se décline en terme de flexibilité et d'agilité du réseau tout en créant des solutions rentables. Le concept SDN introduit la possibilité de découpler les fonctionnalités de contrôle et de réacheminement des équipements réseau en plaçant les premières sur une unité centrale appelée contrôleur. Cette séparation permet de contrôler le réseau à partir d'une couche applicative centralisé, ce qui simplifie les tâches de contrôle et de gestion du réseau. De plus la programmabilité du contrôleur accélère la transformation du réseau des fournisseurs de services. niveaux d'abstraction au client, chacun permettant d'offrir une partie des capacités nécessaires pour un service à la demande. Découpler le plan de contrôle et le plan de données des réseaux MPLS et localiser le premier dans un contrôleur apporte plusieurs avantages en termes de gestion de service, d'agilité de service et de contrôle de la QoS. La couche de contrôle centralisée offre une interface de gestion de service disponible pour le client. 
Néanmoins, cette localisation et cette ouverture peuvent créer plusieurs défis. Le backone MPLS est un environnement partagé entre les clients d'un opérateur. Pour déployer un réseau VPN l'opérateur configure un ensemble de périphériques, situés en coeur et en bordure de réseau. Ces équipements fournissent ainsi e, parallèle plusieurs services aux clients qui utilisent la connexion VPN comme moyen de transaction fiable de leurs données confidentielles. L'externalisation du plan de contrôle vers un contrôleur SDN (SDNC) apporte beaucoup de visibilité sur le trafic échangé au sein du réseau. C'est grâce à l'interface nord (NBI) de ce xxiii contrôleur qu'un client peut créer un service à la demande et gérer ce service dynamiquement. La granularité de cette information donne au client plus de liberté dans la création et la gestion de son service. xxiv Le cycle de vie du service côté client géré par ce type d'applications contient deux étapes principales : de service jusqu'à sa configuration et sa surveillance, et le second gère toutes les opérations basées sur les ressources. Ces deux familles gérant ensemble tout le cycle de vie du service côté opérateur. Ce framework est composé de deux couches d'orchestration principales : -Orchestrateur de service (SO) -Orchestrateur de ressource (RO) L' "Orchestrateur de service" sera dédié aux opérations de la partie service et est conforme au cycle de vie du service côté opérateur : -Demande de service -Décomposition de service, compilation -Configuration de service xxvi -Maintenance et surveillance de service -Mise à jour de service -Retrait de service Cet orchestrateur reçoit les ordres de service et initie le cycle de vie du service en décomposant les demandes de service complexes et de haut niveau en modèles de service élémentaires. Figure 2 . 2 Figure 2.4 shows a simplified view of the SDN's architecture based on this separation. FIGURE 2 . 5 : 25 FIGURE 2.5: OpenFlow Protocol in practice Fig. 2 . 2 [START_REF] Han | Network function virtualization: Challenges and opportunities for innovations[END_REF] shows three main components of this switch: Fig. 2 . 2 Fig.2.9 illustrates our first analysis of different controllers, their core modules, NBI and applications. Proposing all control function required for implementing a service, may rely on the use of several SDNC. Managing the lifeceycle of a service also requires the use of several APIs proposed through the NBI. FIGURE 2 . 2 FIGURE 2.10: The MANO architecture proposed by ETSI (source [62]) FIGURE 3 . 1 : 31 FIGURE 3.1: Label Switch Routers (LSR)s Fig. 3 . 3 Fig. 3.2 shows the topology of a simple MPLS network. The network is designed with three types of equipment: 3. 1 . 1 Introduction to MPLS networks 27 the destination CE is connected directly to him. It then pops the label and forwards the IPv4 packet to the CE1. FIGURE 3 . 5 :Fig. 3 . 353 FIGURE 3.5: OpenContrail control plane architecture (source [69]) 3. 2 . 2 figure vRouters, control node uses the Extensible Messaging and Presence Protocol (XMPP) based interface. These nodes communicate with other control nodes using their east-west interfaces implemented in BGP.OpenContrail is a suitable solution used to interconnect VMs within one or multiple data centers. VMs are initiated inside a compute node that are general-purpose virtualized servers. Each compute node contains a vRouter implementing the forwarding plane of the OpenContrail architecture. 
Each VM contains one or several Virtual Network Interface Cart (vNIC)s, and each vNIC is connected to a vRouter's tap interface. In this architecture, the link connecting the VM to the tap interface is equivalent to the CE-PE link of VPN service. This interface is dynamically created as soon as the VM is spawned.In OpenContrail proposed architecture, XMPP performs the same function as MP-BGP in signaling overlay networks. After joining spawning a VM the vRouter assigns an MPLS label to the related tap interface connected to the VM. Next, it advertises the network prefix and the label to the control node, using a XMPP Publish Request message. This message, going from the vRouter to the Control node is equivalent to a BGP update from both semantic and structural point of view. The Control node, acts like a Route Reflector (RR) that centralizes route signaling and sends routes from one vRouter to another one by an XMPP Update Notification.Proposed OpenContrail architecture and its complementary blocs provide a turnkey solution suitable for public and private clouds. However, this solution covers mostly data center oriented use cases based on specific forwarding devices, called vRouters. The XMPP-based interface used by the latter creates "technological dependency" and reduces the openness of the solution, while the XMPP is not a commune interface usable by other existing SDN controllers. 3 . 2 . 2 . 1 MPLS 3221 Configuring and controlling MPLS networks via SDN controllers is one of challenges. Nowadays, SDN controllers propose to externalize MPLS control plane inside modules some of which are implemented within the controller or application layer. The work[START_REF] Ali | MPLS-TE and MPLS VPNS with Openflow[END_REF] proposes the implementation of MPLS Traffic Engineering (MPLS-TE) and MPLS-based VPN using OpenFlow and NOX. In this work authors discuss how the implementation of MPLS control plane becomes simple thanks to the consistent and up to date topology map of the controller. This externalization is done though the network applications implemented at the top of the SDN controllers, applications like Traffic Engineering (TE), Routing, VPN, Discovery, and Label Distribution. This work is an initiative to SDN-based MPLS networks, and the internal architecture of SDN controller and APIs allowing to configure an MPLS network is not explained in details. Chapter 3. SDN-based Outsourcing Of A Network Service In order to analyze the internal architecture of controllers proposing the deployment of MPLS networks, and also to study SDN APIs allowing to configure the underlying network, we try to orient our studies on one of the most developed open source SDN controllers, OpenDaylight. Networks in OpenDaylight controller With its large development community and it various projects the OpenDaylight controller is one of the most popular controllers in the SDN academic and open source world. Open-Daylight is an open source project under the Linux Foundation based on the microservices architecture. In this architecture, each core function of the controller is a microservice which can be activated or deactivated dynamically. OpenDaylight supports a large number of network protocols beyond OpenFlow, such as NETCONF, OVSDB, BGP, and SNMP. FIGURE 3 . 6 : 36 FIGURE 3.6: OpenDaylight Architecture (source [32]) : Maintains the state of the Forwarding Information Base (FIB) that associates routes and NextHop for each VRF. This information is sent to the OVS by OpenFlow. 
The VPN Service project provides the NBI APIs for deploying an L3 VPN for the Data Center (DC) Cloud environment. FIGURE 3 . 8 : 2 . 3 . 3823 FIGURE 3.8: VPN Configuration Using OpenDaylight VPN Service projetct Contents 1 . 1 1 1. 2 2 1. 3 2 1. 4 3 1. 5 1112232435 Thesis context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Motivation and background . . . . . . . . . . . . . . . . . . . . . . . . . . . Problem statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Contributions of this thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . Document structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Chapter 4 . 4 Service lifecycle and Service Data Model following these two views simplifies the service abstraction design. The first viewpoint allows us to identify the APIs structuring the NBI and shared by both actors (operator and service consumer). Fig. 4 . 4 Fig. 4.1 illustrates the client-side service lifecycle managed by this type of applications, containing two main steps: -Service creation: The application specifies the service characteristics it needs, it negotiates the associated SLA which will be available for limited duration and finally it requests a new service creation. In the reminder of the text we will mark it [Service creation]. -Service retirement: The application retires the service at the end of the negotiated duration. This step defines the end of the service life. In the reminder of the text we will mark it [Service retirement]. FIGURE 4 . 1 : 41 FIGURE 4.1: Client side Service Lifecycle of Type-1 applications Fig. 4 . 4 2 illustrates the supplementary step added by this type of applications to the client-side service lifecycle. This lifecycle contains three main steps: -Service creation: [Service creation] cf. Section 4.1.1.1. Chapter 4 . 4 Service lifecycle and Service Data Model monitored thanks to the upcoming events and statics sent from the SDNC to the application. In the reminder of the text we will mark it [Service monitoring]. -Service retirement: [Service retirement] cf. Section 4.1.1.1. FIGURE 4 . 2 : 42 FIGURE 4.2: Client side Service Lifecycle of Type-2 applications .3 and contains four main steps: -Service creation: [Service creation] cf. Section 4.1.1.1. -Service monitoring: [Service monitoring] cf. Section 4.1.1.2. -Service modification: Upcoming events and statistics may trigger an algorithm implemented inside the application (implemented at the top of the SDNC), the output of which reconfigures the underlying network resources through the SDNC. In the reminder of the text we will mark it [Service modification]. -Service retirement: [Service retirement] cf. Section 4.1.1.1. FIGURE 4 . 3 : 43 FIGURE 4.3: Client side Service Lifecycle of Type-3 applications FIGURE 4 . 4 : 44 FIGURE 4.4: Global Client Side Service Lifecycle 4. 1 . 1 1 and 4.1.2. The Fig. 4.6 illustrates the interactions between these two service lifecycles. During the service run-time the client and the operator interact with each other using the NBI. This interface interconnects different phases of each part, as described below: -Service Creation and Modification ↔ Service Request, Decomposition, Compilation and Configuration: the client-side service creation and specification phase leads to three first phases of the service lifecycle in the operator side; service request, decomposition, compilation and configuration. TABLE 4 . 
Consequently, the service model to device model transformation is a top-down model transformation. Otherwise, if the service retirement is triggered by the service operator, a new service model should be presented to the customer; this requires a bottom-up model transformation done with one of the methods explained above.
TABLE 4.1: Service lifecycle phases and their related data models and transformation methods (SM - Service model, DM - Device model, D - Declarative, I - Imperative)
In Chapter 5 we introduce the set of functions needed to manage any network service conforming to the service lifecycle model presented in the previous chapter. We organize this set of functions in two orchestrators: one dedicated exclusively to the management of the resources, the resource orchestrator, and the other one grouping the remaining functions, the service orchestrator. The general framework structuring this internal architecture is presented in Section 5.2 and illustrated with an example. This framework is externally limited by the NBI and the SBI, and internally clarifies the border between the two orchestrators by identifying an internal interface between them, called the middle interface.
The resource orchestrator then uses the SDNC NBI to program these resources with the newly generated instructions (arrows 6, 10, 12 of Fig. 5.2). Finally, at the end of the service deployment, the client is informed about the service implementation through the SRM (arrows "Service Creation Resp." of Fig. 5.2).
Fig. 5.11 shows the negotiated MPLS VPN service model requested by the customer. In this model the customer requests the creation of a VPN connection between three remote sites, each one connected to a CE (ce1, ce2, and ce3). Fig. 5.12 shows the simplified algorithm implemented within the MPLS_vpn_transformer.
FIGURE 5.12: Implemented MPLS VPN transformer simplified algorithm
Finally, we described in Section 5.3 the implementation of the main components of the proposed framework based on the OpenDaylight controller and the Mininet platform. In this prototype we study the service data model transformation, discussed in Section 4.2, through a simple MPLS VPN service deployment.
The NBI refers to the software interfaces between the controller and the applications running atop it. These are abstracted through the application layer, consisting of a set of applications and management systems acting upon the network behavior at the top of the SDN stack through the NBI. The centralized nature of this architecture brings large benefits to the network management domain, including third-party access to network programmability. Network applications, on the highest layer of the architecture, achieve the desired network behavior without knowledge of the detailed physical network configuration.
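Looking back at the MPLS_vpn_transformer mentioned above, the sketch below illustrates the top-down transformation idea: a hypothetical service-layer VPN model (VPN name, AS number, list of sites) is turned into per-PE device-layer VRF parameters (VRF name, RD, RT). The field names and the RD/RT numbering scheme are assumptions made for illustration; they do not reproduce the actual transformer algorithm.

# Illustrative sketch of a service-model -> device-model transformation for an
# MPLS VPN. Field names and the RD/RT numbering scheme are assumptions.
def transform_vpn_service(service: dict) -> list:
    """Map one service-layer VPN model onto per-PE device-layer configurations."""
    asn = service["as_number"]
    vpn_id = service["vpn_id"]
    device_models = []
    for site in service["sites"]:
        device_models.append({
            "pe": site["pe"],                          # PE hosting the VRF
            "vrf_name": f'{service["vpn_name"]}_{site["ce"]}',
            "rd": f"{asn}:{vpn_id}",                   # route distinguisher
            "rt_import": [f"{asn}:{vpn_id}"],          # route targets
            "rt_export": [f"{asn}:{vpn_id}"],
            "ce_interface": site["ce_interface"],
        })
    return device_models

service_model = {
    "vpn_name": "customerA", "vpn_id": 10, "as_number": 65000,
    "sites": [
        {"ce": "ce1", "pe": "pe1", "ce_interface": "eth1"},
        {"ce": "ce2", "pe": "pe2", "ce_interface": "eth1"},
        {"ce": "ce3", "pe": "pe2", "ce_interface": "eth2"},
    ],
}
for cfg in transform_vpn_service(service_model):
    print(cfg)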
The implementation of the NBI relies on the level of network abstraction to be provided to the application and on the type of control that the application brings to the controller, called the SO in our work. The NBI appears as a natural administrative border between the SDN Orchestrator, managed by an operator, and its potential clients residing in the application layer. We argue that providing the operator with the capability of mastering SDN's openness on its northbound side should largely be profitable to both operator and clients. We introduce such a capability through the concept of BYOC: Bring Your Own Control, which consists in delegating all or part of the network control or management role to a third-party application called the Guest Controller (GC), owned by an external client. An overall structure of this concept is presented in Fig. 6.1, which shows the logical position of the Bring Your Own Control (BYOC) application in the traditional SDN architecture, spanning part of the Control Layer and part of the Application Layer. Figure 6.4 illustrates the communication between the components of the system in detail.
FIGURE 7.6: A model of the Decision Base implemented within the SO
FIGURE 7.7: Decision Engine task flowchart
The service model is a general and simplified model of the service presented to the service customer, and the device model is the technical definition of the device configuration generated from the negotiated service model. The service object, shared between the operator and the customer, is described in the service model. Consequently, the client-side service lifecycle uses the service model and all phases of that lifecycle are based on this model. The service model crosses down the operator-side lifecycle and is transformed into one or several device models. In Section 4.2 we discuss the model used or generated by each operator-side service lifecycle phase. In that section we also discussed the transformation type each step may apply to convert a model from a service model to a device one.
- A service management framework based on the SDN paradigm (Chapter 5): The service lifecycle analysis gives us a tool to determine all activities an operator should carry out to manage a service. In Chapter 5, based on the operator-side service lifecycle, we propose a framework through which a service model presented to the customer is transformed into device models deployed on resources. The architecture of this framework is based on a double-layered system managing the service lifecycle through two orchestrators: the service orchestrator and the resource orchestrator. The first one puts together all functions allowing the operator to manage a service vertically, and the second one manages the resources needed by the first one to deploy a service.
- Bring Your Own Control (BYOC) service (Chapter 6): The proposed framework gives rise to a system deploying and managing services and opens an interface to the customers' side. In this chapter we present a new service control model, called Bring Your Own Control (BYOC), that follows the Type-3 application model discussed in Section 2.5.2.
A classifier is placed at the entry of the SC, allowing traffic flows to be classified based on policies. This classification allows a path ID to be assigned to the flow, used to forward it on a specific path, called a Service Function Path (SFP).
In the ONF proposal the SDNC has a central role: it sets up SFPs by programming the Service Function Forwarders (SFFs) to steer the flow through the sequence of SF instances. It also locates and programs the flow classifier through the SBI, allowing a flow to be classified. Applying the BYOC concept to the approach proposed by ONF consists in opening a control interface between the SDNC and a GC that implements all the functions needed to classify the flow and to reconfigure the SFF and the flow classifier whenever new flows arrive at the classifier. Delegating the control of the SFC to the customer gives the customer more flexibility, visibility and freedom to create a flexible SFC based on its customized path computation algorithms and its application requirements. On the other hand, a BYOC-based SFC allows the Service Provider to lighten the service OpEx.
The first library uses a synchronous request/response interaction that allows the client-side application to send service requests and modifications, while the second one uses an asynchronous interaction in which a notification is sent to the subscribed service application. The asynchronous nature of this library makes it useful for sending control messages to the GC. The communication between the GC and the SO is based on the Push-and-Pull (PaP) algorithm essentially used in web applications; in this proposal we adapt this algorithm to determine the communication method of the NBI, which will use the publish/subscribe messaging paradigm. The importance of a service provisioning system rests on its NBI, which connects the service portal to the service provisioning system (SO) in an SDN environment. This interface provides an abstraction of the service layer and the essential functions to create, modify and destroy a service, and, as described above, it takes into account the externalized control units called GCs. This interface is a shared access point between different clients, each controlling specific services with a subscription to certain events and notifications.
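As a purely illustrative sketch of the classification step described above, a guest controller could map flows onto Service Function Paths as follows. The policy format, field names and SFP identifiers are assumptions made for the example, not the ONF specification.

# Illustrative sketch of flow classification for service function chaining:
# match a flow against policies and assign the Service Function Path (SFP) id.
# Policy structure and field names are assumptions.
from typing import Optional

POLICIES = [
    # (match criteria, SFP id)
    ({"proto": "tcp", "dst_port": 80}, 10),   # web traffic -> e.g. firewall + IDS + proxy chain
    ({"proto": "tcp", "dst_port": 25}, 20),   # mail traffic -> e.g. antispam chain
]

def classify(flow: dict) -> Optional[int]:
    """Return the SFP id of the first policy matching the flow, or None."""
    for match, sfp_id in POLICIES:
        if all(flow.get(k) == v for k, v in match.items()):
            return sfp_id
    return None

flow = {"src": "10.0.0.1", "dst": "192.0.2.7", "proto": "tcp", "dst_port": 80}
print(classify(flow))  # -> 10, the path id the SFFs use to steer this flow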
It is therefore important that this shared interface implement an isolated environment in order to provide multi-tenant access. This access should be controlled using an integrated authentication and authorization system. In our work, we introduce an NBI based on the XMPP protocol. This protocol was originally developed by the community as an instant messaging (IM) protocol. It uses a streaming technology to exchange XML elements, called stanzas, between two network entities, each identified by a unique JID identifier. The main reason for selecting this protocol to implement the NBI of the service provisioning system rests on its asynchronous interaction model which, thanks to its built-in push system, allows the implementation of a BYOC service.
In order to define an SDN-based service provisioning framework able to delimit the control and application layers, an analysis of the service lifecycle had to take place. We organized the service lifecycle analysis according to two points of view: client and operator. The first view concerns the client-side service lifecycle, which addresses the different phases a service client (or customer) can be in during the service lifecycle. This analysis is based on the classification of applications and services that we made previously. According to this classification, a service customer can use the service management interface to manage three types of services. The first one is the case where the customer requests and configures a service. The second type is the customer who monitors its service, and the third is the customer who, using the management interface, receives certain service parameters on the basis of which it reconfigures or updates this service. Based on this analysis, the client-side service lifecycle can be adapted, and we analyzed all the phases that each type of service could add to it. On the other hand, the operator-side service lifecycle analysis presents a lifecycle model representing all the phases an operator must go through to deploy, configure and maintain a service. This two-sided analysis makes it possible to determine the actions each service client and operator can perform on a service, which is the common object between a client and an operator.
We then presented the data model of each lifecycle based on a two-layered data model approach. In this approach, a service can be modeled by two data models, the service model and the device model, and an elementary model, called the transformation, defines how one of these two models can be transformed into the other. The service model is a general and simplified model of the service presented to the service customer, and the device model is the technical definition of the device configuration generated from the negotiated service model. The service object shared between the operator and the customer is described in the service model. Consequently, the client-side service lifecycle uses the service model and all phases of the lifecycle are based on this model. The service model crosses the operator-side lifecycle and is transformed into one or several resource models.
The service lifecycle analysis gives us a tool to determine all the activities an operator must carry out to manage a service. Based on the operator-side service lifecycle, we propose a framework through which a service model presented to the customer is transformed into resource models deployed on resources. The architecture of this framework rests on a two-layered system managing the service lifecycle via two orchestrators: the service orchestrator and the resource orchestrator. The first one groups all the functions allowing the operator to manage a service vertically, and the second one manages the resources the first one needs to deploy a service. The proposed framework gives rise to a system for deploying and managing services, and it opens an interface towards the clients' side. We present a new service control model, called Bring Your Own Control (BYOC), which follows the Type-3 application model. We introduce BYOC as a concept allowing the delegation, through the NBI, of the control of all or part of a service to an external controller, called the Guest Controller (GC). The latter may be managed by the same client requesting and consuming the service or by a third-party operator. Opening a control interface on the northbound side of the SDN platform requires certain specifications at the NBI level. In the remainder of our work we addressed the NBI requirements allowing the BYOC API to be opened, and on the basis of these requirements we proposed the use of XMPP as the protocol for deploying such an API. The analysis of the advantages of the BYOC concept, and of the complexity and security issues BYOC may bring to the service management process, can be the subject of future work. This analysis requires a more sophisticated study of the concept, of the potential business model it may introduce (e.g., Earn as You Bring, EaYB), of the methods and protocols used to implement the northbound interface and to control access to the resources exposed to the GC, and of the real impact of this type of services on service performance. Opening a BYOC-type control interface makes it possible to create new service models not only in the SDN domain but also in the NFV and 5G domains.
- A service management framework based on the SDN paradigm: The service lifecycle analysis gives us a tool to determine all activities an operator should do to manage a service. Based on the operator-side service lifecycle, we propose a framework through which a service model presented to the customer is transformed into device models deployed on resources. This framework is organized into two orchestrator systems, called respectively the Service Orchestrator and the Resource Orchestrator, interconnected by an internal interface. Our approach is illustrated through the analysis of the MPLS VPN service, and a Proof Of Concept (POC) of our framework based on the OpenDaylight controller is proposed.
- Bring Your Own Control (BYOC) service model: We illustrate our approach through the outsourcing of an Intrusion Prevention System (IPS) service. We exploit the proposed framework by introducing a new and original service control model called Bring Your Own Control (BYOC). It allows the customer or a third-party operator to participate in the service lifecycle following various modalities.
We analyse the characteristics of the interfaces allowing the deployment of a BYOC service.
FIGURE 2.6: OpenFlow Switch Components (source [23])
TABLE 2.1: SDN Controllers and their NBI
This question has been raised several times and a common conclusion is that northbound APIs are indeed important, but it is too early to define a single standard at this time [38, 39, 40].
The controller is structured into three modules: "Flow Manager", "Net Manager" and "Host Manager". The first one, Flow Manager, controls and routes flows based on a specific load-balancing algorithm implemented in this module. It also implements the necessary controller core functions and Layer-2 protocols such as the Dynamic Host Configuration Protocol (DHCP), the Address Resolution Protocol (ARP) and the Spanning Tree Protocol (STP). The second module, Net Manager, keeps track of the network topology, link usage and per-link packet latency. The third module, Host Manager, monitors the state of each host.
TABLE 3.1: MPLS VPN configuration parameters
TABLE 3.3: MPLS VPN configuration parameters accessible via the OpenDaylight VPN Service project (LAN IP address ✓, RT ✓, RD ✓, AS ✓, VRF name ✓, routing protocols ✓, VPN ID ✓, MPLS labels ✓)
The global lifecycle model builds on [START_REF] Kim | Improving network management with software defined networking[END_REF]. This model contains all the previous steps needed to manage the three types of applications discussed earlier.
In REST, each resource is identified by a unique Uniform Resource Locator (URL). Resources are the application's state and functionality, represented through a uniform interface that transfers state between the client and the server. Unlike most web services architectures, it is not necessary to use XML as the data interchange format in REST. The implementation of REST is standard-less, and the exchanged information can be formatted in XML, JavaScript Object Notation (JSON), Comma-Separated Values (CSV), plain text, Rich Site Summary (RSS) or even HyperText Markup Language (HTML); REST is format-agnostic. The simplicity, performance and scalability of REST are the reasons for its popularity in the SDN controller world; REST is easy to use and flexible.
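Since the choice between a synchronous RESTful NBI and an asynchronous push-based one is largely a question of signaling overhead, the following back-of-the-envelope sketch illustrates the trade-off examined in the measurements discussed next. The message sizes, rates and durations are illustrative assumptions, not the measured values.

# Back-of-the-envelope comparison (illustrative assumptions only) of NBI overhead:
# a RESTful client must poll at a fixed interval to approximate real-time events,
# while a push-based (e.g. XMPP) NBI only sends traffic when an event occurs.
def polling_overhead(duration_s: float, poll_interval_s: float, bytes_per_poll: int) -> int:
    return int(duration_s / poll_interval_s) * bytes_per_poll

def push_overhead(n_events: int, bytes_per_event: int) -> int:
    return n_events * bytes_per_event

# Hypothetical numbers: a 60 s window, 10 events, about 500 B per message.
print(polling_overhead(60, 0.001, 500))  # ~30 MB when polling every millisecond
print(polling_overhead(60, 0.2, 500))    # ~150 kB when polling every 200 ms
print(push_overhead(10, 500))            # 5 kB with push, independent of any interval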
In order to interact with Web Services, no expensive tools are required. Compared to our requirements explained in Section 6.2.1, the fundamental limitation of this method rests on the absence of asynchronous capabilities and of a secured multi-access management. Traditional Web Services solutions, such as the Simple Object Access Protocol (SOAP), have previously been used to specify the NBI but were quickly abandoned in favor of the RESTful approach.
Fig. 6.6 shows the overhead of the NBI obtained during this test. The NBI overhead remains constant for the XMPP-based case and varies for the RESTful one. The overhead of the RESTful NBI in the simulated real-time case (less than 1 ms of delay) is about 3 MB. To reduce this overhead and reach the same level as the XMPP-based NBI, we need to increase the polling interval up to 200 ms. This time interval has a direct effect on the event transfer delay.
FIGURE 6.6: NBI Overhead Charge (overhead in bytes versus polling interval in ms, REST versus XMPP)
There were several initiatives to define the perimeter of the SDN architecture layers: several SDN controllers were in the design and development phase, and the controllers and frameworks already developed were each deployed for specific research topics. Some SDN-based services were deployed by internal SDN controller functions, and some were controlled by applications developed on top of the controller, programming the network via the controller's NBI. Due to the nature of these ongoing projects, and to the lack of a clear definition of SDN controller core functions and northbound applications, defining the border between these two layers, i.e. the SDN controller and the SDN applications, delimiting their perimeter and aiming to define an NBI, was almost impossible. The ONF had just started the NBI group activities aiming to define an NBI answering the requirements of most applications. However, this work was far from being realized, because defining a standard NBI, that is, an application interface, requires a careful analysis of several implementations and the feedback gained from all those implementations.
In order to define a reference SDN-based service provisioning framework allowing to define the edge between the control and application layers, a service lifecycle analysis had to take place. First, in Section 4.1, we presented the service lifecycle analysis from two points of view: client and operator. The first view, the client-side service lifecycle, discusses the different phases in which a service customer (or client) can be during the service lifecycle. This analysis is based on the application and service classification previously done in Section 2.5.2. According to this classification, a service customer can use the service management interface to manage three types of services. The first one is the case where the customer requests and configures a service.
Over the last decades, Service Providers (SPs) have had to manage several generations of technologies redefining networks and requiring new business models.
This continuous network evolution offers the SP the opportunity to innovate with new services while reducing costs and limiting its dependency on equipment vendors. The recent emergence of the virtualization paradigm deeply modifies the way network services are managed. These services evolve towards the integration of an "on-demand" capability whose particularity is to allow the SP's customers to deploy and manage them autonomously and optimally. To offer such operational flexibility, the SP must be able to rely on a management platform allowing dynamic and programmable control of the network. We show in this thesis that such a platform can be provided thanks to Software-Defined Networking (SDN) technology.
We first propose a characterization of the class of on-demand network services. The weakest management constraints these services must satisfy are identified and integrated into an abstract model of their lifecycle. This model determines two loosely coupled views, one specific to the client and the other to the SP. This lifecycle is completed by a data model specifying each of its steps.
The SDN architecture does not support all the steps of the above lifecycle. We introduce an original framework that encapsulates the SDN controller and allows the management of all the lifecycle steps. This framework is organized around a service orchestrator and a resource orchestrator communicating through an internal interface. The MPLS VPN example serves as a common thread to illustrate our approach. A PoC based on the OpenDaylight controller targeting the main parts of the framework is proposed.
Besides maintaining the agility of service management, we propose to introduce a service management framework beyond the SDNC. This framework provides an overall view of all functions and controls. We strengthen this framework by adding the question of the client's access to the managed resources and services. Indeed, this framework must be able to provide a variable granularity, able to manage all types of services:
- Type-1 applications: The abstract service model exposed by the framework's NBI allows the client-side application to configure a service with a minimum of information exchanged between the application and the framework. The restricted access provided by the framework prevents unintentional or intentional data leaks and service misconfiguration.
- Type-2 applications: On the southbound side, the internal blocks of the framework receive network events coming directly from the resources, or indirectly via the SDNC. On the northbound side, these blocks open an API to the applications allowing them to subscribe to certain metrics used for monitoring purposes. Based on the events reported by the resources, these metrics are computed by the internal blocks of the framework and sent to the appropriate application.
- Type-3 applications: The controlled access to the SDN-based functions provided by the framework offers not only a service management API, but also a service control interface open to the client application. The fine-grained control API allows the clients to have low-level access to the network resources through the framework.
Using this API, the clients receive the network events sent by the equipment, based on which they reconfigure the service. In order to provide a framework able to implement the aforementioned APIs, we have to analyze the service lifecycle in detail. This analysis leads to the identification of all the internal blocks of the framework and of their internal articulations, allowing both the exposure of a service and control API and the deployment, allocation and configuration of resources.
Service lifecycle and service data model. In order to reduce the complexity of lifecycle management, we split the global service lifecycle into two complementary points of view: the client's view and the operator's view. Each of the two views captures only the information useful to the associated actor; the global view can nevertheless be obtained by composing the two partial views. Based on the application classification addressed in our studies, we analyze the client-side service lifecycle for the three main types of applications. Type-1 applications consist of applications creating a network service using the NBI. This category neither monitors nor modifies the service according to network events.
BYOC should clearly make it possible to reduce the processing load of the controller. Indeed, existing SDN architectures and proposals centralize most of the network control and decision logic in a single entity. The latter has to bear a significant load by providing a large number of services, all deployed in the same entity. Such complexity is clearly a problem that BYOC can help solve by externalizing part of the control to a third-party application. Preserving the confidentiality of the service client application is another important point brought by BYOC. In fact, centralizing the network control in one system and passing all data through this controller can create confidentiality issues that may prevent the end user, whom we call the SC, from using the SDNC. Last but not least, BYOC can help the operator substantially refine its SDN-based business model by delegating an almost "à la carte" control through dedicated APIs. Such an approach can be exploited intelligently following the new "Earn as you bring" (EaYB) paradigm that we present and describe below. Indeed, an external client owning a sophisticated proprietary algorithm may want to commercialize the associated specialized processing to other clients via the SDN operator, which could then appear as a broker of this type of capability. It should be emphasized that these advantages of BYOC may partly be offset by the non-trivial task of verifying the validity of the decisions taken by the externalized intelligence, which must at least comply with the various policies implemented by the operator in the controller. This point, which deserves more investigation, could be the subject of future research.
Externalizing part of the management and control tasks modifies the service lifecycle model. It indeed amounts to transferring to the client side parts of certain tasks initially belonging to the operator.
A careful analysis allows us to identify the compilation and monitoring tasks, performed at the level of the operator-side service lifecycle, as potentially interesting candidates, parts of which can be delegated to the GC. The GC is connected to the SO through the NBI. This is where the service operator communicates with the service customer and sometimes with the client-side applications, the orchestrators and the GCs. In order to realize these functionalities, certain libraries should be implemented. The latter support two categories of tasks: 1) service creation, configuration and modification, and 2) service monitoring and BYOC service control. The first one uses a synchronous interaction implementing a simple request/response communication. This aspect is not mentioned in this figure because it falls outside the scope of the service lifecycle.
To formalize the data model on each layer and the transformation allowing to map one layer onto the other, Moberg et al. [START_REF] Moberg | A two-layered data model approach for network services[END_REF] proposed to use the YANG data model [START_REF] Bjorklund | YANG -A Data Modeling Language for the Network Configuration Protocol (NETCONF)[END_REF] as the formal modeling language. Using a unified YANG modeling for both the service and device layers was also previously proposed by Wallin et al. [START_REF] Wallin | Automating Network and Service Configuration Using NETCONF and YANG[END_REF]. The three elements required to model a service (i.e. the service layer, the device layer and the transformation model) are illustrated in [START_REF] Moberg | A two-layered data model approach for network services[END_REF], where the authors use this approach to specify IP VPN services. At the service layer the model consists of a formal description of the VPN services presented to the customer, derived from a list of VPN service parameters including the BGP AS number, the VPN name and a list of VPN endpoints (CE and/or PE). At the second layer the device model is the set of device configurations corresponding to a VPN service; it is defined by all the configurations done on the PEs connected to all requested endpoints. This information includes the PE IP address, RT, RD, BGP AS number, etc. Finally, for the third element, the transformation template mapping one layer onto the other, the authors propose the use of a declarative model. In the example the template is based on the Extensible Markup Language (XML) and, according to the service model parameters, generates a device model.
Applying the two-layered model approach to the Service Lifecycle
The proposed model based on the YANG data modeling language brings dynamicity and agility to the service management system. Its modular aspect allows reducing the costs of creating and modifying new services. In this section we apply this model to the proposed service lifecycle, discussed in Section 4.1. This analysis aims to introduce a model allowing to formalize the service lifecycle phases and their respective data models. An example of this analysis is presented in Table 4.1 at the end of this section.
Applying the two-layered model to the client-side service lifecycle
The client-side service representation is a minimal model containing informal information about the customer's service. All steps of the client-side service lifecycle rely on the negotiated service model, i.e.
the service-layer model of the two-layered approach (Section 4.2.1), which is a representation of the service and its components. From the operator-side point of view, the data model used by this service lifecycle rests on the service-layer model.
Applying the two-layered model to the operator-side service lifecycle
The integration of new services and the update of the service delivery platform involve the creation and update of the data models used by the client-side service lifecycle. Contrary to the client side, the operator-side service lifecycle relies on several data models.
Illustrating Service Deployment Example
The operator-side service lifecycle is presented in Section 4.1.2. This model represents all the processes an operator may take into account to manage a service. We introduce a service and resource management platform which encapsulates an SDNC and provides, through other functional modules, the capabilities to implement each step of the service lifecycle presented before. Fig. 5.1 illustrates this platform with the involved modules together with the specific data required and generated by each module. It shows the diversity of information needed to manage a service automatically. We will detail the different modules of this platform in the next section. We prefer now to illustrate this model by describing the main processes through the example of a VPN service connecting two remote sites of a client, attached to the physical routers PE1 and PE2. In MPLS networks, each CE is connected to a VRF instance hosted in a PE. In our example we call these instances (i.e. VRFs) respectively vRouter1 and vRouter2. The first step of the service lifecycle, the "Service Creation", gives rise in the nominal case to a call flow whose details are presented in Fig. 5.2.
In the last chapter we proposed a new service implementing BYOC-based services. In Chapter 4 we analyzed the lifecycle of a service based on the viewpoints of the two service actors, the client and the operator. Dividing the service lifecycle into two parts refines our analysis and helped us to present an SDN framework in Chapter 5, where we discussed a framework through which a negotiated service model can be vertically implemented. This framework is derived from the operator-side service lifecycle steps and permits not only to implement and control a service, but also to manage its lifecycle. The second part of the service lifecycle, the client-side one, presents all the steps that each application type, presented in Section 2.5.2, may take to deploy, monitor and reconfigure a service through the SDNC. In this chapter we briefly showed how the previously presented framework permits the deployment of a BYOC-type service. We also presented an XMPP-based NBI allowing the interface to be opened to the GC.
IPS Control Plane as a Service
Referenced architecture
The architecture of the proposed service is based on the Intrusion Prevention System (IPS) architecture divided into two entities. The first one is the Intrusion Detection System (IDS)-end, which is implemented at key points of the network and observes real-time traffic. The second one, called the Security Manager (SM), is a central management system. This database is comparable to the database normally used within firewalls, where there are actions like ACCEPT, REJECT, and DROP.
The difference between these and the DB presented for BYOC lies in the fields "IPProto" and "Action", where we record the type of message, log or alert (in the IPProto field), and the ID of the GC, gcId (in the Action field). The values stored in this database are configured by the network administrator (operator) that manages the entire infrastructure.
Service Dispatcher (SD) and NBI
The SD is the module directly accessible by the GCs. It identifies the GCs using the identifier of each one (gcId), registered in the Action field of the DB. We propose here to use the XMPP protocol to implement the interface between the SD and the GCs, where each endpoint is identified by a JID.
Detailed components of the Guest Controller (GC)
Fig. 7.8 shows the detailed components of the SM implemented in the GC. As stated above, the SD sends the log and alert messages arriving from the IDS-ends to the appropriate GC. These messages contain specific values (143, 144) in their IPProto field. When receiving a message, the SM needs to know the origin of this message, whether it is a log or an alert. For this we propose to implement a Security Proxy (SP). By examining the IPProto field, the SP decides whether a message relates to a log or to an alert.
Applying the GC decision on the infrastructure
Once a decision is made by the GC, it sends a service update message to the SDCM. This decision may update a series of devices. In our example, to block an attacking traffic, the decision only updates the OpenFlow switch installed in front of the IDS-end. This new configuration, deployed on the switch, allows the GC to block the inbound traffic entering the customer's sites (interface I.1 in Figure 7.5). The update message sent from the GC contains a service data model equivalent to the model presented in the service creation phase. Thanks to this homogeneity of models, the BYOC service update becomes transparent for the SO and the update process is done through the existing blocks of the SO.
Distributed IPS control plane
Opening a control interface on the IDS-end equipment through the SO allows the inner modules of the SM to be broken down between several GCs. Fig. 7.9 illustrates this example in detail. In this example, an attack signature database is shared between multiple SMs.
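A minimal sketch of the dispatching logic described above is given below. The IPProto values 143 and 144 come from the text; the gcId-to-JID mapping, the JIDs themselves and the function names are illustrative assumptions.

# Minimal sketch of the Service Dispatcher (SD) logic: route IDS-end log/alert
# messages to the Guest Controller (GC) registered for them. IPProto values 143
# and 144 follow the text; the gcId/JID mapping and names are assumptions.
LOG_PROTO, ALERT_PROTO = 143, 144

# Decision Base excerpt: which GC (by gcId) handles which message type.
DECISION_BASE = {
    LOG_PROTO: "gc-logs",
    ALERT_PROTO: "gc-security",
}

# Registered GCs, each identified by an XMPP JID.
GC_REGISTRY = {
    "gc-logs": "logs@guest-controllers.example.org",
    "gc-security": "security@guest-controllers.example.org",
}

def dispatch(message: dict) -> str:
    """Return the JID of the GC that must receive this IDS-end message."""
    gc_id = DECISION_BASE.get(message["ip_proto"])
    if gc_id is None:
        raise ValueError("no GC registered for IPProto %s" % message["ip_proto"])
    return GC_REGISTRY[gc_id]

print(dispatch({"ip_proto": ALERT_PROTO, "payload": "possible intrusion on I.1"}))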
183,007
[ "1043610" ]
[ "482801", "491313" ]
01758354
en
[ "spi" ]
2024/03/05 22:32:10
2017
https://pastel.hal.science/tel-01758354/file/65301_TANNE_2017_archivage.pdf
Abstract
Structural failure is often caused by the propagation of cracks in the initially sound material composing the structure. A thorough knowledge of fracture mechanics is essential for the engineer. It makes it possible to prevent cracking mechanisms, guaranteeing the integrity of civil structures, or to exploit them on purpose, as for instance in the oil and gas industry. From the modeling point of view these problems are similar, complex and difficult. It is thus fundamental to be able to predict where and when cracks propagate. This thesis is restricted to the study of brittle and ductile fractures in homogeneous materials under quasi-static loading. We adopt the macroscopic point of view, that is, the crack is a response of the structure to an excessive loading and is characterized by a surface of discontinuity of the displacement field. The most commonly accepted theory to model cracks is that of Griffith. It predicts crack initiation when the energy release rate reaches the material toughness along a pre-established path. This type of criterion requires evaluating the variation of the potential energy of the structure at equilibrium for an increment of crack length. The very essence of Griffith's theory is a competition between the surface energy and the potential energy of the structure. However, this model is not suited to weak notch singularities, i.e. a notch that does not degenerate into a pre-crack. To remedy this defect, critical-stress criteria have been developed for smooth geometries. Unfortunately they cannot correctly predict the initiation of a crack, since the stress is infinite at the notch tip. A second limitation of Griffith's theory is the size effect. To illustrate this point, consider a structure of unit size cut by a crack of length a. The critical loading of this structure scales as 1/√a; consequently the admissible loading becomes infinite as the defect size goes to zero. This makes no physical sense and contradicts experiments. It is known that this limitation comes from the absence of a critical stress (or of a characteristic length) in the model. To overcome this defect, Dugdale and Barenblatt proposed in their models to take into account cohesive stresses on the crack lips in order to eliminate the stress singularity at the notch tip.
More recently, variational phase-field models, also known as gradient damage models [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The Variational Approach to Fracture[END_REF], appeared in the early 2000s. These models remove the issues related to crack paths and are known to converge towards Griffith's model when the regularization parameter goes to 0. Moreover, numerical results show that it is possible to nucleate a crack without a singularity thanks to the presence of a critical stress. Are these phase-field models of fracture able to overcome the limitations of Griffith's model?
Concerning crack paths, phase-field models have proved to be remarkably efficient at predicting fracture networks under thermal shocks [START_REF] Sicsic | Initiation of a periodic array of cracks in the thermal shock problem: a gradient damage modeling[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF]. In this thesis, the results obtained show that gradient damage models are efficient at predicting crack nucleation in mode I and at accounting for size effects. Naturally these models satisfy the initiation criterion of Griffith's theory, and they are extended to hydraulic fracturing as illustrated in the second part of this thesis. However, they cannot account for ductile fracture as such. A coupling with perfect plasticity models is necessary in order to obtain ductile fracture mechanisms similar to those observed in metals. The manuscript is organized as follows. In the first chapter, a broad introduction is dedicated to the variational approach to fracture, going from Griffith to the modern phase-field approach and recalling its main properties. The second chapter studies crack nucleation in geometries for which no exact solution exists. U- and V-notches show that the critical loading evolves continuously from the critical-stress criterion to the toughness criterion with the notch singularity. The problem of an elliptical cavity in an elongated or infinite domain is also studied. The third chapter concentrates on hydraulic fracturing, taking into account the influence of a perfect fluid on the crack lips. The numerical results show that the stimulation by fluid injection of a network of parallel cracks of equal length leads to the propagation of a single crack of the network; it turns out that this configuration satisfies the principle of least energy. The fourth chapter focuses solely on the perfect plasticity model, going from the classical approach to the variational approach. A numerical implementation using an alternate minimization of the energy is described and verified on a simple Von Mises case. The last chapter couples gradient damage models with perfect plasticity models. The numerical simulations show that it is possible to obtain brittle-type or ductile-type cracks by varying a single parameter only. Moreover these simulations qualitatively capture the phenomenon of crack nucleation and propagation along shear bands.
Introduction
Structural failure is commonly due to the propagation of fractures in a sound material. A better understanding of defect mechanics is fundamental for engineers, to prevent cracks and preserve the integrity of civil structures, or to control them as desired, for instance in the energy industry. From the modeling point of view those problems are similar, complex and still face many challenges. Common issues are determining when and where cracks will propagate. In this work, the study is restricted to brittle and ductile fractures in homogeneous materials for rate-independent evolution problems in continuum mechanics.
We adopt the macroscopic point of view, such that the propagation of a macro-fracture represents a response of the structure geometry subjected to a loading. A fracture à la Griffith is a surface of discontinuity for the displacement field along which the stress vanishes. In this widely used theory the fracture initiates along an a priori path when the energy release rate becomes critical; this limit is given by the material toughness. This criterion requires one to quantify the first derivative of the potential energy with respect to the crack length for a structure at equilibrium. Many years of investigations were focused on notch tips to predict when the fracture initiates, resulting in a growing body of literature on computed stress intensity factors. Griffith's theory is by essence a competition between the surface energy and the recoverable bulk energy. Indeed, a crack increment reduces the potential energy of the structure while being compensated by the creation of surface energy. However, such a fracture criterion is not appropriate to account for weak singularities, i.e. a notch angle which does not degenerate into a crack. Conversely, many criteria based on a critical stress are adapted to smooth domains, but fail near stress singularities. Indeed, a nucleation criterion based solely on the pointwise maximum stress is unable to handle crack formation at the singularity point, where σ → ∞. A second limitation of Griffith's theory is the scale effect. To illustrate this, consider a structure of unit size cut by a pre-fracture of length a. The critical loading evolves as ∼ 1/√a; consequently the maximum admissible loading is not bounded when the defect size decays. Again this is physically impossible and inconsistent with experimental observations. It is well accepted that this discrepancy is due to the lack of a critical stress (or a critical length scale) in Griffith's theory. To overcome these issues in Griffith's theory, Dugdale and Barenblatt, pioneers of cohesive and ductile fracture theory, proposed to kill the stress singularity at the tip by accounting for stresses on the fracture lips.
Recently, many variational phase-field models [START_REF] Bourdin | The Variational Approach to Fracture[END_REF] have been shown to converge to a variational Griffith-like model in the vanishing limit of their regularization parameter. They were conceived to handle the issue of the crack path. Furthermore, it has been observed that they can lead to numerical solutions exhibiting crack nucleation without singularities. Naturally, these models raise some interesting questions: can Griffith's limitations be overcome by those phase-field models? Concerning the crack path, phase-field models have proved to be accurate in predicting fracture propagation for thermal shocks [START_REF] Sicsic | Initiation of a periodic array of cracks in the thermal shock problem: a gradient damage modeling[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF]. In this dissertation, numerical examples illustrate that Griffith's limitations, such as nucleation and size effects, can be overcome by the phase-field models referred to as gradient damage models in Chapter 2. Naturally these models preserve Griffith's propagation criterion, as shown by the extended models for hydraulic fracturing provided in Chapter 3.
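As a worked illustration of the size effect mentioned above, one can recall the standard center-crack result (a classical textbook computation, not an excerpt from this dissertation): for a crack of length 2a in a large plate under remote tension σ, with E' = E/(1-ν²) in plane strain,

% Griffith's criterion G = G_c with G = \pi \sigma^2 a / E' yields the critical stress:
\[
  G \;=\; \frac{\pi \sigma^2 a}{E'} \;=\; G_c
  \qquad\Longrightarrow\qquad
  \sigma_c \;=\; \sqrt{\frac{E' G_c}{\pi a}} \;\propto\; \frac{1}{\sqrt{a}},
\]
% so \sigma_c \to \infty as a \to 0: any load becomes admissible for a vanishing
% defect, which is exactly the scale-effect inconsistency discussed in the text.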
Of course Griffith's theory is unable to deal with ductile fractures, but in Chapter 5 we show that by coupling perfect plasticity with gradient damage models we are able to capture some ductile fracture features, namely the phenomenology of nucleation and propagation. The dissertation is organized as follows. In Chapter 1, a broad introduction to phase-field models of brittle fracture is presented. We start from Griffith's theory, move to the modern phase-field approach, and recall some of its properties. Chapter 2 studies crack nucleation in commonly encountered geometries for which closed-form solutions are not available. We use U- and V-notches to show that the nucleation load varies smoothly from that predicted by a strength criterion to that of a toughness criterion when the strength of the stress concentration or singularity varies. We present validation and verification of numerical simulations for both types of geometries. We consider the problem of an elliptic cavity in an infinite or elongated domain to show that variational phase-field models properly account for structural and material size effects. Chapter 3 focuses on fracture propagation in hydraulic fracturing: we extend the variational phase-field models to account for fluid pressure on the crack lips. We recover the closed-form solution of a perfect fluid injected into a single fracture. For stability reasons, in this example we control the total amount of injected fluid. We then consider the stimulation of a network of parallel fractures. The numerical results show that only a single crack grows, and this situation is always the best energy minimizer compared to a multi-fracking case where all fractures propagate. This loss of symmetry in the crack patterns illustrates the variational structure and the global minimization principle of the phase-field model. A third example deals with fracture stability in a pressure-driven laboratory test for rocks. The idea is to capture the different stability regimes using linear elastic fracture mechanics to properly design the experiment, and we test the ability of phase-field models to capture the fracture stability transition (from stable to unstable). Chapter 4 is concerned with variational perfect plasticity models and their implementation and verification. We start by recalling the main ingredients of the classical approach to perfect elasto-plasticity models and then recast them into the variational structure. The algorithmic strategy is then exposed, together with a verification example. The strength of the proposed algorithm is to solve perfect elasto-plastic materials by prescribing the yield surfaces without dealing with non-differentiability issues. Chapter 5 studies ductile fractures; the proposed model couples the gradient damage models with the perfect plasticity models exposed independently in Chapters 1 and 4. Numerical simulations show that the transition from brittle to ductile fracture is recovered by changing only one parameter. The ductile fracture phenomenology, such as crack initiation at the center and propagation along shear bands, is studied in plane-strain specimens and round bars in three dimensions. The main research contributions are in Chapters 2, 3 and 5. My apologies to the reader perusing the whole dissertation, which contains repetitive elements due to the self-consistency and independent construction of the chapters.
Chapter 1
Variational phase-field models of brittle fracture
In Griffith's theory, a crack in a brittle material is a surface of discontinuity for the displacement field, with vanishing stress along the fracture. Assuming an a priori known crack path, the fracture propagates when the first derivative of the potential energy with respect to the crack length, at equilibrium, becomes critical. This limit, called the fracture toughness, is a material property. The genius of Griffith was to link the crack length to the surface energy, so that the crack propagation condition becomes a competition between the surface energy and the recoverable bulk energy. By essence this criterion is variational and can be recast into a minimality principle. The idea of Francfort and Marigo in the variational approach to fracture [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF] is to keep Griffith's view and extend it to any possible crack geometry and to complex time evolutions. However, cracks remain unknown and a special method needs to be crafted. The approach is to approximate the fracture by a damage field with a non-zero thickness. In this region the material stiffness is deteriorated, leading to a decrease of the sustainable stresses. This stress-softening material model is mathematically ill-posed [START_REF] Comi | On localisation in ductile-brittle materials under compressive loadings[END_REF] due to a missing term limiting the thickness of the damage localization. Indeed, since the surface energy is proportional to the damage thickness, we can construct a broken bar without paying any surface energy, i.e. by letting the damaged area shrink. To overcome this issue, the idea is to regularize the surface energy. The adopted regularization takes its roots in Ambrosio and Tortorelli's functionals [START_REF] Ambrosio | Existence theory for a new class of variational problems[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF], inspired by Mumford and Shah's work [START_REF] Mumford | Optimal approximation by piecewise smooth functions and associated variational problem[END_REF] in image segmentation. Gradient damage models are closely related to Ambrosio and Tortorelli's functionals and have been adapted to brittle fracture. The introduction of a gradient damage term comes with a regularization parameter. This parameter, denoted ℓ, is also called the internal length and governs the damage thickness. Following Pham and Marigo [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Pham | From the onset of damage until the rupture: construction of the responses with damage localization for a general class of gradient damage models[END_REF], the damage evolution problem is built on three principles: damage irreversibility, stability and balance of the total energy. The beauty of the model is that the unknown discrete crack evolution is approximated by the evolution of a regularized functional which is intimately related to Griffith's model by its variational structure and its asymptotic behavior. This chapter is devoted to a broad introduction to gradient damage models, which constitute the basis of the numerical simulations performed in subsequent chapters.
The presentation is largely inspired by previous works of Bourdin-Maurini-Marigo-Francfort and many others. In the sequel, section 1.1 starts with the Griffith point of view and recasts the fracture evolution into a variational problem. By relaxing the pre-supposed crack path constraint in Griffith's theory, the Francfort and Marigo's variational approach to fracture models is retrieved. We refer the reader to [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF] for a complete exposition of the theory. Following the spirit of the variations principle, gradient damage models are introduced and constitute the basis of numerical simulations performed. Section 1.2 focuses on the application to a relevant one-dimensional problem which shows up multiple properties, such as, nucleation, critical admissible stress, size effects and optimal damage profile investigated previously by [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF]. To pass from a damage model to Griffith-like models, connections need to be highlighted, i.e letting the internal length to zero. Hence, section 1.3 is devoted to the Γ-convergence in one-dimensional setting, to show that gradient damage models behave asymptotically like Griffith. Finally, the implementation of such models is exposed in section 1.4 . Gradient damage models From Griffith model to its minimality principle The Griffith model can be settled as follow, consider a perfectly brittle-elastic material with A the Hooke's law tensor and G c the critical energy release rate occupying a region Ω ⊂ R n in the reference configuration. The domain is partially cut by a fracture set Γ of length l, which grows along an a priori path Γ. Along the fracture, no cohesive effects or contact lips are considered here, thus, it stands for stress free on Γ(l). The sound region Ω \ Γ is subject to a time dependent boundary displacement ū(t) on a Dirichlet part of its boundary ∂ D Ω and time stress dependent g(t) = σ •ν on the remainder ∂ N Ω = ∂Ω\∂ D Ω, where ν denotes the appropriate normal vector. Also, for the sake of simplicity, body force is neglected. The infinitesimal total deformation e(u) is the symmetrical part of the spatial gradient of the displacement field u such that e(u) = ∇u + ∇ T u 2 . In linear elasticity the free energy is a differentiable convex state function given by ψ e(u) = 1 2 Ae(u) : e(u) . Thereby, the stress-strain relation naturally follows σ = ∂ψ(e) ∂e = Ae(u). By the quasi-static assumption made, the cracked solid is, at each time, in elastic equilibrium with the loads that it supports at that time. The problem is finding the unknown displacement u = u(t, l) for a given t and l = l(t) that satisfies the following constitutive equations, 1.1. Gradient damage models          div σ =0 in Ω \ Γ(l) u =ū(t) on ∂ D Ω \ Γ(l) σ • ν =g(t) on ∂ N Ω σ • ν =0 on Γ(l) (1.1) At the time t and for l(t) let the kinematic field u(t, l) be at the equilibrium such that it solves (1.1). Hence, the potential energy can be computed and is composed of the elastic energy and the external work force, such that, P(t, l) = Ω\Γ(l) 1 2 Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 where dH n-1 denotes the Hausdorff n-1 -dimensional measure, i.e. its aggregate length in two dimensions or surface area in three dimensions. 
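In practice P(t, l) is rarely available in closed form, but it is easily evaluated numerically: under displacement control with no applied traction, the structural compliance C(l) gives P(t, l) = ū(t)²/(2C(l)), and the energy release rate G = -∂P/∂l can be approximated by finite differences. A minimal sketch follows, assuming a purely illustrative cubic compliance (the form of C(l) and all numerical values are assumptions made only for this example).

```python
import numpy as np

# Illustrative compliance C(l) of a cracked specimen under displacement control.
# The cubic form below is a hypothetical example, not a result from this work.
def compliance(l, C0=1.0, beta=2.0):
    return C0 * (1.0 + beta * l**3)

def potential_energy(t, l, ubar=1.0):
    # P(t, l) = u(t)^2 / (2 C(l)) with u(t) = t * ubar (hard device, no external work term)
    return (t * ubar) ** 2 / (2.0 * compliance(l))

def energy_release_rate(t, l, h=1e-6):
    # G(t, l) = -dP/dl, approximated by a centered finite difference
    return -(potential_energy(t, l + h) - potential_energy(t, l - h)) / (2.0 * h)

Gc, l0 = 1.0, 0.2
G_unit = energy_release_rate(1.0, l0)   # G scales as t^2 under proportional loading
print(f"G(1, l0) = {G_unit:.4f}, load factor at onset t = {np.sqrt(Gc / G_unit):.4f}")
```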
The evolution of the crack is given by Griffith's criterion: Definition 1 (Crack evolution by Griffith's criterion) i. Consider that the crack can only grow, this is the irreversibility condition, l(t) ≥ 0. ii. The stability condition says that the energy release rate G is bounded from above by its critical value G c , G(t, l) = - ∂P(t, l) ∂l ≤ G c . iii. The energy balance guarantee that the energy release rate is critical when the crack grows, G(t, l) -G c l = 0 Griffith says in his paper [START_REF] Griffith | The phenomena of rupture and flow in solids[END_REF], the "theorem of minimum potential energy" may be extended so as to of predicting the breaking loads of elastic solids, if account is taken of the increase of surface energy which occurs during the formation of cracks. Following Griffith, let us demonstrate that crack evolution criteria are optimality conditions of a total energy to minimize. Provided some regularity on P(t, l) and l(t), let formally the minimization problem be: for any loading time t such that the displacement u is at the Chapter 1. Variational phase-field models of brittle fracture equilibrium, find the crack length l which minimizes the total energy composed of the potential energy and the surface energy subject to irreversibility, min l≥l(t) P(t, l) + G c l (1.2) An optimal solution of the above constraint problem must satisfy the KKT1 conditions. A common methods consist in computing the Lagrangian, given by, L(t, l, λ) := P(t, l) + G c l + λ(l(t)l) (1.3) where λ denotes the Lagrange multiplier. Then, apply the necessary conditions, Substitute the Lagrange multiplier λ given by the stationarity into the dual feasibility and complementary slackness condition to recover the irreversibility, stability and energy balance of Griffith criterion. Futhermore, let the crack length l and the displacement u be an internal variables of a variational problem. Note that the displacement does not depend on l anymore. Provided a smooth enough displacement field and evolution of t → l(t) to ensure that calculations make sense, the evolution problem can be written as a minimality principle, such as, Definition 2 (Fracture evolution by minimality principle) Find stable evolutions of l(t), u(t) satisfying at all t: i. Initial conditions l(t 0 ) = l 0 and u(t 0 , l 0 ) = u 0 ii. l(t), u(t) is a minimizer of the total energy, E(t, l, u) = Ω\Γ(l) 1 2 Ae(u) : e(u)dx - ∂ N Ω g(t) • u dH n-1 + G c l (1.5) amongst all l ≥ l(t) and u ∈ C t := u ∈ H 1 (Ω \ Γ(l)) : u = ū(t) on ∂ D Ω \ Γ(l) . 1.1. Gradient damage models iii. The energy balance, E(t, l, u) = E(t 0 , l 0 , u 0 ) + t t 0 ∂ D Ω (σ • ν) • u dH n-1 - ∂ N Ω ġ(t) • u dH n-1 ds (1.6) One observes that stability and irreversibility have been substituted by minimality, and the energy balance takes a variational form. To justify this choice, we show first that irreversibility, stability and kinematic equilibrium are equivalent to the first order optimality conditions of E(t, l, u) for u and l separately. Then, followed by the equivalence of the energy balance adopted in the evolution by minimality principle and within Griffith criterion. Proof. For a fixed l, u is a local minimizer of E(t, l, u), if for all v ∈ H 1 0 (Ω \ Γ(l)), for some h > 0 small enough, such that u + hv ∈ C t , E(t, l, u + hv) = E(t, l, u) + hE (t, l, u) • v + o(h) ≥ E(t, l, u) (1.7) thus, E (t, l, u) • v ≥ 0 (1.8) where E (t, l, u) denotes the first Gateaux derivative of E at u in the direction v. 
By standard arguments of calculus of variations, one obtains, E (t, l, u) • v = Ω\Γ(l) 1 2 Ae(u) : e(v)dx - ∂ N Ω g(t) • v dH n-1 (1.9) Integrating the term in e(v) by parts over Ω \ Γ(l), and considering both faces of Γ(l) with opposites normals, one gets, E (t, l, u) • v = - Ω\Γ(l) div Ae(u) • v dx + ∂Ω Ae(u) • ν • v dH n-1 - Γ(l) Ae(u) • ν • v dH n-1 - ∂ N Ω g(t) • v dH n-1 (1. E (t, l, u) • v = - Ω\Γ(l) div Ae(u) • v dx + ∂ N Ω Ae(u) • ν -g(t) • v dH n-1 - Γ(l) Ae(u) • ν • v dH n-1 (1.11) Chapter 1. Variational phase-field models of brittle fracture Taking v = -v ∈ H 1 0 (Ω\Γ(l)) , the optimality condition leads to E (t, l, u)•v = 0. Formally by a localization argument taking v such that it is concentrated around boundary and zero almost everywhere, we obtain that all integrals must vanish for any v. Since the stress-strain relation is given by σ = Ae(u), we recover the equilibrium constitutive equations,          div Ae(u) = 0 in Ω \ Γ(l) u = ū(t) on ∂ D Ω \ Γ(l) Ae(u) • ν = g(t) on ∂ N Ω Ae(u) • ν = 0 on Γ(l) (1.12) Now consider u is given. For any l > 0 for some h > 0 small enough, such that l + h l ≥ l(t), the derivative of E(t, l, u) at l in the direction l is, E (t, l, u) • l ≥ 0 ∂P(t, l, u) ∂l + G c ≥ 0 (1.13) this becomes an equality, G(t, l, u) = G c when the fracture propagates. To complete the equivalence between minimality evolution principle and Griffith, let us verify the energy balance. Provided a smooth evolution of l, the time derivative of the right hand side equation (1.6) is, dE(t, l, u) dt = ∂ D Ω (σ • ν) • u dH n-1 - ∂ N Ω ġ(t) • u dH n-1 (1.14) and the explicit left hand side, dE(t, l, u) dt = E (t, l, u) • u + E (t, l, u) • l - ∂ N Ω ġ(t) • u dH n-1 . (1.15) The Gateaux derivative with respect to u have been calculated above, so E (t, l, u) • u stands for, E (t, l, u) • u = - Ω\Γ div(Ae(u)) • u dx + ∂ D Ω Ae(u) • ν • u dH n-1 + ∂ N Ω Ae(u) • ν • u dH n-1 - ∂ N Ω g(t) • u dH n-1 . (1.16) Since u respects the equilibrium and the admissibility u = u on ∂ D Ω, all kinematic contributions to the elastic body vanish and the energy balance condition becomes, E (t, l, u) • l = 0 ⇔ ∂P ∂l + G c l = 0 (1.17) Gradient damage models At this stage minimality principle is equivalent to Griffith criterion for smooth evolution of l(t). Let's give a graphical interpretation of that. Consider a domain partially cut by a pre-fracture of length l 0 subject to a monotonic increasing displacement load, such that, ū(t) = tū on ∂ D Ω and stress free on the remainder boundary part. Hence, the elastic energy is ψ e(tu) = t 2 2 Ae(u) : e(u) and the irreversibility is l ≥ l 0 . The fracture stability is given by t 2 ∂P (1, l) ∂l + G c ≥ 0 and for any loading t > 0, the energy release rate for a unit loading is bounded by G(1, l) ≤ G c /t 2 . Forbidden region The fracture evolution is smooth if G(1, l) is strictly decreasing in l, i.e. P(1, l) is strictly convex as illustrated on the Figure 1.1(left). Thus, stationarity and local minimality are equivalent. Let's imagine that material properties are not constant in the structure, simply consider the Young's modulus varying in the structure such that G(1, l) has a concave part, see Figure 1.1(right). Since G(1, l) is a deceasing function, the fracture grows smoothly by local minimality argument until being stuck in the local well for any loadings which is physically inconsistent. 
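This situation is easy to reproduce numerically. The sketch below assumes an illustrative unit-load energy P(1, l) whose release rate G(1, l) has a bump, and compares the crack length obtained by local stationarity (following the criterion from the current crack length) with the one obtained by global minimization of t²P(1, l) + G_c l under irreversibility; the functional form and all constants are arbitrary and chosen only for illustration.

```python
import numpy as np

# Illustrative unit-load elastic energy P(1, l): its release rate G(1, l) = -dP/dl
# is chosen non-monotone (a bump around l = 1.5); all constants are arbitrary.
ls = np.linspace(0.2, 3.0, 6000)
G1 = 1.0 / (1.0 + ls) + 0.35 * np.exp(-((ls - 1.5) / 0.15) ** 2)
P1 = -np.concatenate(([0.0], np.cumsum(0.5 * (G1[1:] + G1[:-1]) * np.diff(ls))))

Gc, l_loc, l_glob = 0.6, ls[0], ls[0]
for t in np.linspace(0.0, 2.0, 400):
    # local evolution: smallest admissible l >= l_loc with t^2 G(1, l) <= Gc
    ok = np.where((ls >= l_loc) & (t**2 * G1 <= Gc))[0]
    if ok.size:
        l_loc = ls[ok[0]]
    # global evolution: minimizer of t^2 P(1, l) + Gc * l over l >= l_glob
    adm = ls >= l_glob
    l_glob = ls[adm][np.argmin(t**2 * P1[adm] + Gc * ls[adm])]
    if l_glob - l_loc > 0.05:
        print(f"t = {t:.3f}: local l = {l_loc:.3f}, global l = {l_glob:.3f}"
              " (jump selected by global minimization)")
        break
```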
Conversely, considering global minimization allows up to a loading point, the nucleation of a crack in the material, leading to a jump of the fracture evolution. Extension to Francfort-Marigo's model In the previous analysis, the minimality principle adopted was a local minimization argument because it considers small perturbations of the energy. This requires a topology, which includes a concept of distance defining small transformations, whereas for global minimization principle it is topology-independent. Without going too deeply into details, arguments in favor of global minimizers are described below. Griffith's theory does not Chapter 1. Variational phase-field models of brittle fracture hold for a domain with a weak singularity. By weak singularity, we consider any free stress acute angle that does not degenerate into a crack (as opposed to strong singularity). For this problem, by using local minimization, stationary points lead to the elastic solution. The reason for this are that the concept of energy release rate is not defined for a weak singularity and there is no sustainable stress limit over which the crack initiates. Hence, to overcome the discrepancy due to the lack of a critical stress in Griffith's theory, double criterion have been developed to predict fracture initiation in notched specimen, more details are provided in Chapter 2. Conversely, global minimization principle has a finite admissible stress allowing cracks nucleation, thus cracks can jump from a state to another, passing through energy barriers. For physical reasons, one can blame global minimizers to not enforce continuity of the displacement and damage field with respect to time. Nevertheless, it provides a framework in order to derive the fracture model as a limit of the variational damage evolution presented in section 1.3. This is quite technical but global minimizers from the damage model converge in the sens of Γ-convergence to global minimizers of the fracture model. Finally, under the assumptions of a pre-existing fracture and strict convexity of the potential energy, global or local minimization are equivalent and follow Griffith. In order to obtain the extended model of Francfort-Marigo variational approach to fracture [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Marigo | Initiation of cracks in griffith's theory: An argument of continuity in favor of global minimization[END_REF][START_REF] Bourdin | Variational Models and Methods in Solid and Fluid Mechanics, chapter Fracture[END_REF] one has to keep the rate independent variational principle and the Griffith fracture energy, relax the constrain on the pre-supposed crack path by extending to all possible crack geometries Γ and consider the global minimization of the following total energy E(u, Γ) := Ω\Γ 1 2 Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c H n-1 (Γ) (1.18) associated to cracks evolution problem given by, Definition 3 (Crack evolution by global minimizers) u(t), Γ t satisfies the variational evolution associated to the energy E(u, Γ) if the following three conditions hold: i. t → Γ t is increasing in time, i.e Γ t ⊇ Γ s for all t 0 ≤ s ≤ t ≤ T . ii. for any configuration (v, Γ) such that v = g(t) on ∂ D Ω \ Γ t and Γ ⊇ Γ t , E v, Γ ≥ E u(t), Γ t (1.19) iii. 
for all t, E u(t), Γ t = E u(t 0 ), Γ t 0 + t t 0 ∂ D Ω (σ • ν) • u(t) dH n-1 - ∂ N Ω ġ(t) • u dH n-1 ds (1.20) Gradient damage models It is convenient, to define the weak energy by extending the set of admissibility function to an appropriate space allowing discontinuous displacement field, but preserving "good" properties. SBD(Ω) = u ∈ SBV (Ω); Du = ∇u + (u + -u -) • ν dH n-1 J(u) (1.21) where, Du denotes the distributional derivative, J(u) is the jump set of u. Following De Giorgi in [START_REF] De Giorgi | Existence theorem for a minimum problem with free discontinuity set[END_REF], the minimization problem is reformulated in a weak energy form functional of SBV , such as, min u∈SBV (Ω) Ω 1 2 Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c H n-1 J(u) (1.22) For existence of solution in the discrete time evolution and time continuous refer to [START_REF] Francfort | Existence and convergence for quasi-static evolution in brittle fracture[END_REF][START_REF] Babadjian | Existence of strong solutions for quasi-static evolution in brittle fracture[END_REF]. The weak energy formulation will be recalled in section 1.3 for the Γ-convergence in one dimension. Gradient damage models to brittle fracture Because the crack path remains unknown a special method needs to be crafted. The approach is to consider damage as an approximation of the fracture with a finite thickness where material properties are modulated continuously. Hence, let the damage α being an internal variable which evolves between two extreme states, up to a rescaling α can be bounded between 0 and 1, where α = 0 is the sound state material and α = 1 refers to the broken part. Intermediate values of the damage can be seen as "micro cracking", a partial disaggregation of the Young's modulus. A possible choice is to let the damage variable α making an isotropic deterioration of the Hooke's tensor, i.e. a(α)A where a(α) is a stiffness function. Naturally the recoverable energy density becomes, ψ(α, e) = 1 2 a(α)Ae(u) : e(u), with the elementary property that ψ(α, e) is monotonically decreasing in α for any fixed u. The difficulty lies in the choice of a correct energy dissipation functional. At this stage of the presentation a choice would be to continue by following Marigo-Pham [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Pham | From the onset of damage until the rupture: construction of the responses with damage localization for a general class of gradient damage models[END_REF][START_REF] Marigo | An overview of the modelling of fracture by gradient damage models[END_REF][START_REF] Pham | Stability of homogeneous states with gradient damage models: Size effects and shape effects in the three-dimensional setting[END_REF] for a full and self consistent construction of the model. Their main steps are, assume a dissipation potential k(α), apply the Drucker-Ilushin postulate, then, introduce a gradient damage term to get a potential dissipation of the form k(α, ∇α). Instead, we will continue by following the historical ideas which arose from the image processing field with Mumford-Shah [START_REF] Mumford | Optimal approximation by piecewise smooth functions and associated variational problem[END_REF] where continuous functional was proposed to find the contour of the image in the picture by taking into account strong variations of pixels intensity across boundaries. 
Later, Ambrosio-Tortorelli [START_REF] Ambrosio | Existence theory for a new class of variational problems[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] proposed the following functional which constitute main ingredients of the regularized damage models, Ω 1 2 (1 -α) 2 |∇u| 2 dx + Ω α 2 + 2 |∇α| 2 dx where > 0 is a regularized parameter called internal length. One can recognize the second term as the dissipation potential composed of two parts, a local term depending only on the damage state and a gradient damage term which penalizes sharp localization of the damage. The regularized parameter came up with the presence of the gradient damage term which has a dimension of the length. Following [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF][START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF], we define the regularized total energy of the gradient damage model for a variety of local dissipations and stiffness functions denoted w(α) and a(α), not only w(α) = α 2 and a(α ) = (1 -α) 2 by E (u, α) = Ω 1 2 a(α)Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c 4c w Ω w(α) + |∇α| 2 dx (1.23) where G c is the critical energy release rate, c w = 1 0 w(α)dα with w(α) and a(α) following some elementary properties. 1. The local dissipation potential w(α) is strictly monotonically increasing in α. For a sound material no dissipation occurs hence w(0) = 0, for a broken material the dissipation must be finite, and up to a rescaling we have w(1) = 1. 2. The elastic energy is monotonically decreasing in α for any fixed u. An undamaged material should conserve its elasticity property and no elastic energy can be stored in a fully damaged material such that, the stiffness function a(α) is a decreasing function with a(0) = 1 and a(1) = 0. 3. For numerical optimization reasons one can assume that a(α) and w(α) are continuous and convex. A large variety of models with different material responses can be constructed just by choosing different functions for a(α) and w(α). A non exhaustive list of functions used in the literature is provided in Table 1.1. Despite many models used, we will mainly focus on AT 1 and sometimes refers to AT 2 for numerical simulations. Now let us focus on the damage evolution of E (u, α) defined in (1.23). First, remark that to get a finite energy, the gradient damage is in L 2 (Ω) space. Consequently, the trace can be defined at the boundary, so, damage values can be prescribed. Accordingly let the set of admissible displacements and admissible damage fields C t and D, equipped with their natural H 1 norm, C t = u ∈ H 1 (Ω) : u = ū(t) on ∂ D Ω , D = α ∈ H 1 (Ω) : 0 ≤ α ≤ 1, ∀x ∈ Ω . The evolution problem is formally similar to one defined in Definition 2 and reads as, 1.1. Gradient damage models Name a(α) w(α) AT 2 (1 -α) 2 α 2 AT 1 (1 -α) 2 α LS k 1 -w(α) 1 + (c 1 -1)w(α) 1 -(1 -α) 2 KKL 4(1 -α) 3 -3(1 -α) 4 α 2 (1 -α) 2 /4 Bor c 1 (1 -α) 3 -(1 -α) 2 + 3(1 -α) 2 -2(1 -α) 3 α 2 SKBN (1 -c 1 ) 1 -exp (-c 2 (1 -α) c 3 ) 1 -exp (-c 2 ) α Table 1.1: Variety of possible damage models, where c 1 , c 2 , c 3 are constants. 
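As a quick check of the AT1 and AT2 entries just tabulated, the following sketch encodes their stiffness and dissipation functions and evaluates the normalization constant c_w = ∫₀¹ √(w(s)) ds by a trapezoidal rule; the expected values are 2/3 for AT1 and 1/2 for AT2 (the code is purely illustrative).

```python
import numpy as np

# AT1 and AT2 entries of Table 1.1 and their normalization constants
# c_w = int_0^1 sqrt(w(s)) ds (expected: 2/3 for AT1, 1/2 for AT2).
models = {
    "AT1": {"a": lambda s: (1 - s) ** 2, "w": lambda s: s},
    "AT2": {"a": lambda s: (1 - s) ** 2, "w": lambda s: s ** 2},
}

def cw(w, n=20001):
    s = np.linspace(0.0, 1.0, n)
    f = np.sqrt(w(s))
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))   # trapezoidal rule

for name, m in models.items():
    print(f"{name}: c_w = {cw(m['w']):.4f}")
```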
AT 2 introduced by Ambrosio Tortorelli and used by Bourdin [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF], AT 1 model initially introduced by Pham-Amor [START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF], LS k in Alessi-Marigo [START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF], KKL for Karma-Kessler-Levine used in dynamics [START_REF] Karma | Phase-field model of mode III dynamic fracture[END_REF], Bor for Borden in [START_REF] Borden | A phase-field description of dynamic brittle fracture[END_REF], SKBN for Sargadoa-Keilegavlena-Berrea-Nordbottena in [START_REF] Sargado | High-accuracy phase-field models for brittle fracture based on a new family of degradation functions[END_REF]. Definition 4 (Damage evolution by minimality principle) For all t find (u, α) ∈ (C t , D) that satisfies the damage variational evolution: i. Initial condition α t 0 = α 0 and u t 0 = u 0 ii. (u, α) is a minimizer of the total energy, E (u, α) E (u, α) = Ω 1 2 a(α)Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c 4c w Ω w(α) + |∇α| 2 dx (1.24) amongst all α ≥ α(t) iii. Energy balance, E (u t , α t ) = E (u 0 , α 0 ) + t t 0 ∂ D Ω (σ • ν) • u dH n-1 - ∂ N Ω ġ(t) • u dH n-1 ds. (1.25) This damage evolution is written in a weak form in order to obtain the damage criterion in a strong formulation, we have to explicit the first order necessary optimality conditions of the constraint minimization of E for (u, α) given by, E (u, α)(v, β) ≥ 0 ∀(v, β) ∈ H 1 0 (Ω) × D (1.26) Chapter 1. Variational phase-field models of brittle fracture Using calculus of variation argument, one gets, E (u, α)(v, β) = Ω a(α)Ae(u) : e(v) dx - ∂ N Ω (g(t) • ν) • v dH n-1 + Ω 1 2 a (α)Ae(u) : e(u)β dx + G c 4c w Ω w (α) β + 2 ∇α • ∇β dx. (1.27) Integrating by parts the first term in e(v) and the last term in ∇α • ∇β, the expression leads to, E (u, α)(v, β) = - Ω div a(α)Ae(u) • v dx + ∂ N Ω [a(α)Ae(u) -g(t)] • ν • v dH n-1 + Ω 1 2 a (α)Ae(u) : e(u) + G c 4c w w (α) -2 ∆α β dx + G c 4c w ∂Ω 2 ∇α • ν β dH n-1 . (1.28) This holds for all β ≥ 0 and for all v ∈ H 1 0 (Ω), thus, one can take β = 0 and v = -v. Necessary, the first two integrals are equal to zero. Again, we recover the kinematic equilibrium with the provided boundary condition since σ = a(α)Ae(u),      div a(α)Ae(u) = 0 in Ω a(α)Ae(u) = g(t) on ∂ N Ω u = ū(t) on ∂ D Ω (1.29) The damage criteria and its associated boundary conditions arise for any β ≥ 0 and by taking v = 0 in (1.28), we obtain that the third and fourth integrals are non negative.      1 2 a (α)Ae(u) : e(u) + G c 4c w w (α) -2 ∆α ≥ 0 in Ω ∇α • ν ≥ 0 on ∂Ω (1.30) The damage satisfies criticality when (1.30) becomes an equality. Before continuing with the energy balance expression, let us focus a moment on the damage criterion. Notice that it is composed of an homogeneous part depending in w (α) and a localized contribution in ∆α. Assume the structure being at an homogeneous damage state, such that α is constant everywhere, hence the laplacian damage term vanishes. In that case, the elastic domain in a strain space is given by, 1.1. 
Gradient damage models Ae(u) : e(u) ≤ G c 2c w w (α) -a (α) (1.31) and in stress space, by, .32) this last expression requires to be bounded such that the structure has a maximum admissible stress, A -1 σ : σ ≤ G c 2c w w (α)a(α) 2 -a (α) (1 max α w (α) c (α) < C (1.33) where c(α) = 1/a(α) is the compliance function. If α → w (α)/c (α) is increasing the material response will be strain-hardening. For a decreasing function it is a stress-softening behavior. This leads to, w (α)a (α) > w (α)a (α) (Strain-hardening) w (α)c (α) < w (α)c (α) (Stress-softening) (1.34) Those conditions restrict proper choice for w(α) and a(α). Let us turn our attention back to find the strong formulation of the problem using the energy balance. Assuming a smooth evolution of damage in time and space, the time derivative of the energy is given by, dE (u, α) dt = E (u, α)( u, α) - ∂ N Ω ( ġ(t) • ν) dH n-1 (1.35) The first term has already been calculated by replacing (v, β) with ( u, α) in (1.27), so that, dE (u, α) dt = - Ω div a(α)Ae(u) • u dx + ∂ N Ω [a(α)Ae(u) -g(t)] • ν • u dH n-1 + ∂ D Ω a(α)Ae(u) • ν • u dH n-1 - ∂ N Ω ġ(t) • ν • u dH n-1 + Ω 1 2 a (α)Ae(u) : e(u) + G c 4c w w (α) -2 ∆α α dx + G c 4c w ∂Ω 2 ∇α • ν α dH n-1 (1.36) Chapter 1. Variational phase-field models of brittle fracture The first line vanishes with the equilibrium and boundary conditions, the second line is equal to the right hand side of the energy balance definition (1.25). Since the irreversibility α ≥ 0 and the damage criterion (1.30) hold, the integral is non negative, therefore the energy balance condition gives,      1 2 a (α)Ae(u) : e(u) + G c 4c w w (α) -2 ∆α • α = 0 in Ω (∇α • ν) • α = 0 on ∂Ω (1.37) Notice that the first condition in (1.37) is similar to the energy balance of Griffith, in the sense that the damage criterion is satisfied when damage evolves. Finally, the evolution problem is given by the damage criterion (1.30), the energy balance (1.37) and the kinematic admissibility (1.29). The next section is devoted to the construction of the optimal damage profile by applying the damage criterion to a one-dimensional traction bar problem for a given . Then, defined the critical energy release rate as the energy required to break a bar and to create an optimal damage profile. Application to a bar in traction The one-dimension problem The aim of this section is to apply the gradient damage model to a one-dimensional bar in traction. Relevant results are obtained with this example such as, the role of critical admissible stress, the process of damage nucleation due to stress-softening, the creation of an optimal damage profile for a given and the role of gradient damage terms which ban spacial jumps of the damage. In the sequel, we follow Pham-Marigo [START_REF] Pham | Construction et analyse de modèles d'endommagement à gradient[END_REF][START_REF] Pham | From the onset of damage until the rupture: construction of the responses with damage localization for a general class of gradient damage models[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF]] by considering a one-dimensional evolution problem of a homogeneous bar of length 2L stretched by a time controlled displacement at boundaries and no damage value is prescribed at the extremities, such that, the admissible displacement and damage sets are respectively, C t := {u : u(-L) = -tL, u(L) = tL}, D := {α : 0 ≤ α ≤ 1 in [0, L]} (1.38) with the initial condition u 0 (x) = 0 and α 0 (x) = 0. 
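Before solving this bar problem, the admissible-stress bound (1.32)-(1.33) and the softening condition (1.34) can be evaluated explicitly for the AT1 and AT2 models of Table 1.1. In one dimension the bound reads σ² ≤ (E G_c)/(2 c_w ℓ) · w'(α)/c'(α), so the maximal admissible stress is obtained by maximizing w'/c' over α. The sketch below does this numerically, with unit E, G_c and ℓ chosen only for illustration.

```python
import numpy as np

E, Gc, ell = 1.0, 1.0, 1.0            # unit material data, for illustration only
al = np.linspace(1e-4, 1.0 - 1e-4, 100000)

def d1(f, x, h=1e-6):                  # centered first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-5):                  # centered second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

for name, (a, w, cw) in {
    "AT1": (lambda s: (1 - s) ** 2, lambda s: s,      2.0 / 3.0),
    "AT2": (lambda s: (1 - s) ** 2, lambda s: s ** 2, 1.0 / 2.0),
}.items():
    c = lambda s: 1.0 / a(s)           # compliance function c(alpha) = 1/a(alpha)
    ratio = d1(w, al) / d1(c, al)      # w'(alpha)/c'(alpha)
    sigma_c = np.sqrt(E * Gc / (2 * cw * ell) * np.max(ratio))    # bound (1.32)-(1.33)
    softening = d2(w, al) * d1(c, al) < d1(w, al) * d2(c, al)     # condition (1.34)
    first = al[np.argmax(softening)]
    print(f"{name}: sigma_c = {sigma_c:.4f} * sqrt(E Gc / ell), "
          f"stress-softening for alpha > {first:.3f}")
```

For AT1 this recovers a maximal stress of about 0.612 √(E G_c/ℓ), attained at α = 0 (purely elastic phase followed by softening), while for AT2 the maximum, about 0.325 √(E G_c/ℓ), is reached at α = 1/4 after a hardening phase.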
Since no external force is applied, the total energy of the bar is given by, E (u, α) = L -L 1 2 a(α)Eu 2 dx + G c 4c w L -L w(α) + |α | 2 dx (1.39) where E is the Young's modulus, > 0 and (•) = ∂(•)/∂x. For convenience, let the compliance being the inverse of the stiffness such that c(α) = a -1 (α). Assume that α is at least continuously differentiable, but a special treatment would be required for α = 1 1.2. Application to a bar in traction which is out of the scope in this example. The pair (u t , α t ) ∈ C t × D is a solution of the evolution problem if the following conditions holds: 1. The equilibrium, σ t (x) = 0, σ t (x) = a(α t (x))Eu t (x), u t (-L) = -tL and u t (L) = tL The stress is constant along the bar. Hence it is only a function of time, such that, 2tLE = σ t L -L c α t (x) dx (1.40) Once the damage field is known. The equation (1.40) gives the stress-displacement response. 2. The irreversibility, αt (x) ≥ 0 (1.41) 3. The damage criterion in the bulk, - c (α t (x)) 2E σ 2 t + G c 4c w w (α t (x)) -2 α t (x) ≥ 0 (1.42) 4. The energy balance in the bulk, - c (α t (x)) 2E σ 2 t + G c 4c w w (α t (x)) -2 α t (x) αt (x) = 0 (1.43) 5. The damage criterion at the boundary, α t (-L) ≥ 0 and α t (L) ≤ 0 (1.44) 6. The energy balance at the boundary, α t (±L) αt (±L) = 0 (1.45) For smooth or brutal damage evolutions the first order stability enforce α t (±L) = 0 to respect E (u, α) = 0. Thus the damage boundary condition is replaced by α t (±L) = 0 when damage evolves. All equations are settled to solve the evolution problem. Subsequently, we study a uniform damage in the bar and then focus on the localized damage solution. The homogeneous damage profile Consider a case of a uniform damage in the bar α t (x) = α t , which is called the homogeneous solution. We will see that the damage response depends on the evolution of α → w (α)/c (α), i.e. for stress-hardening (increasing function) the damage evolves uniformly in the bar, and localizes for stress-softening configuration. Now suppose that the damage does not evolve and remains equal to its initial value, α t = α 0 = 0. Then using the damage criterion in the bulk (1.42) the admissible stress must satisfy, σ 2 t ≤ 2EG c 4c w w (0) c (0) (1.46) and the response remains elastic until the loading time t e , such that, t 2 ≤ - G c 2E c w w (0) a (0) = t 2 e (1.47) Suppose the damage evolves uniformly belongs to the bar, using the energy balance (1.43) and the damage criterion (1.42) we have, σ 2 t ≤ 2EG c 4c w w (α t ) c (α t ) , σ 2 t - 2EG c 4c w w (α t ) c (α t ) αt = 0 (1.48) The homogeneous damage evolution is possible only if α t → w (α)/c (α) is growing, this is the stress-hardening condition. Since αt > 0, the evolution of the stress is given by, σ 2 t = 2EG c 4c w w (α t ) c (α t ) ≤ max 0<α<1 2EG c 4c w w (α t ) c (α t ) = σ 2 c (1.49) where σ c is the maximum admissible stress for the homogeneous solution. One can define the maximum damage state α c obtained when σ t = σ c . This stage is stable until the loading time t c , t 2 ≤ - G c 2E c w w (α c ) a (α c ) = t 2 c (1.50) Since w (α)/c (α) is bounded and > 0, a fundamental property of gradient damage model is there exists a maximum value of the stress called critical stress, which allows crack to nucleate using the minimality principle. The localized damage profile The homogeneous solution is no longer stable if the damage α t → w (α)/c (α) is decreasing after α c . 
To prove that, consider any damage state such that, α t (x) > α c and the stress-softening property, leading to, 1.2. Application to a bar in traction 0 ≤ 2EG c 4c w w (α t (x)) c (α t (x)) ≤ 2EG c 4c w w (α c ) c (α c ) = σ 2 c (1.51) By integrating the damage criterion (1.42) over (-L, L) and using (1.44), we have, σ 2 t 2E L -L c (α t (x)) dx ≤ G c 4c w L -L w (α t (x)) dx + 2 α t (L) -α t (-L) ≤ G c 4c w L -L w (α t (x)) dx (1.52) then, put (1.51) into (1.52) to conclude that σ t ≤ σ c and use the equilibrium (1.40) to obtain σ t ≥ 0. Therefore using (1.52) we get that α t (x) ≥ 0, consequently the damage is no longer uniform when stress decreases 0 ≤ σ t ≤ σ c . Assume α t (x) is monotonic over (-L, x 0 ) with α t (-L) = α c and the damage is maximum at x 0 , such that, α t (x 0 ) = max x α t (x) > α c . Multiplying the equation (1.42) by α t (x), an integrating over [-L, x) for x < x 0 we get, α 2 t (x) = - 2c w σ 2 t EG c c(α t (x)) -c(α c ) + w(α t (x)) -w(α c ) (1.53) Plugging this above equation into the total energy restricted to the (-L, x 0 ) part, E (u t (x), α t (x)) (-L,x 0 ) = x 0 -L σ 2 t 2a(α t (x))E dx + G c 4c w x 0 -L w(α t (x)) + α 2 t (x) dx = x 0 -L σ 2 t 2a(α c )E dx + G c 4c w x 0 -L 2w(α) -w(α c ) dx (1.54) Note that the energy does not depend on α anymore, we just have two terms: the elastic energy and the surface energy which depends on state variation of w(α). The structure is broken when the damage is fully localized α(x 0 ) = 1. From the equilibrium (1.40), the ratio stress over stiffness function is bounded such that |σ t c(α)| < C, thus, |σ 2 t c(1)| → 0 and (1.53) becomes, α 2 t (x) = w(α t (x)) -w(α c ) , ∀x ∈ (-L, x 0 ) Remark that, the derivative of the damage and u across the point x 0 where α = 1 is finite. By letting the variable β = α t (x), the total energy of the partial bar (-L, x 0 ) is Chapter 1. Variational phase-field models of brittle fracture E (u t (x), α t (x)) (-L,x 0 ) = lim x→x 0 G c 4c w x -L 2w(α t (x)) -w(α c ) dx = lim β→1 G c 4c w β αc 2w(α) -w(α c ) β dβ = lim β→1 G c 4c w β αc 2w(β) -w(α c ) w(β) -w(α c ) dβ = lim β→1 G c 4c w β αc 2 w(β) -w(α c ) + w(α c ) w(β) -w(α c ) dβ = G c 2c w k(α c ) (1.55) with, k(α c ) := 1 αc w(β) -w(α c ) dβ + w(α c ) D 4 , where D is the damage profile size between the homogeneous and fully localized state, given by, D = L -L dx α (x) = 1 αc 2 w(β) -w(α c ) dβ. (1.56) Note that the right side of the bar (x 0 , L) contribute to the exact same total energy than the left one (-L, x 0 ). Different damage response is observed depending on the choice of w(α) and a(α). The model AT 1 for instance has an elastic part, thus α c = 0 and the energy release during the breaking process of a 1d bar is equal to G c . Models with an homogeneous response before localization, AT 2 for example, overshoot G c due to the homogeneous damage profile. A way to overcome this issue, is to consider that partial damage do not contribute to the dissipation energy, it can be relaxed after localization by removing the irreversibility. Another way is to reevaluate c w such as, c w = k(α c ). Limit of the damage energy From inception to completion gradient damage models follows the variational structure of Francfort-Marigo's [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF] approach seen as an extension of Griffith, but connections between both need to be highlighted. Passing from damage to fracture, i.e. 
letting → 0 requires ingredients adapted from Ambrosio Tortorelli [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] on convergence of global minimizers of the total energy. A framework to study connections between damage and fracture variational models is that of Γ-convergence which we briefly introduce below. We refer the reader to [START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF][START_REF] Braides | Gamma-convergence for Beginners[END_REF][START_REF] Dal | An introduction to Γ-convergence[END_REF] for a complete exposition of the underlying theory. Limit of the damage energy In the sequel, we restrict the study to a 1d case structure of interval Ω ⊂ R whose size is large compare to the internal length and with a unit Young's modulus. We prescribe a boundary displacement ū on a part ∂ D Ω and stress free on the remaining part ∂ N Ω := ∂Ω \ ∂ D Ω. We set aside the issue of damage boundary conditions for now and we define the weak fracture energy, E(u, α, Ω) = F(u, Ω) if u ∈ SBV (Ω) +∞ otherwise (1.57) and F(u, Ω) := 1 2 Ω (u ) 2 dx + G c #(J(u)) (1.58) where #(J(u)) denotes the cardinality of jumps in the set of u. Derived from E(u, α, Ω) its associated regularized fracture energy is, E (u, α, Ω) = F (u, α, Ω) if u ∈ W 1,2 (Ω), α ∈ W 1,2 (Ω; [0, 1]) +∞ otherwise (1.59) and F (u, α, Ω) := 1 2 Ω a(α)(u ) 2 dx + G c 4c w Ω w(α) + α 2 dx (1.60) To prove that up to a subsequence minimizers for E converge to global minimizers of E we need the fundamental theorem of the Γ-convergence given in the Appendix A. We first show the compactness of the sequence of minimizers of E , then the Γconvergence of E to E. Before we begin, let the truncation and optimal damage profile lemma be, Lemma 1 Let u (resp. (u, α)) be a kinematically admissible global minimizer of F (resp. F ). Then u L ∞ (Ω) ≤ ū L ∞ (Ω) Proof. Let M = ū L ∞ , and u * = inf {sup{-M, u}, M }. Then F(u * ) ≤ F(u) with equality if u = u * . Lemma 2 Let α be the optimal profile of S (α ) := I w(α ) + (α ) 2 dx where I ⊂ R, then S (α ) = 4c w . Proof. In order to construct α we solve the optimal profile problem: Let γ be the solution of the following problem: find γ ∈ C 1 [-δ, x 0 ) such that γ(-δ) = 0 and lim x→x 0 γ(x) = ϑ, and which is a minimum for the function, F (γ) = x 0 -δ f (γ(x), γ (x), x)dx (1.61) where f (γ(x), γ (x), x) := w(γ(x)) + γ 2 (x) (1.62) Note that the first derivative of f is continuous. We will apply the first necessary optimality condition to solve the optimization problem described above, if γ is an extremum of F , then it satisfies the Euler-Lagrange equation, 2γ = w (γ) 2 and γ (-δ) = 0 (1.63) Note that w (γ) ≥ 0 implies γ convex, thus γ is monotonic in [-δ, x 0 ). 
Multiplying by γ and integrating form -δ to x, we obtain, γ 2 (x) -γ 2 (-δ) = w(γ(x)) -w(γ(-δ)) 2 (1.64) Since γ (-δ) = 0 and w(γ(-δ)) = 0, one gets, γ (x) = w(γ(x)) 2 (1.65) Let us define, α (x) = γ (|x -x 0 |) then, α (x) := γ (|x -x 0 |) if |x -x 0 | ≤ δ 0 otherwise (1.66) Note that α is continuous at x 0 and values ϑ, we have that, S (α ) = I w(α ) + (α ) 2 dx = 2 x 0 -δ w(γ ) + (γ ) 2 dx (1.67) Plug (1.65) into the last integral term, and change the variables β = γ (x), it turns into S (α ) = 2 x 0 -δ w(γ ) + (γ ) 2 dx = 2 γ(x 0 ) γ(-δ) w(β) β dβ = 4 ϑ 0 w(β) dβ (1.68) The fully damage profile is obtained once ϑ → 1, we get, This will be usefull for the recovery sequence in higer dimensions. S (α ) = lim Compactness Theorem 1 Let (x) := x 0 w(s)ds, and assume that there exists C > 0 such that 1 -(s) ≤ C a(s) for any 0 ≤ s ≤ 1. Let (u , α ) be a kinematic admissible global minimizer of E . Then, there exists a subsequence (still denoted by (u , α ) ), and a function u ∈ SBV (Ω) such that u → u in L 2 (Ω) and α → 0 a.e. in Ω as → 0 Proof. Note that the technical hypothesis is probably not optimal but sufficient to account for the AT 1 and AT 2 functionals. Testing α = 0 and an arbitrary kinematically admissible displacement field ũ, we get that, E (u , α ) ≤ E (ũ, 0) ≤ 1 2 Ω |ũ | 2 dx ≤ C (1.69) So that E (u , α ) is uniformly bounded by some C > 0. Also, this implies that w(α ) → 0 almost everywhere in Ω, and from properties of w, that α → 0 almost everywhere in Ω.Using the inequality a 2 + b 2 ≥ 2|ab| on the surface energy part, we have that, Ω 2 w(α )|α | dx ≤ Ω w(α ) + (α ) 2 dx ≤ C (1.70) In order to obtain the compactness of the sequence u , let v := (1 -(α )) u and using the truncation Lemma 1, v is uniformly bounded in L ∞ (Ω). Then, v = (1 -(α ))u -(α )α u ≤ (1 -(α ))|u | + w(α )|α ||u | ≤ a(α )|u | + w(α )|α ||u | (1.71) From the uniform bound on E (u , α ), we get that the first term is bounded in L 2 (Ω), while (1.70) and the truncation Lemma 1 show that the second term is bounded in L 1 (Ω) thus in L 2 (Ω). Finally, i. v is uniformly bounded in L ∞ (Ω) ii. v is uniformly bounded in L 2 (Ω) Chapter 1. Variational phase-field models of brittle fracture iii. J(v ) = ∅ invoking the Ambrosio's compactness theorem in SBV (in the Appendix A), we get that there exists v ∈ SBV (Ω) such that v → v strongly in L 2 (Ω). To conclude, since u = v (1-(α )) and α → 0 almost everywhere, we have, u → u in L 2 (Ω) Remark the proof above applies unchanged to the higher dimension case. Gamma-convergence in 1d The second part of the fundamental theorem of Γ-convergence requires that E Γ-converges to E. The definition of the Γ-convergence is in the Appendix A. The first condition means that E provides an asymptotic common lower bound for the E . The second condition means that this lower bound is optimal. The Γ-convergence is performed in 1d setting and is decomposed in two steps as follow: first prove the lower inequality, then construct the recovery sequence. Lower semi-continuity inequality in 1d We want to show that for any u ∈ SBV (Ω), and any (u , α ) such that u → u and α → 0 almost everywhere in Ω, we have, lim inf →0 E (u , α , Ω) ≥ 1 2 Ω (u ) 2 dx + G c #(J(u)) (1.72) Proof. 
Consider any interval I ⊂ Ω ⊂ R, such that, lim inf →0 E (u , α , I) ≥ 1 2 I (u ) 2 dx if u ∈ W 1,2 (I) (1.73) and, lim inf →0 E (u , α , I) ≥ G c otherwise (1.74) If lim inf →0 E (u , α , I) = ∞, both statements are trivial, so we can assume that there exist 0 ≤ C < ∞ such that, lim inf →0 E (u , α , I) ≤ C (1.75) We focus on (1.73) first, and assume that u ∈ W 1,2 (I). From (1.75) we deduce that w(α ) → 0 almost everywhere in I. Consequently, α → 0 almost everywhere in I. By Egoroff's theorem, for any > 0 there exists I ⊂ I such that |I | < and such that α → 0 uniformly on I \ I . For any δ > 0, thus we have, 1.3. Limit of the damage energy 1 -δ ≤ a(α ) on I \ I , for all and small enough, so that, I\I (1 -δ) (u ) 2 dx ≤ I\I a(α ) (u ) 2 dx ≤ I a(α ) (u ) 2 dx (1.76) Since u → u in W 1,2 (I) , and taking the lim inf on both sides, one gets, (1 -δ) 2 I\I (u ) 2 dx ≤ lim inf →0 1 2 I a(α ) (u ) 2 dx (1.77) we obtain the desired inequality (1.73) by letting → 0 and δ → 0. To prove the second assertion (1.74), we first show that lim →0 sup x∈I α = 1, proceeding by contradiction. Suppose there exists δ > 0 such that α < 1δ on I. Then, I a(1 -δ) (u ) 2 dx ≤ I a(α ) (u ) 2 dx Taking the lim inf on both sides and using (1.75) , we get that, lim inf →0 I (u ) 2 dx ≤ C a(1 -δ) So u is uniformly bounded in W 1,2 (I), and therefore u ∈ W 1,2 (I), which contradicts our hypothesis. Reasoning as before, we have that α → 0 almost everywhere in I. Proceeding the same way on the interval (b , c ), one gets that, lim inf →0 G c 4c w I w(α ) + (α ) 2 dx ≥ G c which is (1.74). In order to obtain (1.72), we apply (1.74) on arbitrary small intervals centered around each points in the jump set of u and (1.73) on each remaining intervals in I. Recovery sequence for the Γ-limit in 1d The construction of the recovery sequence is more instructive. Given (u, α) we need to buid a sequence (u , α ) such that lim sup F (u , α ) ≤ F(u, α). Proof. If F (u, α) = ∞, we can simply take u = u and α = α, so that we can safely assume that F(u, α) < ∞. As in the lower inequality, we consider the area near discontinuity points of u and away from them separately. Let (u, α) be given, consider an open interval I ⊂ R and a point x 0 ∈ J(u) ∩ I. Without loss of generality, we can assume that x 0 = 0 and I = (-δ, δ) for some δ > 0 . The construction of the recovery sequence is composed of two parts, first the recovery sequence for the damage, then one for the displacement. The optimal damage profile obtained in the Lemma 2, directly gives, lim sup →0 G c 4c w δ -δ w(α ) + (α ) 2 dx ≤ G c , (1.81) this is the recovery sequence for the damage. Now, let's focus on the recovery sequence for the bulk term. We define b and u (x) :=    x b u(x) if -b ≤ x ≤ b u(x) otherwise (1.82) Since a(α ) ≤ 1, we get that, -b -δ a(α ) (u ) 2 dx ≤ -b -δ (u ) 2 dx (1. a(α ) (u ) 2 dx ≤ b -b (u ) 2 dx ≤ b -b u b + xu b 2 dx ≤ 2 b -b u b 2 dx + 2 b -b xu b 2 dx ≤ 2 b 2 b -b |u| 2 dx + 2 b -b (u ) 2 dx (1.85) Since |u| ≤ M , the first term vanish when b → 0. Combining (1.83),(1.85) and (1.84). Then, taking the lim sup on both sides and using I |u | 2 dx < ∞, we get that, lim sup →0 1 2 δ -δ a(α ) (u ) 2 dx ≤ 1 2 δ -δ (u ) 2 dx (1.86) Finally combining (1.81) and (1.86), one obtains lim sup →0 δ -δ 1 2 a(α ) (u ) 2 + G c 4c w δ -δ w(α ) + (α ) 2 dx ≤ 1 2 δ -δ (u ) 2 dx + G c (1.87) For the final construction of the recovery sequence, notice that we are free to assume that #(J(u)) is finite and chose δ ≤ inf{|x ix j |/2 s.t. x i , x j ∈ J(u), x i = x j }. 
For each x i ∈ J(u), we define I i = (x iδ, x i + δ) and use the construction above on each I i whereas on I \ I i we chose u = u and α linear and continuous at the end points of the I i . With this construction, is easy to see that α → 1 uniformly in I \ I i and that, lim sup →0 I\ I i 1 2 a(α )(u ) 2 dx ≤ I (u ) 2 dx, (1.88) and, lim sup →0 I\ I i w(α ) + (α ) 2 dx = 0 (1.89) Altogether, we obtain the upper estimate for the Γ-limit for pairs (u, 1) of finite energy, i.e. lim sup →0 F (u , α ) ≤ F (u , 1) (1.90) Extension to higher dimensions To extend the Γ-limit to higher dimensions the lower inequality part is technical and is not developed here. But, the idea is to use Fubini's theorem, to build higher dimension by taking 1d slices of the domain, and use the lower continuity on each section see [START_REF] Ambrosio | Existence theory for a new class of variational problems[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF]. The recovery sequence is more intuitive, a possible construction is to consider a smooth Γ ⊂ Ω and compute the distance to the crack J(u), such that, d(x) = dist(x, J(u)) (1.91) and let the volume of the region bounded by p-level set of d, such that, s(y) = |{x ∈ R n ; d(x) ≤ y}| (1.92) Figure 1.2: Iso distance to the crack J(u) for the level set b and δ Following [START_REF] Evans | Measure theory and fine properties of functions[END_REF][START_REF] Evans | On the partial regularity of energy-minimizing, areapreserving maps[END_REF], the co-area formula from Federer [START_REF] Federer | Geometric measure theory[END_REF] is, Ω f (x) ∇g(x) dx = +∞ -∞ g -1 (y) f (x)dH n-1 (x) dy (1.93) In particular, taking g(x) = d(x) which is 1-Lipschitz, i.e. ∇d(x) = 1 almost everywhere. We get surface s(y), s(y) = s(y) ∇d(x) dx = y 0 H n-1 ({x; d(x) = t})dt (1.94) and s (y) = H n-1 ({x; d(x) = y}) (1.95) In particular, s (0) = lim y→0 s(y) y = 2H n-1 (J(u)) (1.96) Limit of the damage energy Consider the damage, α (d(x)) :=      1 if d(x) ≤ b γ (d(x)) if b ≤ d(x) ≤ δ 0 otherwise (1.97) The surface energy term is, Ω w(α ) + |∇α | 2 dx = 1 d(x)≤b dx + b ≤d(x)≤δ w(α (d(x))) + |∇α (d(x))| 2 dx (1.98) The first integral term, is the surface bounded by the iso-contour distant b from the crack, i.e s(b ) = d(x)≤b dx = b 0 H n-1 ({x; d(x) = y}) dy (1. ≤ δ/ 0 w(α (x )) + α (x ) 2 s (x ) dx (1.102) Passing the limit → 0 and using the Remark 1 on the optimal profile invariance, we get, lim sup →0 G c 4c w Ω w(α (x)) + |∇α (x)| 2 dx ≤ G c H n-1 (J(u)) (1.103) For the bulk term, consider the displacement, u (x) :=    d(x) b u(x) if d(x) ≤ b u(x) otherwise (1.104) Similarly to the 1d, one gets, lim sup →0 Ω 1 2 a(α )(∇u ) 2 dx ≤ Ω 1 2 (∇u ) 2 dx (1.105) Therefore, lim sup →0 Ω 1 2 a(α )(∇u ) 2 dx + G c 4c w Ω w(α + |∇α | 2 dx ≤ Ω 1 2 (∇u ) 2 dx + G c H n-1 (J(u)) (1.106) Numerical implementation In a view to numerically implement gradient damage models, it is common to consider time and space discretization. Let's first focus on the time-discrete evolution, by considering a time interval [0, T ] subdivided into (N + 1) steps, such that, 0 = t 0 < t 1 < • • • < t i-1 < t i < • • • < t N = T . At any step i, the sets of admissible displacement and damage fields C i and D i are, For any i find (u i , α i ) ∈ (C i , D i ) that satisfies the discrete evolution by local minimizer if the following hold: i. Initial condition α t 0 = α 0 and u t 0 = u 0 ii. 
For some h_i > 0, find (u_i, α_i) ∈ C_i × D_i, where
C_i := {u ∈ H¹(Ω) : u = ū_i on ∂_D Ω},  D_i := {β ∈ H¹(Ω) : α_{i-1}(x) ≤ β ≤ 1, ∀x ∈ Ω},  (1.107)
such that, for all (v, β) ∈ C_i × D_i with ‖(v, β) - (u_i, α_i)‖ ≤ h_i,
E_ℓ(u_i, α_i) ≤ E_ℓ(v, β)  (1.108)
where
E_ℓ(u, α) = ∫_Ω ½ a(α) Ae(u) : e(u) dx - ∫_{∂_N Ω} g(t) · u dH^{n-1} + (G_c / 4c_w) ∫_Ω ( w(α)/ℓ + ℓ |∇α|² ) dx  (1.109)
One observes that this time-discrete evolution does not enforce energy balance. Since a(α) and w(α) are convex, the total energy E_ℓ(u, α) is separately convex with respect to u and to α, but it is not jointly convex. Hence, we use an alternate minimization algorithm, which is guaranteed to converge to a critical point of the energy satisfying the irreversibility condition [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF][START_REF] Burke | An adaptive finite element approximation of a variational model of brittle fracture[END_REF]. The idea is, for each time step t_i, to minimize the energy with respect to any kinematically admissible u for a given α, then to fix u and minimize E_ℓ(u, α) with respect to α subject to the irreversibility constraint α_i ≥ α_{i-1}, and to repeat the procedure until the variation of the damage is small. This gives Algorithm 1 below, where δ_α is a fixed tolerance parameter.
Algorithm 1: Alternate minimization at time step t_i
1: Let j = 0 and α_0 := α_{i-1}
2: repeat
3: Compute the equilibrium, u_{j+1} := argmin_{u ∈ C_i} E_ℓ(u, α_j)
4: Compute the damage, α_{j+1} := argmin_{α ∈ D_i, α ≥ α_{i-1}} E_ℓ(u_{j+1}, α)
5: j := j + 1
6: until ‖α_j - α_{j-1}‖_{L∞} ≤ δ_α
7: Set u_i := u_j and α_i := α_j
For the space discretization of E_ℓ(u, α), we use the finite element method with linear Lagrange elements for u and α. The elastic problem is solved with preconditioned conjugate gradient solvers, and the constrained minimization with respect to the damage is implemented using the variational inequality solvers provided by PETSc [START_REF] Balay | PETSc Web page[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF]. All computations were performed using the open-source code mef90. Due to the non-convexity of E_ℓ, solutions satisfying irreversibility and stationarity may not be unique. Among such solutions a selection can be performed, for instance by retaining solutions that satisfy the energy balance, or displacement and damage fields that are continuous in time. Another option is to compare each result with all previous ones in order to discard spurious local minimizers (see [START_REF] Bourdin | The Variational Approach to Fracture[END_REF][START_REF] Bourdin | The variational formulation of brittle fracture: numerical implementation and extensions[END_REF] for more details on this backtracking idea); this procedure selects global minimizers from the set of computed solutions.
Conclusion
The strength of phase-field models of brittle fracture is the variational structure of the model, conceived as an approximation of Griffith's theory, and its evolution based on three principles: irreversibility of the damage, stability, and balance of the total energy. A fundamental property of the model is the existence of a maximum admissible stress, illustrated in the one-dimensional example; it also constrains the internal length ℓ, since ℓ governs both the damage thickness and this critical stress. Numerically, the fracture path is obtained by alternately minimizing the total energy with respect to the damage and solving the elastic problem.
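To make Algorithm 1 concrete, the sketch below implements it for the AT1 model on the one-dimensional traction bar of Section 1.2, using finite differences, the closed-form elastic solve (1.40) (the stress is constant along the bar), and a bound-constrained quasi-Newton step (L-BFGS-B) for the damage, which enforces irreversibility. The material data (E = G_c = 1, ℓ = 0.2) and the small initial defect are illustrative assumptions; the results reported in this dissertation are obtained with linear finite elements and the PETSc variational-inequality solvers through mef90, not with this toy script.

```python
import numpy as np
from scipy.optimize import minimize

# 1-D sketch of Algorithm 1 (AT1 model) for a bar [0, 1] stretched by u(0)=0, u(1)=t.
E, Gc, ell, n = 1.0, 1.0, 0.2, 101     # illustrative material data and mesh size
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
cw = 2.0 / 3.0                         # c_w for w(alpha) = alpha (AT1)

def compliance_mid(al):
    c = 1.0 / (1.0 - np.minimum(al, 0.999)) ** 2   # c(alpha) = 1/a(alpha), capped near 1
    return 0.5 * (c[1:] + c[:-1])

def stress(al, t):
    return E * t / np.sum(compliance_mid(al) * h)  # sigma = E t / int c(alpha) dx, cf. (1.40)

def total_energy(al, eps_mid):
    a_mid = 0.5 * ((1 - al[1:]) ** 2 + (1 - al[:-1]) ** 2)
    bulk = 0.5 * E * np.sum(a_mid * eps_mid ** 2) * h
    surf = Gc / (4 * cw) * (np.sum(0.5 * (al[1:] + al[:-1])) * h / ell
                            + ell * np.sum(np.diff(al) ** 2 / h))
    return bulk + surf

al = np.zeros(n)
al[n // 2] = 0.02                      # small initial defect to trigger localization
for t in np.linspace(0.0, 1.6, 17):
    al_lb = al.copy()                  # irreversibility: alpha >= alpha_{i-1}
    for _ in range(30):                # alternate minimization loop of Algorithm 1
        eps_mid = stress(al, t) * compliance_mid(al) / E   # strain from the elastic solve
        res = minimize(total_energy, al, args=(eps_mid,), method="L-BFGS-B",
                       bounds=[(lb, 0.999) for lb in al_lb])
        converged = np.max(np.abs(res.x - al)) < 1e-3      # delta_alpha tolerance
        al = res.x
        if converged:
            break
    print(f"t = {t:.2f}   sigma = {stress(al, t):.3f}   max(alpha) = {al.max():.3f}")
```

The printed response remains elastic (zero damage) until the stress approaches the AT1 critical value √(3 G_c E / 8ℓ), after which the damage grows and localizes around the defect.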
Appendix A
Theorem 2 (Ambrosio's compactness and lower semicontinuity on SBV) Let (f_n)_n be a sequence of functions in SBV(Ω) such that there exist non-negative constants C_1, C_2 and C_3 for which:
i. f_n is uniformly bounded in L^∞(Ω);
ii. ∇f_n is uniformly bounded in L^q(Ω, R^n) with q > 1;
iii. H^{n-1}(J(f_n)) is uniformly bounded.
Then there exist f ∈ SBV(Ω) and a subsequence f_{k(n)} such that:
i. f_{k(n)} → f strongly in L^p(Ω), for all p < ∞;
ii. ∇f_{k(n)} → ∇f weakly in L^q(Ω; R^n);
iii. H^{n-1}(J(f)) ≤ lim inf_n H^{n-1}(J(f_n)).
Theorem 3 (Fundamental theorem of Γ-convergence) If E_ℓ Γ-converges to E, u_ℓ is a minimizer of E_ℓ, and (u_ℓ) is compact in X, then there exists u ∈ X such that u is a minimizer of E, u_ℓ → u, and E_ℓ(u_ℓ) → E(u).
Definition 6 (Γ-convergence) Let E_ℓ : X → R and E : X → R, where X is a topological space. Then E_ℓ Γ-converges to E if the following two conditions hold for any u ∈ X:
i. Lower semi-continuity inequality: for every sequence (u_ℓ) ∈ X such that u_ℓ → u, E(u) ≤ lim inf_{ℓ→0} E_ℓ(u_ℓ);
ii. Existence of a recovery sequence: there exists a sequence (u_ℓ) ∈ X with u_ℓ → u such that lim sup_{ℓ→0} E_ℓ(u_ℓ) ≤ E(u).
Chapter 2
Crack nucleation in variational phase-field models of brittle fracture
Despite its many successes, Griffith's theory of brittle fracture [START_REF] Griffith | The phenomena of rupture and flow in solids[END_REF] and its heir, Linear Elastic Fracture Mechanics (LEFM), still face many challenges. In order to identify a crack path, additional branching criteria, whose choice is still unsettled, have to be considered. Accounting for scale effects in LEFM is also challenging, as illustrated by the following example: consider a reference structure of unit size rescaled by a factor L. The critical loading at the onset of fracture then scales as 1/√L, leading to an infinite nucleation load as the structure size approaches 0, which is inconsistent with experimental observations for small structures [START_REF] Bažant | Scaling of quasibrittle fracture: asymptotic analysis[END_REF][START_REF] Issa | Size effects in concrete fracture: Part I, experimental setup and observations[END_REF][START_REF] Chudnovsky | Slow crack growth, its modeling and crack-layer approach: A review[END_REF]. It is well accepted that this discrepancy is due to the lack of a critical stress (or of a critical length scale) in Griffith's theory. Yet, augmenting LEFM to account for a critical stress is very challenging. In essence, the idea of a material strength is incompatible with the concept of elastic energy release rate near a stress singularity, the pillar of Griffith-like theories, as it would imply crack nucleation under an infinitesimal loading. Furthermore, a nucleation criterion based solely on a pointwise maximum stress is unable to handle crack formation in a body subject to a uniform stress distribution. Many approaches have been proposed to provide models capable of addressing the aforementioned issues.
Some propose to stray from Griffith's fundamental hypotheses by incorporating cohesive fracture energies [START_REF] Ortiz | Finite-deformation irreversible cohesive elements for three-dimensional crack-propagation analysis[END_REF][START_REF] Del Piero | A diffuse cohesive energy approach to fracture and plasticity: the one-dimensional case[END_REF][START_REF] De Borst | Cohesive-zone models, higher-order continuum theories and reliability methods for computational failure analysis[END_REF][START_REF] Charlotte | Initiation of cracks with cohesive force models: a variational approach[END_REF] or material non-linearities [START_REF] Gou | Modeling fracture in the context of a strain-limiting theory of elasticity: A single plane-strain crack[END_REF]. Others have proposed dual criteria involving both the elastic energy release rate and the material strength, such as [START_REF] Leguillon | Strength or toughness? A criterion for crack onset at a notch[END_REF], for instance. Models based on the peridynamics theory [START_REF] Silling | Reformulation of elasticity theory for discontinuities and long-range forces[END_REF] may present an alternative way to handle these issues, but to our knowledge they still fall short of providing robust quantitative predictions at the structural scale. Francfort and Marigo [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF] set out to devise a formulation of brittle fracture based solely on Griffith's idea of a competition between elastic and fracture energy, yet capable of handling the issues of crack path and crack nucleation. However, as already pointed out in [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF], their model inherits a fundamental limitation of Griffith's theory and LEFM: the lack of an internal length scale and of maximum allowable stresses. Amongst the many numerical methods originally devised for the implementation of the Francfort-Marigo model [START_REF] Bourdin | Implementation of an adaptive finite-element approximation of the Mumford-Shah functional[END_REF][START_REF] Negri | Numerical minimization of the Mumford-Shah functional[END_REF][START_REF] Fraternali | Free discontinuity finite element models in two-dimensions for inplane crack problems[END_REF][START_REF] Schmidt | Eigenfracture: An eigendeformation approach to variational fracture[END_REF], Ambrosio-Tortorelli regularizations [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF], originally introduced in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF], have become ubiquitous. They are nowadays known as phase-field models of fracture, and share several common points with approaches coming from Ginzburg-Landau models for phase transitions [START_REF] Karma | Phase-field model of mode III dynamic fracture[END_REF].
They have been applied to a wide variety of fracture problems including fracture of ferro-magnetic and piezo-electric materials [START_REF] Abdollahi | Phase-field modeling of crack propagation in piezoelectric and ferroelectric materials with different electromechanical crack conditions[END_REF][START_REF] Wilson | A phase-field model for fracture in piezoelectric ceramics[END_REF], thermal and drying cracks [START_REF] Maurini | Crack patterns obtained by unidirectional drying of a colloidal suspension in a capillary tube: experiments and numerical simulations using a two-dimensional variational approach[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF], or hydraulic fracturing [START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF][START_REF] Wheeler | An augmented-lagrangian method for the phase-field approach for pressurized fractures[END_REF][START_REF] Chukwudozie | Application of the Variational Fracture Model to Hydraulic Fracturing in Poroelastic Media[END_REF][START_REF] Wilson | Phase-field modeling of hydraulic fracture[END_REF] to name a few. They have been extended to account for dynamic effects [START_REF] Larsen | Existence of solutions to a regularized model of dynamic fracture[END_REF][START_REF] Bourdin | A time-discrete model for dynamic fracture based on crack regularization[END_REF][START_REF] Borden | A phase-field description of dynamic brittle fracture[END_REF][START_REF] Hofacker | A phase field model of dynamic fracture: Robust field updates for the analysis of complex crack patterns[END_REF], ductile behavior [START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Miehe | Phase field modeling of fracture in multi-physics problems. Part II. coupled brittle-to-ductile failure criteria and crack propagation in thermo-elastic-plastic solids[END_REF][START_REF] Ambati | Phase-field modeling of ductile fracture[END_REF], cohesive effects [START_REF] Crismale | Viscous approximation of quasistatic evolutions for a coupled elastoplastic-damage model[END_REF][START_REF] Conti | Phase field approximation of cohesive fracture models[END_REF][START_REF] Freddi | Numerical insight of a variational smeared approach to cohesive fracture[END_REF], large deformations [START_REF] Ambati | A phase-field model for ductile fracture at finite strains and its experimental verification[END_REF][START_REF] Miehe | Phase field modeling of ductile fracture at finite strains. a variational gradient-extended plasticity-damage theory[END_REF][START_REF] Borden | A phasefield formulation for fracture in ductile materials: Finite deformation balance law derivation, plastic degradation, and stress triaxiality effects[END_REF], or anisotropy [START_REF] Li | Phase-field modeling and simulation of fracture in brittle materials with strongly anisotropic surface energy[END_REF], for instance. Although phase-field models were originally conceived as approximations of Francfort and Marigo's variational approach to fracture in the vanishing limit of their regularization parameter, a growing body of literature is concerned with their links with gradient damage models [START_REF] Frémond | Damage, gradient of damage and principle of virtual power[END_REF][START_REF] Lorentz | Analysis of non-local models through energetic formulations[END_REF].
In this setting, the regularization parameter is kept fixed and interpreted as a material's internal length [START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF][START_REF] Freddi | Regularized variational theories of fracture: A unified approach[END_REF][START_REF] Del | A variational approach to fracture and other inelastic phenomena[END_REF]. In particular, [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF] proposed an evolution principle for an Ambrosio-Tortorelli like energy based on irreversibility, stability and energy balance. This approach, which we refer to as variational phase-field models, introduces a critical stress proportional to 1/√ℓ, where ℓ is the internal length. As observed in [START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF][START_REF] Nguyen | On the choice of parameters in the phase field method for simulating crack initiation with experimental validation[END_REF], it can potentially reconcile stress and toughness criteria for crack nucleation, recover pertinent size effects at small and large length-scales, and provide a robust and relatively simple approach to model crack propagation in complex two- and three-dimensional settings. However, the few studies providing experimental verifications [START_REF] Pham | Experimental validation of a phase-field model for fracture[END_REF][START_REF] Nguyen | On the choice of parameters in the phase field method for simulating crack initiation with experimental validation[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF] are still insufficient to fully support this conjecture. The goal of this chapter is precisely to provide such evidence, focusing on nucleation and size effects for mode-I cracks. We provide quantitative comparisons of nucleation loads near stress concentrations and singularities with published experimental results for a range of materials. We show that variational phase-field models can reconcile strength and toughness thresholds and account for scale effects at the structural and the material length-scale. In passing, we leverage the predictive power of our approach to propose a new way to measure a material's tensile strength from the nucleation load of a crack near a stress concentration or a weak singularity. In this study, we focus solely on the identification of the critical stress at the first crack nucleation event and are not concerned with the post-critical fracture behavior. The chapter is organized as follows: in Section 2.1, we introduce variational phase-field models and recall some of their properties. Section 2.2 focuses on the links between stress singularities or concentrations and crack nucleation in these models. We provide validation and verification results for nucleation induced by stress singularities using V-shaped notches, and concentrations using U-notches. Section 2.3 is concerned with shape and size effects.
We investigate the role of the internal length on nucleation near a defect, focusing on an elliptical cavity and a mode-I crack, and discussing scale effects at the material and structural length scales.

2.1 Variational phase-field models

We start by recalling some important properties of variational phase-field models, focusing first on their construction as an approximation method for Francfort and Marigo's variational approach to fracture, then on their alternative formulation and interpretation as gradient damage models.

2.1.1 Regularization of the Francfort-Marigo fracture energy

Consider a perfectly brittle material with Hooke's law A and critical elastic energy release rate G_c occupying a region Ω ⊂ R^n, subject to a time-dependent boundary displacement ū(t) on a part ∂_D Ω of its boundary and stress-free on the remainder ∂_N Ω. In the variational approach to fracture, the quasi-static equilibrium displacement u_i and crack set Γ_i at a given discrete time step t_i are given by the minimization problem (see also [START_REF] Bourdin | The variational approach to fracture[END_REF])

(u_i, Γ_i) = argmin_{u = ū_i on ∂_D Ω, Γ ⊃ Γ_{i−1}} E(u, Γ) := ∫_{Ω\Γ} (1/2) Ae(u) · e(u) dx + G_c H^{n−1}(Γ ∩ Ω \ ∂_N Ω),   (2.1)

where H^{n−1}(Γ) denotes the Hausdorff (n−1)-dimensional measure of the unknown crack Γ, i.e. its aggregate length in two dimensions or surface area in three dimensions, and e(u) := (1/2)(∇u + ∇^T u) denotes the symmetrized gradient of u. Because in (2.1) the crack geometry Γ is unknown, special numerical methods had to be crafted. Various approaches based for instance on adaptive or discontinuous finite elements were introduced [START_REF] Bourdin | Implementation of an adaptive finite-element approximation of the Mumford-Shah functional[END_REF][START_REF] Giacomini | A discontinuous finite element approximation of quasi-static growth of brittle fractures[END_REF][START_REF] Fraternali | Free discontinuity finite element models in two-dimensions for inplane crack problems[END_REF]. Variational phase-field methods take their roots in Ambrosio and Tortorelli's regularization of the Mumford-Shah problem in image processing [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF], adapted to brittle fracture in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF]. In this framework, a regularized energy E_ℓ depending on a regularization length ℓ > 0 and a "phase-field" variable α taking its values in [0, 1] is introduced. A broad class of such functionals was introduced in [START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF]. They are

E_ℓ(u, α) = ∫_Ω (a(α) + η)/2 · Ae(u) · e(u) dx + G_c/(4 c_w) ∫_Ω ( w(α)/ℓ + ℓ |∇α|² ) dx,   (2.2)

where a and w are continuous monotonic functions such that a(0) = 1, a(1) = 0, w(0) = 0, and w(1) = 1, η = o(ℓ), and c_w := ∫_0^1 √(w(s)) ds is a normalization parameter. The approximation of E by E_ℓ takes place within the framework of Γ-convergence (see [START_REF] Maso | An introduction to Γ-convergence[END_REF][START_REF] Braides | Gamma-convergence for Beginners[END_REF] for instance). More precisely, if E_ℓ Γ-converges to E, then the global minimizers of E_ℓ converge to those of E.
The Γ-convergence of a broad class of energies, including the ones above, was achieved with various degrees of refinement going from static scalar elasticity to time-discrete and time-continuous quasi-static evolutions in linearized elasticity, and their finite element discretizations [START_REF] Bellettini | Discrete approximation of a free discontinuity problem[END_REF][START_REF] Bourdin | Image segmentation with a finite element method[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF][START_REF] Giacomini | A discontinuous finite element approximation of quasi-static growth of brittle fractures[END_REF][START_REF] Chambolle | An approximation result for special functions with bounded variations[END_REF][START_REF] Chambolle | Addendum to "An Approximation Result for Special Functions with Bounded Deformation[END_REF][START_REF] Giacomini | Ambrosio-Tortorelli approximation of quasi-static evolution of brittle fractures[END_REF][START_REF] Burke | An adaptive finite element approximation of a variational model of brittle fracture[END_REF][START_REF] Burke | An adaptive finite element approximation of a generalized Ambrosio-Tortorelli functional[END_REF][START_REF] Iurlano | A density result for gsbd and its application to the approximation of brittle fracture energies[END_REF]. Throughout this chapter, we focus on two specific models:

E_ℓ(u, α) = ∫_Ω ((1 − α)² + η)/2 · Ae(u) · e(u) dx + (G_c/2) ∫_Ω ( α²/ℓ + ℓ |∇α|² ) dx,   (AT_2)

introduced in [START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] for the Mumford-Shah problem and in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] for brittle fracture, and

E_ℓ(u, α) = ∫_Ω ((1 − α)² + η)/2 · Ae(u) · e(u) dx + (3G_c/8) ∫_Ω ( α/ℓ + ℓ |∇α|² ) dx   (AT_1)

used in [START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF]. The "surfing" problem introduced in [START_REF] Hossain | Effective toughness of heterogeneous media[END_REF] consists in applying a translating boundary displacement on ∂Ω given by ū(x, y) = ū_I(x − Vt, y), where ū_I denotes the asymptotic far-field displacement field associated with a mode-I crack along the x-axis with tip at (0, 0), V is a prescribed loading "velocity", and t a loading parameter ("time"). Figure 2.1(left) reports the results of this experiment for a domain with an initial crack Γ_0 = [0, l_0] × {0}, for several values of ℓ. The AT_1 model is used, assuming plane stress conditions, and the mesh size h is adjusted so that ℓ/h = 5, keeping the "effective" numerical toughness G_eff := G_c (1 + h/(4 c_w ℓ)) fixed (see [START_REF] Bourdin | The variational approach to fracture[END_REF]). The Poisson ratio is ν = 0.3, the Young's modulus is E = 1, the fracture toughness is G_c = 1.5, and the loading rate V = 4. As expected, after a transition stage, the crack length depends linearly on the loading parameter with slope 3.99, 4.00 and 4.01 for ℓ = 0.1, 0.05 and 0.025 respectively. The elastic energy release rate G, computed using the G_θ method [START_REF] Destuynder | Sur une interprétation mathématique de l'intégrale de Rice en théorie de la rupture fragile[END_REF][START_REF] Sicsic | From gradient damage laws to Griffith's theory of crack propagation[END_REF][START_REF] Li | Gradient damage modeling of brittle fracture in an explicit dynamics context[END_REF] is very close to G_eff.
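For reference, the normalization constants entering (2.2) and the resulting mesh-induced toughness amplification quoted above are straightforward to evaluate (a quick check, using the parameters reported for the surfing test):

```latex
c_w^{\mathrm{AT}_1} = \int_0^1 \sqrt{s}\,ds = \tfrac{2}{3}, \qquad
c_w^{\mathrm{AT}_2} = \int_0^1 s\,ds = \tfrac{1}{2}, \qquad
\frac{G_{\mathrm{eff}}}{G_c} = 1 + \frac{h}{4 c_w \ell}
  \;=\; 1 + \frac{1/5}{4\cdot 2/3} = 1.075
  \quad (\mathrm{AT}_1,\ h/\ell = 1/5).
```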
Even though Γ-convergence only mandates that the elastic energy release rate in the regularized energy converges to that of Griffith as ℓ → 0, we observe that as long as ℓ is "compatible" with the discretization size and domain geometry, its influence on crack propagation is insignificant. Similar observations were reported in [START_REF] Klinsmann | An assessment of the phase field formulation for crack growth[END_REF][START_REF] Zhang | Numerical evaluation of the phasefield model for brittle fracture with emphasis on the length scale[END_REF][START_REF] Pham | Experimental validation of a phase-field model for fracture[END_REF]. Figure 2.1(right) repeats the same experiment for a crack propagating along a circular path. Here, the boundary displacement is given by Muskhelishvili's exact solution for a crack propagating in mode-I along a circular path [START_REF] Muskhelishvili | Some Basic Problems of the Mathematical Theory of Elasticity: Fundamental Equations, Plane Theory of Elasticity, Torsion, and Bending (translated from Russian)[END_REF]. The Young's modulus, fracture toughness, and loading rate are set to 1. Again, we see that even for a fixed regularization length, the crack obeys Griffith's criterion. When crack nucleation is involved, the picture is considerably different. Consider a one-dimensional domain of length L, fixed at one end and submitted to an applied displacement ū = eL at the other end. In the absence of an elastic singularity, LEFM is incapable of predicting crack nucleation here, and predicts a structure capable of supporting arbitrarily large loads without failing. A quick calculation (reproduced after this paragraph) shows that the global minimizer of (2.1) corresponds to an uncracked elastic solution if e < e_c := √(2G_c/(EL)), while at e = e_c, a single crack nucleates at an arbitrary location (see [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF]). The failure stress is σ_c = √(2G_cE/L), which is consistent with the scaling law σ_c = O(1/√L) mentioned in the introduction. The uncracked configuration is always a stable local minimizer of (2.1), so that if local minimization of (2.1) is considered, nucleation never takes place. Just as before, one can argue that for lack of a critical stress, an evolution governed by the generalized Griffith energy (2.1) does not properly account for nucleation and scaling laws. When performing global minimization of (2.2) using the backtracking algorithm of [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF] for instance, a single crack nucleates at an ℓ-dependent load. As predicted by the Γ-convergence of E_ℓ to E, the critical stress at nucleation converges to √(2G_cE/L) as ℓ → 0. Local minimization of (2.2) using the alternate minimization algorithm of [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF], or presumably any gradient-based monotonically decreasing scheme, leads to the nucleation of a single crack at a critical load e_c, associated with a critical stress σ_c = O(√(G_cE/ℓ)), as described in [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF] for example.
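The "quick calculation" referred to above amounts to comparing the energies of the two competing global minimizers (bar of unit cross-section):

```latex
% Uncracked elastic state vs. fully cracked state:
E_{\mathrm{el}}(e) = \tfrac12 E\,e^2 L, \qquad E_{\mathrm{cracked}} = G_c .
% The uncracked state is the global minimizer as long as
\tfrac12 E e^2 L \le G_c
\quad\Longleftrightarrow\quad
e \le e_c := \sqrt{\frac{2G_c}{EL}},
% and the associated failure stress is
\sigma_c = E\,e_c = \sqrt{\frac{2 G_c E}{L}} = O\!\left(1/\sqrt{L}\right).
```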
In the limit of vanishing ℓ, local and global minimization of (2.2) therefore inherit the weaknesses of Griffith-like theories when dealing with scaling properties and crack nucleation.

2.1.2 Variational phase-field models as gradient damage models

More recent works have sought to leverage the link between σ_c and ℓ. Ambrosio-Tortorelli functionals are then seen as the free energy of a gradient damage model [START_REF] Frémond | Damage, gradient of damage and principle of virtual power[END_REF][START_REF] Lorentz | Analysis of non-local models through energetic formulations[END_REF][START_REF] Benallal | Bifurcation and stability issues in gradient theories with softening[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF] where α plays the role of a scalar damage field. In [START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF], a thorough investigation of a one-dimensional tension problem led to interpreting ℓ as a material's internal or characteristic length linked to a material's tensile strength. An overview of this latter approach, which is the one adopted in the rest of this work, is given below. In all that follows, we focus on a time-discrete evolution but refer the reader to [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Marigo | An overview of the modelling of fracture by gradient damage models[END_REF] for a time-continuous formulation which can be justified within the framework of generalized standard materials [START_REF] Halphen | Sur les matériaux standard généralisés[END_REF] and rate-independent processes [START_REF] Mielke | Evolution of rate-independent systems[END_REF]. At any time step i > 1, the sets of admissible displacement and damage fields C_i and D_i, equipped with their natural H^1 norm, are

C_i = { u ∈ H^1(Ω) : u = ū_i on ∂_D Ω },  D_i = { β ∈ H^1(Ω) : α_{i−1}(x) ≤ β(x) ≤ 1, ∀x ∈ Ω },

where the constraint α_{i−1}(x) ≤ β(x) ≤ 1 in the definition of D_i mandates that the damage be an increasing function of time, accounting for the irreversible nature of the damage process. The damage and displacement fields (u_i, α_i) are then local minimizers of the energy E_ℓ, i.e. there exists h_i > 0 such that

for all (v, β) ∈ C_i × D_i such that ‖(v, β) − (u_i, α_i)‖ ≤ h_i,  E_ℓ(u_i, α_i) ≤ E_ℓ(v, β),   (2.3)

where ‖·‖ denotes the natural H^1 norm of C_i × D_i. We briefly summarize the solution of the uniaxial tension of a homogeneous bar [START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF][START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF], referring the reader to the recent review [START_REF] Marigo | An overview of the modelling of fracture by gradient damage models[END_REF] for further details: As one increases the applied strain, the damage field remains 0 and the stress field remains spatially constant, until the stress reaches the elastic limit

σ_e = √( G_c E w'(0) / (2 c_w ℓ s'(0)) ),   (2.4)

where E is the Young modulus of the undamaged material, and s(α) = 1/a(α).
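For the reader's convenience, here is a short sketch of how (2.4) follows from the homogeneous response of the bar (following the one-dimensional analysis cited above; gradient term and residual stiffness η omitted):

```latex
% Energy density of a homogeneous state with strain \varepsilon and damage \alpha:
W(\varepsilon,\alpha) = \frac{a(\alpha)}{2}\,E\,\varepsilon^2
  + \frac{G_c}{4 c_w \ell}\, w(\alpha).
% Damage evolves when \partial W/\partial\alpha = 0, i.e.
\frac{a'(\alpha)}{2}\,E\,\varepsilon^2 + \frac{G_c}{4c_w\ell}\,w'(\alpha) = 0 .
% With \sigma = a(\alpha)E\varepsilon, s(\alpha)=1/a(\alpha) and a'(\alpha) = -s'(\alpha)/s(\alpha)^2,
% the stress on the damage-yield surface satisfies
\sigma^2(\alpha) = \frac{G_c E}{2 c_w \ell}\,\frac{w'(\alpha)}{s'(\alpha)},
% and the elastic limit (2.4) is its value at \alpha = 0:
\sigma_e = \sqrt{\frac{G_c E\, w'(0)}{2 c_w \ell\, s'(0)}}.
% For AT1 (w(\alpha)=\alpha,\ a(\alpha)=(1-\alpha)^2,\ c_w=2/3): \sigma_e=\sqrt{3G_cE/(8\ell)};
% for AT2 (w(\alpha)=\alpha^2): w'(0)=0 and hence \sigma_e=0, consistent with Table 2.1 below.
```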
If the applied displacement is increased further, the damage field increases but remains spatially constant. Stress hardening is observed until the peak stress σ_c, followed by stress softening. A stability analysis shows that for long enough domains (i.e. when L ≫ ℓ), the homogeneous solution is never stable in the stress-softening phase, and that a snapback to a fully localized solution such that max_{x∈(0,L)} α(x) = 1 is observed. The profile of the localized solution and the width D of the localization can be derived explicitly from the functions a and w. With the choice of normalization of (2.2), the surface energy associated with the fully localized solution is exactly G_c and its elastic energy is 0, so that the overall response of the bar is that of a brittle material with toughness G_c and strength σ_c. Knowing the material's toughness G_c and the Young's modulus E, one can then adjust ℓ in such a way that the peak stress σ_c matches the nominal material strength. Let us denote by

ℓ_ch = G_c E′ / σ_c² = K_Ic² / σ_c²   (2.5)

the classical material characteristic length (see [START_REF] Rice | The mechanics of earthquake rupture[END_REF][START_REF] Falk | A critical evaluation of cohesive zone models of dynamic fracture[END_REF], for instance), where E′ = E in three dimensions and in plane stress, or E′ = E/(1 − ν²) in plane strain, and K_Ic = √(G_c E′) is the mode-I critical stress intensity factor. The identification above gives

ℓ_1 := (3/8) ℓ_ch ;  ℓ_2 := (27/256) ℓ_ch ,   (2.6)

for the AT_1 and AT_2 models, respectively. Table 2.1 summarizes the specific properties of the AT_1 and AT_2 models. The AT_1 model has some key conceptual and practical advantages over the AT_2 model used in previous works, which were leveraged in [START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF] for instance:
- It has a non-zero elastic limit, preventing diffuse damage at small loadings.
- The length D of the localization band is finite, so that equivalence with the Griffith energy is obtained even for a finite value of ℓ, and not only in the limit ℓ → 0, as predicted by Γ-convergence [START_REF] Sicsic | From gradient damage laws to Griffith's theory of crack propagation[END_REF].
- By remaining quadratic in the α and u variables, its numerical implementation using alternate minimizations originally introduced in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] is very efficient.

Model | w(α) | a(α)    | c_w | σ_e              | σ_c                    | D  | ℓ_ch/ℓ
AT_1  | α    | (1 − α)² | 2/3 | √(3G_cE′/(8ℓ))   | √(3G_cE′/(8ℓ))         | 4ℓ | 8/3
AT_2  | α²   | (1 − α)² | 1/2 | 0                | (3/16)√(3G_cE′/ℓ)      | ∞  | 256/27

Table 2.1: Properties of the gradient damage models considered in this work: the elastic limit σ_e, the material strength σ_c, the width of the damage band D, and the conventional material length ℓ_ch defined in (2.5). We use the classical convention E′ = E in three dimensions and in plane stress, and E′ = E/(1 − ν²) in plane strain.

In all the numerical simulations presented below, the energy (2.2) is discretized using linear Lagrange finite elements, and minimization is performed by alternating minimization with respect to u and α.
Minimization with respect to u is a simple linear problem solved using preconditioned conjugate gradients, while constrained minimization with respect to α is reformulated as a variational inequality and implemented using the variational inequality solvers provided by PETSc [START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | PETSc Web page[END_REF]. All computations were performed using the open source implementations mef90 and gradient-damage.

2.2 Effect of stress concentrations

The discussion above suggests that variational phase-field models, as presented in Section 2.1.2, can account for strength and toughness criteria simultaneously, on an idealized geometry. We propose to investigate this claim further by focusing on more general geometries: a V-shaped notch to illustrate nucleation near stress singularities and a U-shaped notch for stress concentrations. There is a wealth of experimental literature on crack initiation in such geometries using three-point bending (TPB), four-point bending (FPB), single or double edge notch tension (SENT and DENT), allowing us to provide qualitative validation and verification simulations of the critical load at nucleation.

2.2.1 Initiation near a weak stress singularity: the V-notch

Consider a V-shaped notch in a linear elastic isotropic homogeneous material. Let (r, θ) be the polar coordinate system emanating from the notch tip with θ = 0 corresponding to the notch symmetry axis, shown in Figure 2.2(left). Assuming that the notch lips Γ+ ∪ Γ− are stress-free, the mode-I component of the singular part of the stress field in plane strain is given in [START_REF] Leguillon | Computation of Singular Solutions in Elliptic Problems and Elasticity[END_REF]:

σ_θθ = k r^{λ−1} F(θ),  σ_rr = k r^{λ−1} (F''(θ) + (λ+1)F(θ)) / (λ(λ+1)),  σ_rθ = −k r^{λ−1} F'(θ)/(λ+1),   (2.7)

where
F(θ) = (2π)^{λ−1} ( cos((1+λ)θ) − f(λ, ω) cos((1−λ)θ) ) / ( 1 − f(λ, ω) ),   (2.8)
and
f(λ, ω) = (1+λ) sin((1+λ)(π−ω)) / ( (1−λ) sin((1−λ)(π−ω)) ),   (2.9)

and the exponent of the singularity λ ∈ [1/2, 1], which depends only on the notch angle ω, is the solution of the notch eigenvalue equation (2.10). The generalized stress intensity factor k is defined by

k := lim_{r→0} σ_θθ(r, 0) (2πr)^{1−λ}.   (2.11)

Note that this definition differs from the one often encountered in the literature by a factor (2π)^{λ−1}, so that when ω = 0 (i.e. when the notch degenerates into a crack), k corresponds to the mode-I stress intensity factor, whereas when ω = π/2, k is the tangential stress, and that the physical dimension of k, [k] ≡ N·m^{−λ−1}, is not a constant but depends on the singularity power λ. If ω < π/2 (i.e. λ < 1), the stress field is singular at the notch tip so that a nucleation criterion based on maximum pointwise stress will predict crack nucleation for any arbitrarily small loading. Yet, as long as ω > 0 (i.e. λ > 1/2), the exponent of the singularity is sub-critical in the sense of Griffith, so that LEFM forbids crack nucleation, regardless of the magnitude of the loading. The displacement field associated with the singular stress field (2.7) for a unit generalized stress intensity factor is

ū_r = (r^λ/E) [ (1−ν²)F''(θ) + (λ+1)(1 − νλ − ν²(λ+1)) F(θ) ] / (λ²(λ+1)),
ū_θ = (r^λ/E) [ (1−ν²)F'''(θ) + (2(1+ν)λ² + (λ+1)(1 − νλ − ν²(λ+1))) F'(θ) ] / (λ²(1−λ²)).   (2.12)

In the mode-I Pac-Man test, we apply a boundary displacement on the outer edge of the domain ∂_D Ω of the form tū on both components of u, t being a monotonically increasing loading parameter. We performed series of numerical simulations varying the notch angle ω and the regularization parameter ℓ for the AT_1 and AT_2 models.
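To make the alternate minimization scheme described above concrete, here is a minimal, self-contained one-dimensional sketch for the AT_1 model. It is purely illustrative (it is not the mef90/gradient-damage implementation): parameters are arbitrary, and a generic bound-constrained optimizer replaces PETSc's variational inequality solvers.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative parameters (not from the text): 1-D bar of length L in traction, AT1 model.
E, Gc, ell, L = 1.0, 1.0, 0.1, 1.0
n, eta, cw = 101, 1e-6, 2.0 / 3.0      # nodes, residual stiffness, c_w for w(alpha)=alpha
h = L / (n - 1)

def energy(u, alpha):
    """Discrete 1-D version of (2.2): elastic term + (3Gc/8)*(alpha/ell + ell*|alpha'|^2)."""
    strain = np.diff(u) / h
    a_e = 0.5 * (alpha[:-1] + alpha[1:])            # element-wise damage
    dalpha = np.diff(alpha) / h
    elastic = 0.5 * E * ((1.0 - a_e) ** 2 + eta) * strain ** 2
    surface = Gc / (4.0 * cw) * (a_e / ell + ell * dalpha ** 2)
    return h * np.sum(elastic + surface)

def solve_u(alpha, end_disp):
    """Equilibrium at fixed damage: tridiagonal linear solve, u(0)=0, u(L)=end_disp."""
    a_e = 0.5 * (alpha[:-1] + alpha[1:])
    k_e = E * ((1.0 - a_e) ** 2 + eta) / h
    K = np.diag(np.r_[k_e, 0.0] + np.r_[0.0, k_e]) - np.diag(k_e, 1) - np.diag(k_e, -1)
    u = np.zeros(n)
    u[-1] = end_disp
    free = np.arange(1, n - 1)
    rhs = -K[free, -1] * end_disp
    u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
    return u

def solve_alpha(u, alpha_guess, alpha_lb):
    """Damage update at fixed u: bound-constrained minimization (irreversibility)."""
    bounds = [(lb, 1.0) for lb in alpha_lb]
    res = minimize(lambda a: energy(u, a), alpha_guess, bounds=bounds, method="L-BFGS-B")
    return res.x

alpha = np.zeros(n)
for load in np.linspace(0.0, 2.0, 21):              # monotonically increasing end displacement
    alpha_lb = alpha.copy()                         # irreversibility bound: previous time step
    for _ in range(100):                            # alternate minimization loop
        u = solve_u(alpha, load)
        alpha_new = solve_alpha(u, alpha, alpha_lb)
        converged = np.max(np.abs(alpha_new - alpha)) < 1e-4
        alpha = alpha_new
        if converged:
            break
    print(f"load {load:4.2f}  max damage {alpha.max():.3f}  energy {energy(u, alpha):.4f}")
```

With these (arbitrary) parameters, the bar responds elastically, damage then grows, and a single localized band appears near the AT_1 peak stress, mimicking the one-dimensional behavior summarized in Section 2.1.2.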
Up to a rescaling and without loss of generality, it is always possible to assume that E = 1 and G_c = 1. The Poisson ratio was set to ν = 0.3. We either prescribed the value of the damage field on Γ+ ∪ Γ− to 1 (we refer to this as "damaged notch conditions") or left it free ("undamaged notch conditions"). The mesh size was kept at a fixed ratio of the internal length, h = ℓ/5. For "small" enough loadings, we observe an elastic or nearly elastic phase during which the damage field remains 0 or near 0 away from an area of radius o(ℓ) near the notch tip. Then, for some loading t = t_c, we observed the initiation of a "large" crack associated with a sudden jump of the elastic and surface energies. Figure 2.3 shows a typical mesh, the damage field immediately before and after nucleation of a macroscopic crack, and the energetic signature of the nucleation event. Figure 2.4 shows that up to the critical loading, the generalized stress intensity factor can be accurately recovered by averaging σ_θθ(r, 0)/(2πr)^{λ−1} along the symmetry axis of the domain, provided that the region r ≤ 2ℓ be excluded. Figure 2.5(left) shows the influence of the internal length on the critical generalized stress intensity factor for a sharp notch (ω = 0.18°) for the AT_1 and AT_2 models, using damaged and undamaged notch boundary conditions on the damage field. In this case, with the normalization (2.11), the generalized stress intensity factor coincides with the standard mode-I stress intensity factor K_Ic. As suggested by the surfing experiment in Section 2.1.1, with damaged notch conditions the critical value k_c is very close to K_Ic = √(G_c E′). As reported previously in [START_REF] Klinsmann | An assessment of the phase field formulation for crack growth[END_REF] for instance, undamaged notch conditions lead to overestimating the critical load. We speculate that this is because with undamaged notch conditions, the energy barrier associated with bifurcation from an undamaged (or partially damaged) state to a fully localized state needs to be overcome. As expected, this energy barrier is larger for the AT_1 model than for the AT_2 model, for which large damaged areas ahead of the notch tip are observed. For flat notches (2ω = 179.64°), as shown in Figure 2.5(right), the generalized stress intensity factor k takes the dimension of a stress, and crack nucleation is observed when k_c reaches the ℓ-dependent value σ_c given in Table 2.1, i.e. when σ_θθ|_{θ=0} = σ_c, as in the uniaxial tension problem. In this case the type of damage boundary condition on the notch seems to have little influence. For intermediate values of ω, we observe in Figure 2.6 that the critical generalized stress intensity factor varies smoothly and monotonically between its extreme values and remains very close to K_Ic for opening angles as high as 30°, which justifies the common numerical practice of replacing initial cracks with slightly open sharp notches and damaged notch boundary conditions. See Table 2.3 for numerical data.

Validation

For intermediate values 0 < 2ω < π, we focus on validation against experiments from the literature based on measurements of the generalized stress intensity factor at a V-shaped notch.
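The recovery of the generalized stress intensity factor described above is a one-line post-processing step; a sketch (the array names are hypothetical):

```python
import numpy as np

def generalized_sif(r, sigma_tt, lam, ell):
    """Estimate k by averaging sigma_tt(r, 0) * (2*pi*r)**(1 - lam) along the notch
    symmetry axis, excluding the damaged region r <= 2*ell (cf. definition (2.11))."""
    mask = r > 2.0 * ell
    return np.mean(sigma_tt[mask] * (2.0 * np.pi * r[mask]) ** (1.0 - lam))
```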
Data from single edge notch tension (SENT) tests of soft annealed tool steel (AISI O1 at −50 °C) [START_REF] Strandberg | Fracture at V-notches with contained plasticity[END_REF], four point bending (FPB) experiments of PVC foams (Divinycell® H80, H100, H130, and H200) [START_REF] Grenestedt | On cracks emanating from wedges in expanded PVC foam[END_REF], and double edge notch tension (DENT) experiments of poly methyl methacrylate (PMMA) and Duraluminium [START_REF] Seweryn | Brittle fracture criterion for structures with sharp notches[END_REF], were compiled in [START_REF] Gómez | A fracture criterion for sharp V-notched samples[END_REF]. We performed a series of numerical simulations of Pac-Man tests using the material properties reported in [START_REF] Gómez | A fracture criterion for sharp V-notched samples[END_REF] and listed in Table 2.2, which gives, for each material, E [MPa], ν, K_Ic [MPa·√m], σ_c [MPa], and the source. In all cases, the internal length was computed using (2.6). We compare our numerical simulations with the experimental values reported in the literature for V-notches with varying aperture; the definition (2.11) of k is used. For the AT_1 model, we observe a good agreement for the entire range of notch openings, as long as damaged notch conditions are used for small notch angles and undamaged notch conditions for large notch angles. For the AT_2 model, the same is true, but the agreement is not as good for large notch angles, due to the presence of large areas of distributed damage prior to crack nucleation. The numerical values of the critical generalized stress intensity factors for the AT_1 model and the experiments from the literature are included in Tables 2.4, 2.5, 2.6, and 2.7, using the convention of (2.11) for k. As suggested by Figure 2.5 and reported in the literature (see [START_REF] Klinsmann | An assessment of the phase field formulation for crack growth[END_REF]), nucleation is best captured if damaged notch boundary conditions are used for sharp notches and undamaged notch conditions for flat ones. These examples strongly suggest that variational phase-field models of fracture are capable of predicting mode-I nucleation in stress- and toughness-dominated situations, as seen above, but also in the intermediate cases. Conceptually, toughness and strength (or equivalently internal length) could be measured by matching generalized stress intensity factors in experiments and simulations. When doing so, however, extreme care has to be exerted in order to ensure that the structural geometry has no impact on the measured generalized stress intensity factor. Similar experiments were performed in [START_REF] Dunn | Fracture initiation at sharp notches: correlation using critical stress intensities[END_REF][START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] for three and four point bending experiments on PMMA and Aluminum oxide-Zirconia ceramic samples. While the authors kept the notch angle fixed, they performed three and four point bending experiments or varied the relative depth of the notch as a fraction of the sample height (see Figure 2.9).
Figure 2.9: Schematic of the geometry and loading in the four point bending experiments of [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] (left) and three point bending experiments of [START_REF] Dunn | Fracture initiation at sharp notches: correlation using critical stress intensities[END_REF] (right). The geometry of the three point bending experiment of [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] is identical to that of their four point bending, up to the location of the loading devices.

Figure 2.10 compares numerical values of the generalized stress intensity factor using the AT_1 model with experimental measurements, and the actual numerical values are included in Tables 2.8 and 2.9. For the Aluminum oxide-Zirconia ceramic, we observe that the absolute error between measurement and numerical prediction is typically well within the standard deviation of the experimental measurement. As expected, damaged notch boundary conditions lead to a better approximation of k_c for small angles, and undamaged notches are better for larger values of ω. For the three point bending experiments in PMMA of [START_REF] Dunn | Fracture initiation at sharp notches: correlation using critical stress intensities[END_REF] later reported in [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF], the experimental results suggest that the relative depth a/h of the notch has a significant impact on k_c. We therefore performed full-domain numerical simulations using the geometry and loading from the literature, and compared the critical force upon which a crack nucleates in experiments and simulations. All computations were performed using the AT_1 model in plane strain with undamaged notch boundary conditions. Figure 2.11 compares the experimental and simulated values of the critical load at failure, listed in Tables 2.10 and 2.11. These simulations show that a robust quantitative prediction of the failure load in geometries involving a broad range of stress singularity powers can be achieved numerically with the AT_1 model, provided that the internal length be computed using (2.6), which involves only material properties. In other words, our approach is capable of predicting crack nucleation near a weak stress singularity using only elastic properties, the fracture toughness G_c, the tensile strength σ_c, and the local energy minimization principle (2.3). In light of Figure 2.11, we suggest that both toughness and tensile strength (or equivalently toughness and internal length) can be measured by matching full-domain or Pac-Man computations and experiments involving weak elastic singularities of various powers (TPB, FPB, SENT, DENT with varying notch depth or angle) instead of measuring σ_c directly. We expect that this approach will be much less sensitive to imperfections than the direct measurement of tensile strength, which is virtually impossible. Furthermore, since our criterion is not based on crack tip asymptotics, using full-domain computations does not require that the experiments be specially designed to isolate the notch tip singularity from structural scale deformations.
2.2.2 Initiation near a stress concentration: the U-notch

Crack nucleation in a U-shaped notch is another classical problem that has attracted a wealth of experimental and theoretical work. Consider a U-shaped notch of width ρ and length a ≫ ρ subject to a mode-I local loading (see Figure 2.12 for a description of the notch geometry in the context of a double edge notch tension sample). Assuming "smooth" loadings and applied boundary displacements, elliptic regularity mandates that the stress field be non-singular near the notch tip, provided that ρ > 0. Within the realm of Griffith fracture, this of course makes crack nucleation impossible. As is the case for the V-notch, introducing a nucleation principle based on a critical stress is also not satisfying, as it will lead to a nucleation load going to 0 as ρ → 0, instead of converging to that of an infinitely thin crack given by Griffith's criterion. There is a significant body of literature on "notch mechanics", seeking to address this problem by introducing stress-based criteria, generalized stress intensity factors, or intrinsic material lengths and cohesive zones. A survey of such models, compared with experiments on a wide range of brittle materials, is given in [START_REF] Gómez | Failure criteria for linear elastic materials with U-notches[END_REF]. In what follows, we study crack nucleation near stress concentrations in the AT_1 and AT_2 models and compare with the experiments gathered in [START_REF] Gómez | Failure criteria for linear elastic materials with U-notches[END_REF]. The core of their analysis consists in defining a generalized stress intensity factor

K_U = K_t σ^∞ √(πρ/4),   (2.13)

where σ^∞ is the applied macroscopic stress and K_t, the notch stress concentration factor, is a parameter depending on the local (a and ρ), as well as global, sample geometry and loading. Through a dimensional analysis, they studied the dependence of the critical generalized stress intensity factor at the onset of fracture. We considered double edge notch tension geometries with relative notch dimensions of …, 0.25, and 0.5, for which the value K_t, computed in [START_REF] Lazzarin | A generalized stress intensity factor to be applied to rounded v-shaped notches[END_REF], is respectively 5.33, 7.26, and 11.12. In each case, we leveraged the symmetries of the problem by performing computations with the AT_1 and AT_2 models on a quarter of the domain for a number of values of the internal length corresponding to ρ/ℓ_ch between 0.05 and 20. In all cases, undamaged notch boundary conditions were used. In Figure 2.13, we overlay the outcome of our simulations over the experimental results gathered in [START_REF] Gómez | Failure criteria for linear elastic materials with U-notches[END_REF]. As for the V-notch, we observe that the AT_2 model performs poorly for weak stress concentrations (large values of ρ/ℓ_ch), as the lack of an elastic phase leads to the creation of large partially damaged areas. For sharp notches (ρ ≃ 0), our simulations concur with the experiments in predicting crack nucleation when K_U = K_Ic. As seen earlier, the AT_1 model slightly overestimates the critical load in this regime when undamaged notch boundary conditions are used.
In light of Figure 2.13, we claim that numerical simulations based on the variational phase-field model AT_1 provide a simple way to predict crack nucleation that does not require the computation of notch stress concentration factors K_t or the introduction of an ad-hoc criterion.

2.3 Size effects in variational phase-field models

Variational phase-field models are characterized by the intrinsic length ℓ, or ℓ_ch. In this section, we show that this length-scale introduces physically pertinent scale effects, corroborating its interpretation as a material length. To this end, we study the nucleation of a crack in the uniaxial traction of a plate (−W, W) × (−L, L) with a centered elliptical hole with semi-axes a and ρa (0 ≤ ρ ≤ 1) along the x- and y-axes respectively, see Figure 2.14. In Section 2.3.1, we study the effect of the size and shape of the cavity, assumed to be small with respect to the dimensions of the plate (a ≪ W, L). In Section 2.3.2, we investigate material and structural size effects for a plate of finite width in the limit case of a perfect crack (ρ = 0). For a small hole (a ≪ W, L), up to a change of scale, the problem can be fully characterized by two dimensionless parameters: a/ℓ, and ρ. For a linear elastic and isotropic material occupying an infinite domain, a closed-form expression of the stress field as a function of the hole size and aspect ratio is given in [START_REF] Inglis | Stresses in plates due to the presence of cracks and sharp corners[END_REF]. The stress is maximum at the points A = (a, 0) and A′ = (−a, 0), where the radial stress is zero and the hoop stress is given by:

σ_max = t (1 + 2/ρ),   (2.14)

t denoting the applied tensile stress along the upper and lower edges of the domain, i.e. the applied macroscopic stress at infinity. We denote by ū the corresponding displacement field for t = 1, which is given in [START_REF] Gao | A general solution of an infinite elastic plate with an elliptic hole under biaxial loading[END_REF]. As for the case of a perfect bar, (2.14) exposes a fundamental issue: if ρ > 0, the stress remains finite, so that Griffith-based theories will only predict crack nucleation if ρ = 0. In that case the limit load given by Griffith's criterion for crack nucleation is

t = σ_G := √( G_c E′ / (aπ) ).   (2.15)

However, as ρ → 0, the stress becomes singular so that the critical tensile stress σ_c is exceeded for an infinitesimally small macroscopic stress t. Following the findings of the previous sections, we focus our attention on the AT_1 model only, and present numerical simulations assuming a Poisson ratio ν = 0.3 and plane-stress conditions. We perform our simulations in a domain of finite size, here a disk of radius R centered around the defect. Along the outer perimeter of the domain, we apply a boundary displacement u = tū, with ū the displacement field introduced above, and we use the macroscopic stress t as a loading parameter. Assuming a symmetric solution, we perform our computations on a quarter domain. For the circular case ρ = 1, we use a reference mesh size h = ℓ_min/10, where ℓ_min is the smallest value of the internal length in the set of simulations. For ρ < 1, we selectively refine the element size near the expected nucleation site (see Figure 2.14-right). In order to minimize the effect of the finite size of the domain, we set R = 100a. We performed numerical simulations varying the aspect ratio a/ℓ from 0.1 to 50 and the ellipticity ρ from 0.1 to 1.0.
In each case, we started from an undamaged state and monotonically increased the loading. In all numerical simulations, we observe two critical loadings t_e and t_c, the elastic limit and structural strength, respectively. For 0 ≤ t < t_e the solution is purely elastic, i.e. the damage field α remains identically 0 (see Figure 2.15-left). For t_e ≤ t < t_c, partial distributed damage is observed. The damage field takes its maximum value α_max < 1 near point A (see Figure 2.15-center). At t = t_c, a fully developed crack nucleates, then propagates for t > t_c (see Figure 2.15-right). As for the Pac-Man problem, we identify crack nucleation with a jump in surface energy, and focus on the loading at the onset of damage. From the one-dimensional problem of Section 2.1.2 and [START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF][START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF], we expect damage nucleation to take place when the maximum stress σ_max reaches the nominal material strength σ_c = √(3G_cE′/(8ℓ)) (see Table 2.1), i.e. for a critical load

t_e = ρ/(2 + ρ) σ_c = ρ/(2 + ρ) √(3G_cE′/(8ℓ)).   (2.16)

Figure 2.16-left confirms this expectation by comparing the ratio t_e/σ_c to its expected value ρ/(2 + ρ) for ρ ranging from 0.1 to 1. Figure 2.16-right highlights the absence of size effect on the damage nucleation load, by comparing t_e/σ_c for multiple values of a/ℓ while keeping ρ fixed at 0.1 and 1. Figure 2.17 focuses on the crack nucleation load t_c, showing its dependence on the defect shape (left) and size (right). Figure 2.17-right shows the case of a circular hole (ρ = 1) and of an elongated ellipse, which can be identified with a crack (ρ = 0.1). It clearly highlights a scale effect including three regimes:
i. For "small" holes (a ≪ ℓ), crack nucleation takes place when t = σ_c, as in the uniaxial traction of a perfect bar without the hole: the hole has virtually no effect on crack nucleation. In this regime the strength of a structure is completely determined by that of the constitutive material. Defects of this size do not reduce the structural strength and can be ignored at the macroscopic level.
ii. Holes with length of the order of the internal length (a = O(ℓ)) have a strong impact on the structural strength. In this regime the structural strength can be approximated by log(t_c/σ_c) = D log(a/ℓ) + c, (2.17) where D is a dimensionless coefficient depending on the defect shape. For a circular hole (ρ = 1), we have D ≈ −1/3.
iii. When a ≫ ℓ, the structural failure is completely determined by the stress distribution surrounding the defect. We observe that for weak stress singularities (ρ ≃ 1), nucleation takes place when the maximum stress reaches the elastic limit σ_e, whereas the behavior as ρ ≃ 0 is consistent with Griffith's criterion, i.e. the nucleation load scales as 1/√a.
Figure 2.17-right shows that the shape of the cavity has a significant influence on the critical load only in the latter regime, a ≫ ℓ. Indeed, for a/ℓ of order unity or smaller, the critical loads t_c for circular and highly elongated cavities are almost indistinguishable. This small sensitivity of the critical load to the shape is the result of the stress-smoothing effect of the damage field, which is characterized by a cut-off length of the order of ℓ.
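The two asymptotic estimates quoted in regimes i and iii are immediate to evaluate; a small helper (AT_1 conventions, purely illustrative values):

```python
import numpy as np

def elastic_limit_load(rho, sigma_c):
    """Damage-onset load t_e from (2.16): sigma_max = t*(1 + 2/rho) reaches sigma_c."""
    return rho / (2.0 + rho) * sigma_c

def griffith_load(a, Gc, Eprime):
    """Griffith nucleation load (2.15) for a crack of half-length a."""
    return np.sqrt(Gc * Eprime / (np.pi * a))

# Illustrative numbers (not taken from the text).
Gc, Eprime, ell = 1.0, 1.0, 0.01
sigma_c = np.sqrt(3.0 * Gc * Eprime / (8.0 * ell))        # AT1 strength, Table 2.1
for a in (0.1 * ell, ell, 10 * ell, 100 * ell):
    print(f"a/ell = {a / ell:6.1f}:  t_e/sigma_c (rho=1) = "
          f"{elastic_limit_load(1.0, sigma_c) / sigma_c:.3f},  "
          f"sigma_G/sigma_c = {griffith_load(a, Gc, Eprime) / sigma_c:.3f}")
```

For defects much smaller than ℓ, the Griffith load exceeds the material strength (ratio above 1), consistent with the observation that such defects do not lower the structural strength.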
Figure 2.17-left shows the critical stress t_c at nucleation when varying the aspect ratio ρ for a/ℓ = 48, for which σ_G/σ_c = 2/15. As expected, the critical stress varies smoothly from the value σ_G (2.15) predicted by the Griffith theory for a highly elongated cavity identified with a perfect crack, to t_e (2.16) for circular cavities, where the crack nucleates as soon as the maximum stress σ_max attains the elastic limit. This series of experiments is consistent with the results of Section 2.2.2, showing that variational phase-field models are capable of simultaneously accounting for a critical elastic energy release rate and a critical stress. Furthermore, they illustrate how the internal length can be linked to a critical defect size, as the nucleation load for a vanishing defect of size less than ℓ approaches that of a flawless structure.

2.3.2 Competition between material and structural size effects

We can finally conclude the study of size effects in variational phase-field models by focusing on the competition between material and structural size effects. For that matter, we study the limit case ρ = 0 of a perfect crack of finite length 2a in a plate of finite width 2W (see Figure 2.18-left). Under the hypotheses of LEFM, the critical load upon which the crack propagates is

σ_G(a/ℓ_ch, a/W) = √( G_c E′ cos(aπ/(2W)) / (aπ) ) = σ_c √( (1/π)(ℓ_ch/a) cos(aπ/(2W)) ),   (2.18)

which reduces to (2.15) for large plates (W/a → ∞). As before, we note that σ_G/σ_c → ∞ as a/ℓ_ch → 0, so that for any given load, the material's tensile strength is exceeded for short enough initial cracks. We performed series of numerical simulations using the AT_1 model on a quarter of the domain with W = 1, L = 4, ν = 0.3, ℓ = W/25, h = ℓ/20, and the initial crack's half-length a ranging from 0.025ℓ to 12.5ℓ (i.e. 0.001W to 0.5W). The pre-existing crack was modeled as a geometric feature and undamaged crack lip boundary conditions were prescribed. The loading was applied by imposing a uniform normal stress of amplitude t on the upper and lower edges. (See [START_REF] Bažant | Scaling of Structural Strength[END_REF] for theories linking size effects to the strength of the material.) When a ≫ ℓ, i.e. when the defect is large compared to the material's length, crack initiation is governed by Griffith's criterion (2.18). As noted earlier, the choice of undamaged notch boundary conditions on the damage field leads to slightly overestimating the nucleation load. Our numerical simulations reproduce the structural size effect predicted by LEFM when the crack length is comparable to the plate width W. When a ≪ ℓ, we observe that the macroscopic structural strength is very close to the material's tensile strength. Again, below the material's internal length, defects have virtually no impact on the structural response. LEFM and Griffith-based models cannot account for this material size effect. These effects are introduced in variational phase-field models by the additional material parameter ℓ. In the intermediate regime a = O(ℓ), we observe a smooth transition between strength and toughness criteria, where the tensile strength is never exceeded. When a ≫ ℓ, our numerical simulations are consistent with predictions from Linear Elastic Fracture Mechanics, shown as a dashed line in Figure 2.18, whereas when a ≪ ℓ, the structural effect of the small crack disappears, and nucleation takes place at or near the material's tensile strength, i.e. t_c/σ_c ≃ 1.
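As a quick consistency check, the ratio σ_G/σ_c = 2/15 quoted at the beginning of this subsection for a/ℓ = 48 follows directly from (2.15) and the AT_1 identification ℓ_ch = (8/3)ℓ of Table 2.1:

```latex
\frac{\sigma_G}{\sigma_c}
 = \frac{\sqrt{G_c E'/(\pi a)}}{\sigma_c}
 = \sqrt{\frac{\ell_{\mathrm{ch}}}{\pi a}}
 = \sqrt{\frac{(8/3)\,\ell}{48\pi\,\ell}}
 = \sqrt{\frac{1}{18\pi}} \approx 0.133 \approx \tfrac{2}{15}.
```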
Conclusion

In contrast with most of the literature on phase-field models of fracture, focusing on validation and verification in the context of the propagation of "macroscopic" cracks [START_REF] Mesgarnejad | Validation simulations for the variational approach to fracture[END_REF][START_REF] Pham | Experimental validation of a phase-field model for fracture[END_REF], we have studied crack nucleation and initiation in multiple geometries. We confirmed observations reported elsewhere in the literature that although they are mathematically equivalent in the limit of ℓ → 0, damaged notch boundary conditions lead to a more accurate computation near strong stress singularities, whereas away from singularities, undamaged notch boundary conditions are to be used. Our numerical simulations also highlight the superiority of phase-field models such as AT_1, which exhibit an elastic phase in the one-dimensional tension problem, over those that do not (such as AT_2), when nucleation away from strong singularities is involved. Our numerical simulations suggest that it is not possible to accurately account for crack nucleation near "weak" singularities using the AT_2 model. We infer that a strictly positive elastic limit σ_e is a required feature of a phase-field model that properly accounts for crack nucleation. We have shown that, as suggested by the one-dimensional tension problem, the regularization parameter ℓ must be understood (up to a model-dependent multiplicative constant) as the material's characteristic or internal length ℓ_ch = G_cE′/σ_c², and linked to the material strength σ_c. With this adjustment, we show that variational phase-field models are capable of quantitative prediction of crack nucleation in a wide range of geometries including three- and four-point bending with various types of notches, single and double edge notch tests, and for a range of brittle materials, including steel and Duraluminium at low temperatures, PVC foams, PMMA, and several ceramics. We recognize that measuring a material's tensile strength is difficult and sensitive to the presence of defects, so that formulas (2.6) may not be a practical way of computing a material's internal length. Instead, we propose to perform a series of experiments such as three point bending with varying notch depth, radius or angle, as we have demonstrated in Figure 2.11 that with a properly adjusted internal length, variational phase-field models are capable of predicting the nucleation load for any notch depth or aperture. Furthermore, since variational phase-field models do not rely on any crack-tip asymptotics, this identification can be made even in a situation where generalized stress or notch intensity factors are not known or are affected by the sample's structural geometry. We have also shown that variational phase-field models properly account for size effects that cannot be recovered from Griffith-based theories. By introducing the material's internal length, they can account for the vanishing effect of small defects on the structural response of a material, or reconcile the existence of a critical material strength with the existence of stress singularities. Most importantly, they do not require introducing ad-hoc criteria based on local geometry and loading. On the contrary, we see that in most situations, criteria derived from the asymptotic analysis of a micro-geometry can be recovered a posteriori. Furthermore, variational phase-field models are capable of quantitative prediction of the crack path after nucleation.
Again, they do so without resorting to additional ad-hoc criteria, but only rely on a general energy minimization principle. In short, we have demonstrated that variational phase-field models address some of the most vexing issues associated with brittle fracture: scale effects, nucleation, existence of a critical stress, and path prediction. Of course, there are still remaining issues that need to be addressed. Whereas the models are derived from irreversibility, stability and energy balance, our numerical simulations do not enforce energy balance, as indicated by a drop of the total energy upon crack nucleation away from strong singularities. Note that to this day, devising an evolution principle combining the strength of (2.3) while ensuring energy balance is still an open problem.

Appendix B
Tables of experimental and numerical data for V-notch experiments

The tables in this appendix compare the experiments of [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF], using three point bending experiments of a PMMA sample, to full-domain numerical simulations using the AT_1 model with undamaged notch boundary conditions. The value a/h refers to the ratio of the depth of the notch over the sample thickness. See Figure 2.9 for geometry and loading. (Each table lists, for each notch angle ω and singularity exponent λ, the critical generalized stress intensity factors k_c obtained with the AT_1-U, AT_1-D, AT_2-U, and AT_2-D variants.)

Chapter 3
A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures

Hydraulic fracturing is a process to initiate and to extend fractures by injecting fluid into the subsurface. Mathematical modeling of hydraulic fracturing requires the coupled solution of models for fluid flow and reservoir-fracture deformation. The governing equations for these processes are fairly well understood and include, for example, the Reynolds equation, the cubic law, the diffusivity equation and Darcy's law for fluid flow modeling, linear poro-elasticity for reservoir-fracture deformation, and Griffith's criterion for fracture propagation. Considering that fracture propagation is a moving boundary problem, the numerical and computational challenges of solving these governing equations on the fracture domain limit the ability to comprehensively model hydraulic fracturing. These challenges include, but are not limited to, finding efficient ways of numerically representing the fracture and reservoir domains in the same computational framework while still ensuring hydraulic and mechanical coupling between both subdomains. To address these issues, several authors have assumed a known propagation path that is limited to a coordinate direction of the computational grid [START_REF] Carrier | Numerical modeling of hydraulic fracture problem in permeable medium using cohesive zone model[END_REF][START_REF] Boone | A numerical procedure for simulation of hydraulically driven fracture propagation in poroelastic media[END_REF] while some others simply treated fractures as external boundaries of the reservoir computational domain [START_REF] Ji | A novel hydraulic fracturing model fully coupled with geomechanics and reservoir simulation[END_REF][START_REF] Dean | Hydraulic-fracture predictions with a fully coupled geomechanical reservoir simulator[END_REF].
Special interface elements called zero-thickness elements have also been used to handle fluid flow in fractures embedded in continuum media [START_REF] Carrier | Numerical modeling of hydraulic fracture problem in permeable medium using cohesive zone model[END_REF][START_REF] Segura | On zero-thickness interface elements for diffusion problems[END_REF][START_REF] Segura | Coupled hm analysis using zero-thickness interface elements with double nodes. part I: Theoretical model[END_REF][START_REF] Segura | Coupled hm analysis using zero-thickness interface elements with double nodes part II: Verification and application[END_REF][START_REF] Boone | A numerical procedure for simulation of hydraulically driven fracture propagation in poroelastic media[END_REF][START_REF] Lobão | Modelling of hydrofracture flow in porous media[END_REF]. Despite the simplicity of these approaches, and contrary to field evidence of complex fracture geometries and propagation paths, they have limited ability to reproduce realistic fracture behaviors. Where attempts have been made to represent fractures and the reservoir in the same computational domain, for instance using the extended finite element method (XFEM) [START_REF] Mohammadnejad | An extended finite element method for hydraulic fracture propagation in deformable porous media with the cohesive crack model[END_REF][START_REF] Dahi | Analysis of hydraulic fracture propagation in fractured reservoirs: an improved model for the interaction between induced and natural fractures[END_REF] and the generalized finite element method (GFEM) [START_REF] Gupta | Simulation of non-planar three-dimensional hydraulic fracture propagation[END_REF], the computational cost is high and the numerics are cumbersome, characterized by continuous remeshing to provide grids that explicitly match the evolving fracture surface. Some of these challenges can be overcome using a phase-field representation of fractures, as evident in the work of [START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF] and [START_REF] Bourdin | A variational approach to the modeling and numerical simulation of hydraulic fracturing under in-situ stresses[END_REF]. This chapter extends the work of [START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF] by applying the variational phase-field model to a network of fractures. The hydraulic fracture model is developed by incorporating the fracturing fluid pressure in Francfort and Marigo's variational approach to fracture [START_REF] Bourdin | The Variational Approach to Fracture[END_REF]. Specifically, the model recasts Griffith's propagation criterion as a total energy minimization problem, where the global energy is the sum of the elastic and fracture surface energies, the work of the fracturing fluid pressure, and the work done by in-situ stresses. We assume quasi-static fracture propagation; in this setting, the fractured state of the reservoir is the solution of a series of minimizations of this total energy with respect to all kinematically admissible crack sets and displacement fields. The numerical implementation of the model is based on a phase-field representation of the fracture and a subsequent regularization of the total energy functional. The phase-field technique avoids the need for explicit knowledge of the fracture location and permits the use of a single computational domain for the fracture and reservoir representation.
The strength of this method is to provide a unified setting for handling path determination, nucleation and growth of arbitrary number of stable cracks in any dimensions based on the energy minimization principle. This work focuses on the fracture propagation stability through various examples such as, a pressurized single fracture stimulated by a controlled injected volume in a large domain, a network of multiple parallel fractures and a pressure driven laboratory experiment to measure rocks toughness. The Chapter is organized as follows: Section 3.1 is devoted to recall phase field models for hydraulic fracturing in the toughness dominated regime with no fluid loss to the impermeable elastic reservoir [START_REF] Detournay | The near tip region of a fluid driven fracture propagating in a permeable elastic solid[END_REF]. Then, our numerical implementation scheme and algorithm for volume driven hydraulic fracturing simulations is exposed in section 3.1.3. Tough the toughness dominated regime may not cover the whole spectrum of fracture propagation but provides an appropriate framework for verifications since it does not require the solution of a flow model. Therein, section 3.2 is concerned with comparisons between our numerical results and the closed form solutions provided by Sneddon [START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF][START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF] for the fluid pressure, fracture length/radius and fracture volume in a single crack case. Section 3.3 focuses on the propagation of infinite pressurized parallel fractures and it is compared with the derived solution. Section 3.4 is devoted to study the pre-fracture stability in the burst experiment at a controlled pressure. This test proposed by Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF] is designed to measure the fracture toughness of the rock and replicates situations encountered downhole with a borehole and bi-wing fracture. A phase fields model for hydraulic fracturing A variational model of fracture in a poroelastic medium Consider a reservoir consisting of a perfectly brittle isotropic homogeneous linear poroelastic material with A the Hooke's law tensor and G c the critical energy release rate Chapter 3. A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures occupying a domain Ω ⊂ R n , n = 2 or 3 in its reference configuration. The domain is partially cut by a sufficiently regular crack set Γ ⊂ Ω with Γ ∩ ∂Ω = ∅. A uniform pressure denoted by p applies on both faces of the fracture lips i.e. Γ = Γ + ∪ Γ -and pore pressure denoted by p p applies in the porous material which follows the Biot poroelastic coefficient λ. The sound region Ω \ Γ is subject to a time independent boundary displacement ū(t) = 0 on the Dirichlet part of its boundary ∂ D Ω and time stress dependent g(t) = σ • ν on the remainder ∂ N Ω = ∂Ω \ ∂ D Ω, where ν denotes the appropriate normal vector. For the sake of simplicity body forces are neglected such that at the equilibrium, the stress satisfies, div σ = 0 where the Cauchy stress tensor follows Biot's theory [START_REF] Biot | General theory of three-dimensional consolidation[END_REF], i.e. σ = σλp p I, σ being the effective stress tensor. 
The infinitesimal total deformation e(u) is the symmetrical part of the spatial gradient of the displacement field u, e(u) = ∇u + ∇ T u 2 . The stress-strain relation is σ = Ae(u), so that, σ = A e(u) - λ 3κ p p I , where 3κ is the material's bulk modulus. Those equations can be rewritten in a variational form, by multiplying the equilibrium by the virtual displacement v ∈ H 1 0 (Ω \ Γ; R n ) and using Green's formula over Ω \ Γ. After calculation, we get that, Ω\Γ σ : e(v) dx - ∂ N Ω g(t) • v dH n-1 - Γ p v • ν dH n-1 = 0 (3.1) where H n-1 denotes the n -1-dimensional Hausdorff measure, i.e. its aggregate length in 2 dimensions and surface area in 3 dimensions. Finally, we remark that the above equation (3.1) can be seen as the Euler-Lagrange equation for the minimization of the elastic energy, E(u, Γ) = Ω\Γ 1 2 A e(u) - λ 3κ p p I : e(u) - λ 3κ p p I dx - ∂ N Ω g(t) • u dH n-1 - Γ p u • ν dH n-1 (3.2) amongst all displacement fields u ∈ H 1 (Ω \ Γ; R n ) such that u = 0 on ∂ D Ω. A phase fields model for hydraulic fracturing Remark 2 Of course, fluid equilibrium mandates continuity of pressure so that p p = p along Γ. Our choice to introduce two pressure fields is motivated by our focus on lowpermeability reservoirs. In this situation, assuming very small leak-off, it is reasonable to assume that for short injection time, the pore pressure is "almost" constant away from the crack, hence that p = p p . We follow the formalism of [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF] and propose a time-discrete variational model of crack propagation. To any crack set Γ ⊂ Ω and any kinematically admissible displacement field u, we associate the fracture energy, E(u, Γ) = Ω\Γ 1 2 A e(u) - λ 3κ p p I : e(u) - λ 3κ p p I dx - ∂ N Ω g(t) • u dH n-1 - Γ p u • ν dH n-1 + G c H n-1 (Γ) (3.3) Considering then a time interval [0, T ] and a discrete set of time steps 0 = t 0 < t 1 < • • • < t N = T , and denoting by p i , p p i and g i , the crack pressure, pore pressure and external stress at time t i (i > 0), we postulate that the displacement and crack set (u i , Γ i ) are minimizers of E amongst all kinematically admissible displacement fields u and all crack sets Γ satisfying a growth condition Γ j ⊂ Γ for all j < i, with Γ 0 possibly representing pre-existing cracks. It is worth emphasizing that in this model, no assumptions are made on the crack geometry Γ i . As in Francfort and Marigo's pioneering work [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF], minimization of the total fracture energy is all that is needed to fully identify the crack geometry (path) and topology (nucleation, merging, branching). Variational phase-field approximation Several techniques have been proposed for the numerical implementation of the fracture energy E, the main difficulty being to handle discontinuous displacements along unknown surfaces. In recent years, variational phase-field models, originally devised in [START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF][START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF], and extended to brittle fracture [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] have become very popular. 
We follow this approach by introducing a regularization length , an auxiliary field α with values in [0, 1] representing the unknown crack surface, and the regularized energy. E (u, α) = Ω 1 2 A (1 -α)e(u) - λ 3κ p p I : (1 -α)e(u) - λ 3κ p p I dx - ∂ N Ω g(t) • u dH n-1 + Ω pu • ∇α dx + 3G c 8 Ω α + |∇α| 2 dx (3.4) where α = 0 is the undamaged state material and α = 1 refers to the broken part. One can recognize the AT 1 model introduced in the Chapter 1 which differs from one used in [START_REF] Chukwudozie | Application of the Variational Fracture Model to Hydraulic Fracturing in Poroelastic Media[END_REF]. Chapter 3. A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures At each time step, the constrained minimization of the fracture energy E is then replaced with that of E , with respect to all (u i , α i ) such that u i is kinematically admissible and 0 ≤ α i-1 ≤ α i ≤ 1. The Γ-convergence of (3.4) to (3.3), which constitutes the main justification of variational phase-field models is a straightforward extension of [START_REF] Chambolle | An approximation result for special functions with bounded variations[END_REF][START_REF] Chambolle | Addendum to "An Approximation Result for Special Functions with Bounded Deformation[END_REF], or [START_REF] Iurlano | A density result for gsbd and its application to the approximation of brittle fracture energies[END_REF]. It is quite technical and not quoted here. The form of the regularization of the surface energy in (3.4) is slightly different from the one originally proposed in [START_REF] Bourdin | A variational approach to the modeling and numerical simulation of hydraulic fracturing under in-situ stresses[END_REF][START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF] but this choice is motivated by the work of [START_REF] Tanné | Crack nucleation in variational phase-field models of brittle fracture[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF]. In the context of poro-elasticity, the regularization of the elastic energy of the form of, Ω 1 2 A (1 -α)e(u) - λ 3κ p p I : (1 -α)e(u) - λ 3κ p p I dx is different from that of [START_REF] Mikelic | A quasistatic phase field approach to fluid filled fractures[END_REF] and follow-up work, or [START_REF] Miehe | Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture[END_REF][START_REF] Wilson | A phase-field model for fracture in piezoelectric ceramics[END_REF] which use a regularization of the form Ω 1 2 (1 -α) 2 A e(u) - λ 3κ p p I : e(u) - λ 3κ p p I dx. This choice is consistent with the point of view that damage takes place at the sub-pore scale, so that the damage variable α should impact the Cauchy stress and not the effective poro-elastic stress. Note that as → 0, both expressions will satisfy Γ-convergence to E. A fundamental requirement of hydraulic fracturing modeling is volume conservation, that is the sum of the fracture volume and fluid lost to the surrounding reservoir must equal the amount of fluid injected denoted V . In the K-regime, the injected fluid is inviscid and no mass is transported since the reservoir is impermeable. 
Of course, reservoir impermeability means no fluid loss from fracture to reservoir and this lack of hydraulic communication means that the reservoir pressure p p and fracture fluid pressure p are two distinct and discontinuous quantities. Furthermore, the zero viscosity of the injected fluid is incompatible with any fluid flow model, leaving global volume balance as the requirement for computing the unknown fracturing fluid pressure p. In the sequel we set aside the reservoir pressure p p and consider this as a hydrostatic stress offset in the domain, which can be recast by applying a constant pressure on the entire boundary of the domain. Numerical implementation The numerical implementation of the variational phase-field model is well established. In the numerical simulations presented below, we discretized the regularized fracture energy using linear or bilinear finite elements. We follow the classical alternate minimizations approach of [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] and adapt to volume-driven fractures where main steps are: i. For a given (α, p) the minimization of E with respect to u is an elastic problem with the prescribed boundary condition. To solve this, we employed preconditioned conjugate gradient methods solvers. 3.2. Numerical verification case of a pressurized single fracture in a two and three dimensions ii. The minimization of E with respect to α for fixed (u, p) and subject to irreversibility (α ≥ α i-1 ) is solved using variational inequality solvers provided by PETCs [START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | PETSc Web page[END_REF]. iii. For a fixed (u, α), the total volume of fluid can be computed, such that, V = - Ω u • ∇α dx. The idea is to rescale the fluid pressure using the secant method (a root-finding algorithm) based on a recurrence relation. A possible algorithm to solve volume-driven hydraulic fracturing is to use nested loops. The inner loop solves the elastic problem i. and rescale the pressure iii. until the error between the target and the computed volume is below a fixed tolerance. The outer loop is composed of ii. and the previous procedure and the exit is triggered once the damage has converged. This leads to the following Algorithm 2 where δ V and δ α are fixed tolerances. Remark that the inner loop solves a linear problem, hence, finding the pressure p associated to the target volume V should converge in strictly less than four iterations. All computations were performed using the open source mef90 1 . In-situ stresses play a huge role in hydraulic fracture propagation and the ability to incorporate them in a numerical model is an important requirement for robust hydraulic fracturing modeling. Our numerical model easily accounts for these compressive stresses on boundaries of the reservoir. However in-situ stresses simulated cannot exceeded the maximum admissible stress of the material given by σ c = 3EG c /8 . We run a series of two-and three-dimensions computations to verify our numerical model and investigate stability of fractures. Numerical verification case of a pressurized single fracture in a two and three dimensions Using the Algorithm 2 a pressurized line and penny shape fractures have been respectively simulated in two-and three-dimensions, and their results compared with the closed form solutions. Both problems have a symmetric axis, i.e. 
its aggregate a reflexion axis in 2d and a rotation in 3d, leading to a invariant geometry drawn on Figure 3.1. Also, all geometric and material parameters are identically set up for both problems and summarized in the Table 3.1. The closed form solutions provided by Sneddon in [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF][START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF] are recalled in the Appendix C and assume an infinite domain with vanishing stress and displacement at the boundary. To satisfy those boundary conditions we performed simulations on a huge domain clamped at the boundary, where the reservoir size is 100 times larger than the pre-fracture length as reported in the Table 3.1. To moderate the number of elements in the domain, a casing (W, H) with a constant refined mesh size of resolution h is encapsulated around the fracture. Outside the casing a coarsen mesh is spread out see Figure 3.1. Chapter 3. A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures Algorithm 2 Volume driven hydraulic fracturing algorithm at the step i 1: Let j = 0 and α 0 := α i-1 2: repeat 3: Set, p k-1 i = p k i and V k-1 i = V k i 4: p k+1 i := p k i -V k i (p k i -p k-1 i )/(V k i -V k-1 i ) 6: Compute the equilibrium, u k+1 := argmin u∈C i E (u, α j ) 7: Compute volume of fractures, V k+1 i := - Ω u k+1 • ∇α j dx 8: k := k + 1 9: until V k i -V i L ∞ ≤ δ V 10: Compute the damage, α j+1 := argmin α∈D i α≥α i-1 E (u j+1 , α) 11: j := j + 1 12: until α j -α j-1 L ∞ ≤ δ α 13: Set, u i := u j and α i := α j refined mesh, size h coarsen mesh symmetry axis 3.1: Parameters used for the simulation of a single fracture in two and three dimensions. A loading cycle is preformed by pressurizing the fracture until propagation, then, pumping all the fluid out of the crack. The pre-fracture of length l 0 is measured by a isovalues contour plot for α = .8 before refilling the fracture of fluid again. The reason of this is we do not have an optimal damage profile at the fracture tips, leading to underestimate the critical pressure p c . Similar issues have been observed during the nucleation process in [START_REF] Tanné | Crack nucleation in variational phase-field models of brittle fracture[END_REF] where G c is overshoot due to the solution stability. Snap-shots of the damage before and after the loading cycle in the Figure 3.3 illustrate differences between damage profiles at the crack tips. Since the critical crack pressure is a decreasing function with respect to the crack length, the maximum value is obtained at the loading point when the crack initiates (for the pre-fracture). One can see on the Figure 3.4 that the penny shape fracture growth is not necessary symmetrical with respect to the geometry but remains a disk shape which is consistent with the invariant closed form solution. We know from prior work see [START_REF] Bourdin | The variational approach to fracture[END_REF] that the "effective" numerical toughness is quantified by (G c ) eff = G c (1 + 3h/(8 ) ) in two dimensions. However, for the penny shape crack (G c ) eff = G c (1 + 3h/(8 ) + 2h/l ), where 2h is the thickness of the crack and l the radius. The additional term of 2h/l comes from the lateral surface contribution which becomes negligible for thin fractures. 
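Before turning to the comparison with the closed-form solutions, the pressure-rescaling inner loop of Algorithm 2 (steps i. and iii.) can be illustrated by a short, self-contained sketch. The function compute_volume below is only a placeholder for the finite-element elastic solve at fixed damage followed by the quadrature of V = -∫ Ω u • ∇α dx; since that map is linear in p for fixed α, the secant recurrence of line 4 converges in very few iterations. This is a schematic illustration under those assumptions, not the mef90 implementation.

def secant_pressure_update(compute_volume, V_target, p0, p1, tol=1e-8, max_iter=20):
    """Find the fluid pressure p such that compute_volume(p) matches V_target.
    compute_volume(p) stands for an elastic solve at fixed damage followed by
    the computation of V = -int_Omega u . grad(alpha) dx (placeholder here)."""
    V0, V1 = compute_volume(p0), compute_volume(p1)
    for _ in range(max_iter):
        if abs(V1 - V_target) <= tol:
            break
        # secant step on the residual V(p) - V_target (line 4 of Algorithm 2)
        p0, p1, V0 = p1, p1 - (V1 - V_target) * (p1 - p0) / (V1 - V0), V1
        V1 = compute_volume(p1)
    return p1

# Toy stand-in: at fixed damage the pressure-volume map is linear (compliance C),
# so the target pressure is recovered essentially in one secant update.
C = 2.5
p = secant_pressure_update(lambda q: C * q, V_target=1.0, p0=0.0, p1=1.0)
print(p)  # approximately V_target / C = 0.4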
The closed-form solutions for the fluid pressure p and the fracture length l as functions of the total injected fluid volume V are provided by [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF] and recalled in Appendix C. Figure 3.2 shows a perfect match between the numerical results and the closed-form solution for the line fracture and the penny-shaped crack. In both cases, as long as V ≤ V c the crack does not grow, and once V > V c the pressure drops as p ∼ V -1/3 (line fracture) and p ∼ V -1/5 (penny-shaped crack). Notice that the pressure decreases as the crack grows, so that a pressure-driven crack is necessarily unstable: there is no admissible equilibrium pressure above the maximum value p c . Remark 3 The Griffith regime requires σ c = √(3EG c /(8ℓ)) ≥ √(πEG c /(4l)) = p c in two dimensions, leading to l ≥ 2πℓ/3. Therefore, the pre-fracture must be longer than roughly twice the material internal length in order to avoid the size-effect phenomena reported in Chapter 2. These simulations show that the variational phase-field model of hydraulic fracturing recovers Griffith initiation and propagation for a single pressurized crack. Although this can be seen as a toy example, since the fracture propagation is rectilinear, multi-fracking can be simulated without any change to the implementation, as illustrated in Figure 3.5. Fracture paths are obtained by total energy minimization and satisfy Griffith's propagation criterion. Multi fractures in two dimensions One of the most important features of our phase-field hydraulic fracturing model is its ability to handle multiple fractures without more computational or modeling effort than is required for simulating a single fracture. This capability is highlighted in the following study of the stimulation of a network of parallel fractures. All cracks are subject to the same pressure and we control the total amount of fluid injected into the cracks, i.e. fluid can migrate from one crack to another through the wellbore. The case where all fractures of a parallel network propagate (the multi-fracking scenario) is often postulated. However, the variational structure of Griffith's theory leads to a different conclusion. For the sake of simplicity, consider only two parallel fractures. A virtual extension of one of the cracks (a variational argument) induces a drop of pressure in both fractures. Consequently, the shorter fracture is sub-critical and remains unchanged, since its pressure satisfies p < p c . Moreover, the longer fracture requires less pressure to propagate than the shorter one, because the critical pressure decreases with the crack length. Hence the longer crack continues to propagate. This argument extends to any number of parallel fractures of the same size. In the sequel, we revisit the multi-fracking hypothesis by performing numerical simulations with Algorithm 2. Consider an infinite network of parallel cracks subject to the same pressure p, each of individual length l, with spacing δ between cracks, as drawn in Figure 3.6 (left). In the initial state all pre-cracks have the same length, denoted l 0 , and no in-situ stress is applied on the reservoir domain. Multi-fracking closed form solution This network of parallel cracks is a periodic repetition of an invariant cell, precisely a strip domain Ω = (-∞, +∞) × [-δ, δ] cut in the middle by a fracture Γ = [-l, l] × {0}.
An asymptotic solution of this cell domain problem is provided by Sneddon in [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF] V (ρ) = 8pδ 2 E π ρ 2 f (ρ), (3.5) where the density of fractures ρ = lπ/(2δ) and f (ρ) = 1 -ρ 2 /2 + ρ 4 /3 + o(ρ 6 ). The Taylor series of f (ρ) in 0 provided by Sneddon differs from one given in the reference [START_REF] Murakami | Handbook of stress intensity factors[END_REF] where f (ρ) = 1ρ 2 /2 + 3ρ 4 /8 + o(ρ 6 ). The latter is exactly the first three terms of the expansion of f (ρ) = 1 1 + ρ 2 . (3.6) The critical pressure satisfying Griffith propagation for this network of fractures problem is p(ρ) = E G c δ(ρ 2 f (ρ)) (3.7) Of course the closed form expression consider that all cracks grow by symmetry. It is convenient for numerical reason to consider an half domain and an half fracture (a crack lip) of the reference geometry such that we have (Ω 1 , Γ 1 ) and by symmetry expansion (Ω 2 , Γ 2 ), (Ω 4 , Γ 4 ),..,(Ω 2n , Γ 2n ) illustrated in the Figure 3.6 (right). Numerical simulation of multi-fracking by computation of unit cells construction The idea is to reproduce numerically multi-fracking scenario, thus simulation is performed on stripes of length 2L with pre-fractures of length 2l 0 such that, geometries considered are: Ω 2n = [-L, L] × [0, (2n -2)δ] Γ 0, 2n = [-l 0 , l 0 ] × n k=1 {2(k -1)δ} (3.8) for n ≥ 1, n being the number of crack lips. Naturally a crack is composed of two lips. The prescribed boundary displacement on the top-bottom extremities is u y (0) = u y (2(n -1)δ) = 0, and on the left-right is u(±L) = 0. All numerical parameters used are set up in the Table 3.2. h L δ l 0 E ν G c 0.005 10 1 0.115 1 0 1 3h Table 3.2: Parameters used in the numerical simulation for infinite cracks Using the same technique of loading cycle as in section 3.2 and after pumping enough fluid into the system of cracks we observed in all simulations performed that only one fracture grows, precisely the one at the boundary as illustrated in the Figure 3.7. By Multi fractures in two dimensions using reflexion symmetry we have a periodicity of large fractures of 1/n. We notice that simulations performed never stimulate middle fracture. Indeed, by doing so after reflexions this will lead to an higher periodicity cases. The pseudo-color blue is for undamaged material and turns white when (α ≤ .01) (visibility reason). The full colors correspond to numerical simulations cells domain see table 3.2, and opacity color refers to the rebuild solution using symmetries. In all simulations only one crack propagates in the domain. Using the multiplicity pictures from left to right we obtain a fracture propagation periodicity denoted period. of 6/6, 3/6, 1.5/6 and 1/6. To compare the total injected fluid V between simulations, we introduce the fluid volume density i.e the fluid volume for a unit geometry cell given by 2V /n. The evolution of normalized pressure, volume of fluid per cell and length are plotted in Figure 3.8 and show that the multi-fracking situation (one periodic) match perfectly with the close form solution provided by the equations (3.7),(3.5) and (3.6). Also, one can see that Sneddon approximation is not accurate for dense fractures. We can observe from simulations in Figure 3.8 that a lower periodicity (1/n) of growing cracks implies a reduction in pressure evolution. Also notice that the rate of pressure drop increases when the number of long cracks decrease, so that rapid pressure drop may indicate a poor stimulation. 
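For reference, the closed-form relations (3.5)-(3.7) used to draw the comparison curves of Figure 3.8 can be evaluated directly. The sketch below uses the exact kernel f(ρ) = 1/√(1+ρ²) of (3.6) rather than its truncated Taylor expansion, writes out the square roots and the derivative (ρ²f(ρ))' explicitly, and treats E as the effective plane-strain modulus, consistent with ν = 0 in Table 3.2; it only illustrates the formulas, not the finite-element computation.

import numpy as np

def f_exact(rho):
    # Exact kernel of eq. (3.6); Sneddon's 1 - rho^2/2 + 3 rho^4/8 + ... is its Taylor series at 0.
    return 1.0 / np.sqrt(1.0 + rho**2)

def d_rho2_f(rho):
    # Analytical derivative d/d(rho) of rho^2 * f(rho).
    return rho * (rho**2 + 2.0) / (1.0 + rho**2) ** 1.5

def volume_per_cell(p, delta, rho, E):
    # Eq. (3.5): fluid volume of one crack of the periodic array at pressure p.
    return 8.0 * p * delta**2 / (np.pi * E) * rho**2 * f_exact(rho)

def p_crit_network(rho, delta, E, Gc):
    # Eq. (3.7): Griffith pressure when all cracks of the array propagate together.
    return np.sqrt(E * Gc / (delta * d_rho2_f(rho)))

def p_crit_single(l, E, Gc):
    # Single line crack of half-length l in an infinite domain (limit rho -> 0).
    return np.sqrt(E * Gc / (np.pi * l))

E, Gc, delta = 1.0, 1.0, 1.0
for rho in (0.01, 0.5, 1.0, 2.0):
    l = 2.0 * delta * rho / np.pi
    ratio = p_crit_network(rho, delta, E, Gc) / p_crit_single(l, E, Gc)
    print(f"rho = {rho:4.2f}   p_network / p_single = {ratio:5.3f}")
# The ratio tends to 1 for sparse arrays and reaches about 1.37 at rho = 1,
# consistent with the trend of Figure 3.10.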
Also this loss of multi fracking stimulation decreases the fracture surface are for resource recovery. All cracks propagating simultaneously case is not stable in the sense that there exits a lower energy state with fewer growing crack. However as we will be discussed in the section 3.3.3 multi fracking may work for low fracture density since their interactions are negligible. Chapter 3. A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures Multi-fracking for dense fractures In the following we investigate critical pressure with respect to the density of fracture for different periodicity. Colored plot are numerical results for different domain sizes Ω 1 , Ω 2 , Ω 4 and Ω 6 . The solid black line is the closed form solution and gray the approximated solution given by Sneddon [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF]. Let us focus on fractures propagation when their interactions become stronger i.e. higher fracture density ρ = lπ/(2δ). We start by normalizing the pressure relation for multi-fracking equation (3.7) with p = E G c /(lπ) which is a single fracture problem studied in section 3.2. r p (ρ) = 2ρ (ρ 2 f (ρ)) = 2(ρ 2 +1) 3/2 ρ 2 +2 . (3.9) Remark that r p (0) = 1 means that critical pressure for largely spaced fractures are identical to a line fracture in a infinite domain problem, thus cracks behave without interacting each other. We run a set of numerical simulations using the same set of parameters than previously recalled in the Table 3.2 except that δ varies. For a high fractures density we 3.4. Fracture stability in the burst experiment with a confining pressure discovered another loss of symmetry shown on Figure 3.9 such that the fracture grows only in one direction. Figure 3.9: Domains in the deformed configuration for respectively Ω 2 and Ω 4 with 2δ = .5. The pseudo-color blue is for undamaged material and turns white when (α ≤ .01) (visibility reason). The full colors correspond to numerical simulation domain and opacity color refers to the rebuild solution using symmetries. In all simulations only one crack tip propagates in one direction in the simulated domain. We report pressures obtained numerically depending on the fracture density in the Figure 3.10 and comparison with (3.9). One can see that the closed form solution is in good agreement with numerical simulation for the periodicity one and also lower periodicity obtained by doing ρ ← ρ/n in the equation (3.9). We see that for low fractures density the critical pressure is equal to a line fracture in a infinite domain. For higher fractures density, interactions become stronger and propagating all fractures require a high pressure compare to grow only one of them. As an example, a network of pre-fractures of length l = 6.36m and spaced δ = 10m thus ρ = 1, in this situation the required pressure is equal to r(1)K Ic / √ lπ with r(1) = 1.4 to propagate all cracks together compare to r(1) = 1 for only one single fracture. Naturally the system bifurcate to less fractures propagation leading to a drop of the fluid pressure. Fracture stability in the burst experiment with a confining pressure This section focuses on the stability of fractures propagation in the burst experiment. This laboratory experiment was conceived to measure the resistance to fracturing K Ic (also called the fracture toughness) of rock under confining pressure which is a critical parameter to match the breakdown pressure in mini-frac simulation. 
The idea is to provide a value of K Ic for hydraulic fracturing simulations in the K-regime [START_REF] Detournay | Propagation regimes of fluid-driven fractures in impermeable rocks[END_REF]. However, past experimental studies suggest that the fracture toughness of rock is dependent on the confining pressure under which the rock is imposed. Various methodologies exist for the Black dash line is r p (ρ/n) with 1/n the periodicity. Colored line is numerical results for respectively a periodicity 6/6, 3/6 and 1.5/6. measurement of K Ic under confining pressure and results differ in each study. The most accepted methodology in petroleum industry is the so called burst experiment, which was proposed by Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF], as the experimental geometry replicates a situation encountered downhole with a borehole and bi-wing fracture. Under linear elastic fracture mechanics, stable and unstable crack growth regime have been calculated depending on the confining pressure and geometry. During unstable crack propagation the phase-field models for hydraulic fracturing do not bring information. Instead we perform a Stress Intensity Factor (SIF) analysis along the fracture path to determine propagation stability regimes, herein this section is different from the phase-field sprite of the dissertation. However at the end we will verify the ability of the phase-field model to capture fracture stability transition from stable to unstable. The burst experiment The effect of confining pressure on the fracture toughness was first studied by Schmidt and Huddle [START_REF] Schmidt | Effect of Confining Pressure on Fracture Toughness of Indiana Limestone[END_REF] on Indiana limestone using single-edge-notch samples in a pressure vessel. In their experiments, increase in the fracture toughness up to four fold have been reported. Other investigations to quantify the confining pressure dependency were performed on the three point bending [START_REF] Müller | Brittle crack growth in rocks[END_REF][START_REF] Vásárhelyi | Influence of pressure on the crack propagation under mode i loading in anisotropic gneiss[END_REF], modified ring test [START_REF] Thiercelin | Fracture Toughness and Hydraulic Fracturing[END_REF], chevron notched Brazillian disk [START_REF] Roegiers | Rock fracture tests in simulated downhole conditions[END_REF], cylinder with a partially penetrating borehole [START_REF] Holder | Measurements of effective fracture toughness values for hydraulic fracture: Dependence on pressure and fluid rheology[END_REF][START_REF] Sitharam Thallak | The pressure dependence of apparent hydrofracture toughness[END_REF], and thick wall cylinder 3.4. Fracture stability in the burst experiment with a confining pressure with notches [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF][START_REF] Chen | Laboratory measurement and interpretation of the fracture toughness of formation rocks at great depth[END_REF] and without notches [START_REF] Stoeckhert | Mode i fracture toughness of rock under confining pressure[END_REF]. Published results on Indiana limestone are shown in Figure 3.11 and the data suggest the fracture toughness dependency on the confining pressure with a linear relationship. 
Provided increasing reports on confining pressure dependent fracture toughness, theoretical works to describe the mechanisms focus mainly on process zones ahead of the fracture as a culprit of the "apparent" fracture toughness including Dugdale type process zone [START_REF] Zhao | Determination of in situ fracture toughness[END_REF][START_REF] Sato | Cohesive crack analysis of toughness increase due to confining pressure[END_REF], Barenblatt cohesive zone model [START_REF] Allan M Rubin | Tensile fracture of rock at high confining pressure: implications for dike propagation[END_REF], and Dugdale-Barenblatt tension softening model [START_REF] Hashida | Numerical simulation with experimental verification of the fracture behavior in granite under confining pressures based on the tension-softening model[END_REF][START_REF] Fialko | Numerical simulation of high-pressure rock tensile fracture experiments: Evidence of an increase in fracture energy with pressure?[END_REF]. The burst experiment developed by Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF] is one of the most important methods to determine the critical stress intensity factor of rocks subject to confining pressure in the petroleum industry as the geometry closely represents actual downhole conditions of hydraulic fracturing stimulation (Figure 3.12). A hydraulic internal pressure is applied on a jacketed borehole of the thick-walled cylinder with pre-cut notches. Also, a confining pressure is applied on the outer cylinder. The inner and the outer pressures increase keeping a constant ratio of the outer to the inner pressure until the complete failure of the sample occurs and the inner and outer pressures will equilibrate to the ambient pressure abruptly. This test has great advantages in sample preparation, no fluid leak off to the rock, and easeness of measurement with various confining pressures. In this section, we firstly revisit the derivation of the stress intensity factor and analyze stabilities of fracture growth from actual burst experiment results. Subsequent analytical results indicate that fracture growth is not necessarily unstable and can have a stable phase in our experiments. In fact, stable fracture propagation has been observed also in past studies with PMMA samples [START_REF] Clifton | Determination of the critical-stress-intensity factor KIc from internally pressurized thick-walled vessels -The critical-stress-intensity factor of thick-walled rock specimens is determined from the pressure at failure[END_REF] and sandstone and shale rocks without confining pressure [START_REF] Zhixi | Determination of rock fracture toughness and its relationship with acoustic velocity[END_REF]. Evaluation and computation of the stress intensity factor for the burst experiment Under Griffith's theory and for a given geometry (a, b, L) see Figure 3.12, the fracture stability is governed by, K I (P i , L, b, a, P o ) ≤ K Ic where K Ic is a material property named the critical fracture toughness. The stress intensity factor (SIF) denoted K I is such that, K I < 0 when crack lips interpenetrate and K I ≥ 0 otherwise. Let us define dimensionless parameters as, w = b a , l = L b -a , r = P o P i (3.10) Hence, the dimensionless crack stability becomes K * I (1, l, w, r) ≤ K Ic (P i √ aπ) (3.11) where K * I (1, l, w, r) = K I (1, l, w, r)/ √ aπ. Necessarily, the inner pressure must be positive P i > 0 to propagate the crack. 
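The normalizations (3.10)-(3.11) are used throughout the remainder of this section to convert a recorded burst pressure into a toughness, and vice versa. A small helper sketch is given below; the dimensionless factor K_star must be supplied from tabulated curves (e.g. Clifton's) or from a finite-element computation, and is not evaluated here.

import math

def dimensionless_parameters(a, b, L, P_i, P_o):
    # Eq. (3.10): wall-thickness ratio w, normalized notch length l, confinement ratio r.
    return b / a, L / (b - a), P_o / P_i

def toughness_from_burst(P_i, a, K_star):
    # Eq. (3.11) written at propagation: K_Ic = P_i * sqrt(pi * a) * K*_I(1, l, w, r).
    return P_i * math.sqrt(math.pi * a) * K_star

def propagation_possible(P_i, a, K_star, K_Ic):
    # Griffith criterion: the notch can only extend if P_i > 0 and K_I reaches K_Ic.
    return P_i > 0.0 and toughness_from_burst(P_i, a, K_star) >= K_Ic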
For a given thick wall ratio w and pressure confinement r, we are able to evaluate the fracture toughness of the material by computing K * I if the experiment provides a value of the inner pressure P i and the crack length L at the time when the fracture propagates. The difficulty is to measure the fracture length in-situ during the experiment whose technique is yet to be established. However the burst experiment should be designed for unstable crack propagation. The idea is to maintain the crack opening by keeping the tensile load at the crack tips all along the path, so that the sample bursts (unstable crack propagation) after initiation. Therefore the fracture toughness is computed for the pre-notch length and the critical pressure measured. 3.4. Fracture stability in the burst experiment with a confining pressure Let us study the evolution of K * I (1, l, w, r) with the crack length l for the parameter analysis (w, r) to capture stability crack propagation regimes. Using Linear Elastic Fracture Mechanics (LEFM) the burst problem denoted (B) is decomposed into the following elementary problems: a situation where pressure is applied only on the inner cylinder called the jacketed problem (J) and a problem with only a confining pressure applied on the outer cylinder problem named (C). This decomposition is illustrated in Figure 3.13. Therefore, the SIF for (B) can then be superposed as K B * I (1, l, w, r) = K J * I (1, l, w) -rK C * I (1, l, w) (3.12) where K C * I (1, l, w) is positive for negative applied external pressure P o . In Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF] the burst problem is decomposed following the Figure 3.14 such that, the decomposition is approximated by the jacketed problem (J) and the unjacketed problem (U) in which the fluid pressurized all internal sides. We get the following SIF, K B * I (1, l, w, r) ≈ K J * I (1, l, w) -rK U * I (1, l, w) (3.13) where K U * I (1, l, w) ≥ 0 for a positive P o applied in the interior of the geometry. Note that in our decomposition, no pore pressure (P p ) is considered in the sample, i.e. a drain evacuates the embedded pressure in the rock. L 2a 2b L L 2a 2b L L 2a 2b L L 2a 2b L = + L 2a 2b L + = = L 2a 2b L L 2a 2b L L 2a 2b L - (B) (J) (C) (U) (B) (J) (B) (M) + = = L 2a 2b L L 2a 2b L L 2a 2b L - (B) (J) (C) (U) (B) (J) (B) (M) Normalized stress intensity factor for the jacketed and unjacketed problems have been derived in Clifton [START_REF] Clifton | Determination of the critical-stress-intensity factor KIc from internally pressurized thick-walled vessels -The critical-stress-intensity factor of thick-walled rock specimens is determined from the pressure at failure[END_REF]. The Figure 3.15 shows a good agreement between our results (computational SIF based on the G θ methods) and one provided by Clifton [START_REF] Clifton | Determination of the critical-stress-intensity factor KIc from internally pressurized thick-walled vessels -The critical-stress-intensity factor of thick-walled rock specimens is determined from the pressure at failure[END_REF]. The G θ technique [START_REF] Destuynder | Sur une interprétation mathématique de l'intégrale de Rice en théorie de la rupture fragile[END_REF][START_REF] Suo | On the application of g (θ) method and its comparison with de lorenzi's approach[END_REF] is an estimation of the second derivatives of the potential energy with respect to the crack length, i.e. 
to make a virtual perturbation of the domain (vector θ) in the crack propagation direction. Then, the SIF is calculated using Irwin formula K I = EG/(1ν 2 ) based on the computed G. Influence of the confinement and wall thickness ratio on stability of the initial crack Based on the above result we compare K C * I with K U * I (Figures 3.13 and 3.14), and we found out their relative error is less than 15% for l ∈ [.2, .8] and w ∈ {3, 7, 10}. So, in a first approximation both problems are similar. For the burst experiment, the fracture propagation occurs when (3.11) becomes an equality, thus we have P i = K Ic /(K B * I √ aπ). A decreasing K B * I induces a growing P i , a contrario a growing K B * I implies to decrease the inner pressure which contradicts the burst experiment set up (monotonic increasing pressure). Consequently the fracture growth is unstable (brutal) for a growing K B * I , and vice versa. In the Figure 3.16 we show different evolutions of the stress intensity factor with the crack length for various wall thickness ratio and confinement. We observe that when the confining pressure r increases fractures propagation are contained and the same effect is noticed for larger thick wall ratio w. Depending where the pre-fracture tip is located we can draw different fracture regime summarized in three possible evolutions (see Figure 3.17 (a) For this evolution K B * I is strictly increasing thus for any pre-fracture length l 0 the sample will burst. The idea is the fracture initiates once the pressure is critical, then propagates along the sample until the failure. A sudden drop of the pressure is measured signature of the initiation pressure. By recording this pressure P i the fracture toughness K Ic is calculated using equation (3.11). (b) By making a pre-fracture l 0 ≥ l SU , this leads to the same conclusion than (a). However for l U S ≤ l 0 ≤ l SU the fracture propagation is stable. To get an estimation of the fracture toughness, we need to track the fracture and to measure its length otherwise is vain. A risky calculation is to assume the fracture initiation length be at the inflection point l SU before the burst. Reasons are the critical point can be a plateau shape leading to imprecise measure of l SU , secondly, since the rock is not a perfect brittle materials the l SU can be slightly different. (c) For Griffith and any cohesive models which assume compressive forces in front of the notch tips, the fracture propagation is not possible. Of course others initiation criterion are possible as critical stress as an example. Application to sandstone experiments A commercial rock mechanics laboratory provided fracture toughness results for different pressure ratios on sandstones and the geometries summarized in the Table 3.3. As their end-caps and hardware are built for 0.25' center hole diameter with 2.5" diameter sample, w values are restricted to 9. Considering no pore pressure and applying stricto sensu the following equation . l U S is a critical point from unstable to stable crack propagation, vice versa for l SU . The fracture does not propagates at stop point denoted l ST by taking l equals to the dimensionless pre-notch length l 0 and the critical pressure recorded P i = P ic , we obtain that the fracture toughness K Ic is influenced by the confining pressure r as reported in the last column of the Table 3.3. 
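The regime analysis used to read Figures 3.16-3.18 reduces to the sign of the slope of K B * I along the crack path: a growing K B * I means unstable (brutal) propagation under monotonically increasing pressure, a decreasing positive K B * I means stable propagation, and K B * I ≤ 0 means no propagation in the Griffith sense. A schematic classification helper is sketched below; the arrays standing for K J * I and K C * I are hypothetical placeholders, since the actual curves come from Clifton's charts or from the G θ computations.

import numpy as np

def classify_regimes(l, K_J_star, K_C_star, r):
    # Superposition (3.12): K*_B(l) = K*_J(l) - r * K*_C(l), then classify each point.
    K_B = K_J_star - r * K_C_star
    dK = np.gradient(K_B, l)
    labels = np.where(K_B <= 0.0, "no propagation",
                      np.where(dK > 0.0, "unstable", "stable"))
    return K_B, labels

# Placeholder curves, for illustration of the classification logic only.
l = np.linspace(0.05, 0.95, 19)
K_J_star = 0.4 + 1.2 * l**2      # hypothetical jacketed curve
K_C_star = 0.8 * l               # hypothetical confinement curve
K_B, labels = classify_regimes(l, K_J_star, K_C_star, r=1.0 / 6.0)
for li, Ki, lab in zip(l, K_B, labels):
    print(f"l = {li:4.2f}   K*_B = {Ki:6.3f}   {lab}")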
However, the evolutions of K B * I with respect to l in the Figure 3.18 (right) shows that all confining experiments (Id 1-5) have a compressive area in front of the fracture tips. Moreover pre-fractures are located in the stable propagation regime, in fine the sample cannot break according to Griffith's theory. P i √ aπK B * I (1, l, w, r) = K Ic , Chapter 3. Sample ID 2a [in] w The wall thickness cylinder w and the confining pressure ratio r play a fundamental role in the crack stability regime, to obtain a brutal fracture propagation after initiation smaller (w, r) is required. A possible choice is to take w = 3 for r = {1/8, 1/6} as shown in Figure 3. [START_REF] Bažant | Scaling of Structural Strength[END_REF]. P ic [Psi] r l 0 K Ic [Psi √ in] Id 0 0. A stable-unstable regime is observed for (r = 1/6, w = 5). We performed a numerical simulation with the phase-field model to hydraulic fracturing to verify the ability of the simulation to capture the bifurcation point. For that we fix K Ic = 1, the geometric parameters (a = 1, b = 5, l 0 = .15, r = 1/5) and the internal length = 0.01. Then, by pressuring the sample (driven-pressure) damage grows until the critical point. After this 3.4. Fracture stability in the burst experiment with a confining pressure loading, the damage jumps to the external boundary and break the sample. The normalized SIF is computed using the K Ic /(P i √ aπ) for different fracture length and reported in the Figure 3.19 Remark 4 Stability analysis can be also done by volume-driven injection into the inner cylinder using phase-field models. This provides stable fracture propagation, and normalized stress intensity factor can be rebuild using simulations outputs. Conclusion Through this chapter we have shown that the phase-field models for hydraulic fracturing is a good candidate to simulate fractures propagation in the toughness dominated regime. The verification is done for a single-fracture and multi-fracking propagation scenario. Simulations show that the multi-fractures propagation is the worst case energetically speaking contrary to the growth of a single fracture in the network which is the best total energy minimizer. Moreover the bifurcation to a loss of symmetries (e.g. single fracture tip propagation) is intensified by the density of fractures in the network. The pressure-driven burst experiment focuses on fracture stability. The confining pressure and the thickness of the sample might contain fractures growth. By carefully selecting those two parameters (confinement pressure ratio and the geometry) the experiment can be designed to calculate the fracture toughness for rocks. In short those examples illustrate the potential of the variational phase-field models for hydraulic fracturing associated with the minimization principle to account for stable volume-driven fractures. The loss of symmetry in the multi-fracking scenario is a relevant example to illustrate the concept of variational argument. Same results is confirmed by coupling this model with fluid flow as detailed in Chukwudozie [START_REF] Chukwudozie | Application of the Variational Fracture Model to Hydraulic Fracturing in Poroelastic Media[END_REF]. Substituting (3.19) into (3.14), the fluid pressure is obtained. p = 3 2 G 2 c E π V (3.20) Similarly, the fracture length during propagation is obtained by substituting (3.16) into (3.14). 
l = 3 E V 2 4π G c (3.21) Penny-Shaped Fracture (3d domain): For a penny-shaped fracture in a 3d domain, the fracture volume is V = 16pl 3 3E (3.22) where l denotes the radius, while the critical fluid pressure [START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF] is p c = πG c E 4l 0 (3.23) For an initial fracture radius l 0 , the critical volume is, V c = 64πl 5 0 G c 9E (3.24) If one follows a procedure similar to that for the line fracture, we will obtain the following relationships for the evolution of the fluid pressure and fracture radius p c = 5 π 3 G 3 c E 2 12V l = 5 9E V 2 64πG c (3.25) Chapter 4. Variational models of perfect plasticity Definition 7 (Generalized standard plasticity models) i. A choice of independent states variables which includes one or multiple internal variables. ii. Define a convex set where thermodynamical forces lie in. Concerning i. we choose the plastic strain tensor (symmetric) p and the infinitesimal total deformation denoted e(u). The total strain is the symmetrical part of the spatial gradient of the displacement u, i.e. e(u) = ∇u + ∇ T u 2 . The kinematic admissibility is the sum of the plastic and elastic strains denoted ε, given by, e(u) = ε + p. For ii. consider a free energy density ψ a differentiable convex state function which depends on internal variables. Naturally, thermodynamical forces are defined from the free energy by σ = ∂ψ ∂e (e, p), τ = - ∂ψ ∂p (e, p). (4.1) Commonly the free energy takes the form of ψ(e, p) = 1 2 A(e(u)p) : (e(u)p), where A is the Hooke's law tensor. It follows that, σ = τ = A(e(u)p). However for clarity we continue to use τ instead. Internal variables (e, p) and their duals (σ, τ ) are second order symmetric tensors and become n × n symmetric matrices denoted M n s after a choice of an orthonormal basis and space dimension of the domain (Ω ⊂ R n ). To complete the second statement ii., let K be a non empty closed convex subset of M n s where τ lies in. This subset is called elastic domain for τ . Assume that K is fixed and time independent, such that its boundary is the convex yield surface f Y : M n s → R, defined by, τ ∈ K = {τ * ∈ M n s : f Y (τ * ) ≤ 0} (4.2) Precisely, for any τ that lies in the interior of K denoted int(K) the yield surface is strictly negative. Otherwise, τ belongs to the boundary noted ∂K and the yield function vanishes: f Y (τ ) < 0, τ ∈ int(K) f Y (τ ) = 0, τ ∈ ∂K . (4.3) Let us apply the normality rule on it to get the plastic evolution law. In the case where ∂K is differentiable the plastic flow rule is defined as, 4.1. Ingredients for generalized standard plasticity models ṗ = η ∂f Y ∂τ (τ ), with η = 0 if f Y (τ ) < 0 ≥ 0 if f Y (τ ) = 0 (4.4) where η is the Lagrange multiplier. Sometimes the convex K has corners and the outer normal cannot be defined (f Y is not differentiable), thus, the normality rule is written using Hill's principle, also known as maximum dissipation power principle, i.e., τ ∈ K, (τ -τ * ) : ṗ ≥ 0, ∀τ * ∈ K. (4.5) This is equivalent to say that ṗ lies in the outer normal cone of K in τ , ṗ ∈ N K (τ ) := { ṗ : (τ * -τ ) ≤ 0 ∀τ * ∈ K}. (4.6) However we prefer to introduce the indicator function of τ ∈ K, and write equivalently the normality rule as, ṗ lies in the subdifferential set of the indicator function. For that, the indicator function is, I K (τ ) = 0 if τ ∈ K +∞ if τ / ∈ K (4.7) and is convex by construction. 
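For concreteness, and anticipating the Von Mises criterion used in Section 4.3, the standard example below spells out an elastic domain of the form (4.2) together with the support function defined in (4.10) below; it is a textbook computation, recalled here only as an illustration.

\[
K = \Bigl\{ \tau \in M^n_s \;:\; f_Y(\tau) = \sqrt{\tfrac{3}{2}\,\mathrm{dev}(\tau) : \mathrm{dev}(\tau)} - \sigma_p \le 0 \Bigr\},
\qquad
H(q) = \sup_{\tau \in K} \tau : q =
\begin{cases}
\sqrt{\tfrac{2}{3}}\,\sigma_p\,|q| & \text{if } \operatorname{tr} q = 0,\\
+\infty & \text{otherwise,}
\end{cases}
\]

since K is invariant under hydrostatic shifts τ → τ + cI, the supremum is finite only in deviatoric directions; H( ṗ) is then the plastic dissipation density associated with a deviatoric plastic flow.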
The normality rule is recovered by applying the definition of subgradient, such that, ṗ is a subgradient of I K at a point τ ∈ K for any τ * ∈ K, given by, τ ∈ K, I K (τ * ) ≥ I K (τ ) + ṗ : (τ * -τ ), ∀τ * ∈ K ⇔ ṗ ∈ ∂I K (τ ), τ ∈ K (4.8) where the set of all sub-gradients at τ is the sub-differential of I K at τ and is denoted by ∂I K (τ ). At this stage of the analysis, Hill's principle is equivalent to convex properties of the elastic domain K and the normality plastic strain flow rule. For τ ∈ K, Hill ⇔ ṗ ∈ N K (τ ) ⇔ ṗ ∈ ∂I K (τ ) (4.9) Dissipation of energy during plastic deformations All ingredients are settled, such as, we have the variable set (u, p) and their duals (σ, τ ) which lie in the convex set K. Also, the plastic evolution law is given by ṗ ∈ ∂I K (τ ). It is convenient to compute the plastic dissipated energy during a plastic deformation process. For that, the dissipated plastic power density can be constructed from the Clausius-Duhem inequality. To construct such dissipation energy let us define first the support function H(q), q ∈ M 3 s → H(q) := sup τ ∈K {τ • q} ∈ (-∞, +∞] (4.10) The support function is convex, 1-homogeneous, Chapter 4. Variational models of perfect plasticity H(λq) = λH(q), ∀λ > 0, ∀q ∈ M n s (4.11) and it follows the triangle inequality, i.e., H(q 1 + q 2 ) ≤ H(q 1 ) + H(q 2 ), for every q 1 , q 2 ∈ M n s . (4.12) The support function of the plastic strain rate H( ṗ) is null if the plastic flow is zero, non negative when 0 ∈ K, and takes the value +∞ when K is not bounded. Using Clausius-Duhem inequality for an isotherm transformation, the dissipation power is defined by D = σ : ė -ψ, (4.13) and the second law of thermodynamics enforce the dissipation to be positive or null, D = τ : ṗ ≥ 0. (4.14) Using Hill's principle, the definition of the support function and some convex analysis, one can show that the plastic dissipation is equal to the support function of the plastic flow. D = H( ṗ) (4.15) The starting point to prove (4.15) is the Hill's principle or equivalently the plastic strain flow rule. For τ ∈ K, τ : ṗ ≥ τ * : ṗ, ∀τ * ∈ K. By passing the right term to the left and taking the supremum over all ṗ ∈ M n s , we get, sup ṗ∈M n s {τ : ṗ -H( ṗ)} ≥ 0. (4.18) Since K is a non empty close convex set, H( ṗ) is convex and lower semi continuous, we have built the convex conjugate function of H( q) in the sense of Legendre-Fenchel. Moreover, one observes that the conjugate of the support function is the indicator function, given by, I K (τ ) := sup ṗ∈M n s {τ : ṗ -H( ṗ)} = 0 if τ ∈ K + ∞ if τ / ∈ K (4.19) Hence, the following equality holds for τ ∈ K, D = τ : ṗ = H( ṗ). (4.20) Variational formulation of perfect plasticity models Remark 5 The conjugate subgradient theorem says that, for τ ∈ K a non empty closed convex set, ṗ ∈ ∂I K (τ ) ⇔ D = τ : ṗ = H( ṗ) + I K (τ ) ⇔ τ ∈ ∂H( ṗ) Finally, once the plastic dissipation power defined, by integrating over time [t a , t b ] for smooth evolution of p, the plastic dissipated energy is, D(p; [t a , t b ]) = t b ta H( ṗ(s)) ds (4.21) This problem is rate independent because the dissipation does not depend on the strain rate , i.e. D( ė, ṗ) = D( ṗ) and is 1-homogeneous. Variational formulation of perfect plasticity models Consider a perfect elasto-plastic material with a free energy ψ(e, p) occupying a smooth region Ω ⊂ R n , subject to time dependent boundary displacement ū(t) on a Dirichlet part ∂ D Ω of its boundary. 
For the sake of simplicity the domain is free of stress and no body force applies on it, such that, σ • ν = 0 on the complementary portion ∂ N Ω = ∂Ω \ ∂ D Ω, where ν denotes the appropriate normal vector. Assume the initial state of the material being (e 0 , p 0 ) = (0, 0) at t = 0. Internal variables e(u) and p are supposed to be continuous-time solution of the quasi-static evolution problem. At each time the body is in elastic equilibrium with the prescribed loads at that time, such as it satisfies the following equations,                  σ = ∂ψ ∂e (e, p) in Ω τ = - ∂ψ ∂p (e, p) ∈ ∂H( ṗ) in Ω div(σ) = 0 in Ω u = ū(t) on ∂ D Ω σ • ν = 0 on ∂ N Ω We set aside problems where plasticity strain may develop at the interface ∂ D Ω. The problem can be equivalently written in a variational formulation, which is based on two principles, i. Energy balance ii. Stability condition Let the total energy density be defined as the sum of the elastic energy and the dissipated plastic energy, E t (e(u), p) = ψ(e(u), p)ψ(e 0 , p 0 ) + D(p; [0, t]) Energy balance The concept of energy balance is related to the evolution of state variables in a material point, and enforce the total energy rate be equal to the mechanical power energy at each time, i.e. Ėt = σ t : ėt . ( The total energy rate is, Ėt = ∂ψ ∂e (e t , p t ) : ėt + ∂ψ ∂p (e t , p t ) : ṗt + H( ṗt ), (4.23) and using the definition of τ = -∂ψ/∂e and σ = ∂ψ/∂e, we obtain, τ t • ṗt = sup τ ∈K {τ : ṗt } (4.24) Stability condition for the plastic strain The stability condition for p is finding stable p t ∈ M n s for a given loading deformation e t . We propose to approximate the continuous time evolution by a time discretization, such that, 0 = t 0 < • • • < t i < • • • < t N = t b and at the limit max i |t it i-1 | → 0. At the current time t i = t, let the material be at the state e t i = e and p t i = p and the previous state (e t i-1 , p t i-1 ). The discretized plastic strain rate is ṗt (p-p t i-1 )/(t-t i-1 ). During the laps time from t i-1 to t the increment of plastic energy dissipated is t t i-1 H( ṗt )ds H(pp t i-1 ). Hence taking into account all small previous plastic dissipated energy events, the total dissipation is approximated by, D(p) := H(p -p t i-1 ) + D(p t i-1 ) (4.25) At the current time, a plastic strain perturbation is performed for a fixed total strain changing the system from (e, p) to (e, q). The definition of the stability condition adopted here is written as a variation of the total energy between this two states, p stable, e given ⇔ ψ(e, q) + H(qp t i-1 ) ≥ H(pp t i-1 ) + ψ(e, p), ∀q ∈ M 3 s (4.26) We wish to highlight the stability definition adopted, which is for infinitesimal transformations the flow rule. H(qp t i-1 ) ≥ H(pp t i-1 ) -ψ(e, q)ψ(e, p) qp : (qp), ∀q ∈ M n s , q = p (4.27) Consider small variations of the plastic strain p in the direction p for a growing total energy, such that for some h > 0 small enough and p + hp ∈ M n s we have, Using the Legendre transform, we get, τ ∈ ∂H(p -p t i-1 ) ⇔ (p -p t i-1 ) ∈ ∂I K (τ ). (4.29) To recover the continuous-time evolution stability for p, divide by δt = tt i-1 and pass δt to the limit. We recover the flow rule ṗ ∈ ∂I K (τ ), or equivalently in the conjugate space τ ∈ ∂H( ṗ). Let us justify the definition adopted of the stability by showing that there is no lowest energy that can be found for a given e t . 
Without loss of any generality assume a continuous straight smooth path p(t) starting at p(0) = p and finishing at p(1) = q, such as, (4.31) t ∈ [0, 1] → p(t) = (1 -t)p + tq, ∀q ∈ M n s (4. The right hand side is path independent, by taking the infimum over all plastic strain paths, we get, inf t →p(t) p(0)=p, p(1)=q 1 0 H( ṗ(s))ds ≥ τ * : (q -p) (4.32) The left hand side does not depends on τ * , taking the supremum for all τ * ∈ K, and applying the triangle inequality for any p t i-1 , one obtains, inf t →p(t) p(0)=p, p(1)=q 1 0 H( ṗ(s))ds ≥ H(q -p) ≥ H(q -p t i-1 ) -H(p -p t i-1 ). (4.33) which justifies the a posteriori adopted definition of the stability. The stability condition for the displacement is performed on the first chapter and we simply recover the equilibrium constitutive equations for the elastic problem with the prescribed boundary conditions. Numerical implementation and verification of perfect elasto-plasticity models - ∂ N Ω g(t) • u dH n-1 , where H n-1 denotes the Hausdorff n -1-dimensional measure of the boundary. Typical plastic yields criterion used for metal are Von Mises or Tresca, which are well known to have only a bounded deviatoric part of the stress, thus they are insensitive to any stress hydrostatic contributions. Consequently, the plastic strain rate is also deviatoric ṗ ∈ dev(M n s ) and it is not restrictive to assume that p ∈ dev(M n s ). For being more precise but without going into details, existence and uniqueness is given for solving the problem in the stress field, σ ∈ L 2 (Ω; M n s ) (or e(u) ∈ L 2 (Ω; M n s ) ) with a yield surface constraint σ ∈ L ∞ (Ω; dev(M n s )). Experimentally it is observed that plastic strain deformations concentrate into shear bands, as a macroscopic point of view this localization creates sharp surface discontinuities of the displacement field. In general the displacement field cannot be solved in the Sobolev space, but find a natural representation in a bounded deformation space u ∈ BD(Ω) when the plastic strain becomes a Radon measure p ∈ M(Ω ∪ ∂ D Ω; dev(M 3 s )). The problem of finding (u, p) minimizing the total energy and satisfying the boundary conditions is solved by finding stable states variables trajectory i.e. stationary points. This quasi-static evolution problem, is numerically approximated by solving the incremental time problem, i.e. for a given time interval [0, T ] subdivided into (N + 1) steps we have, 0 = t 0 < t 1 < • • • < t i-1 < t i < • • • < t N = T . The discrete problem converges to the continuous time evolution provided max i (t it i-1 ) → 0, and the total energy at the time t i in the discrete setting is, E t i (u i , p i ) = Ω 1 2 A(e(u i ) -p i ) : (e(u i ) -p i ) + D i (p i ) dx - ∂ N Ω g(t i ) • u i dH n-1 where, 4.3. Numerical implementation and verification of perfect elasto-plasticity models D i (p i ) = H(p i -p i-1 ) + D i-1 (4.34) for a prescribed u i = ūi on ∂ D Ω. Let i be the the current time step, the problem is finding (u i , p i ) that minimizes the discrete total energy, i.e (u i , p i ) := argmin u∈C i p∈M(Ω∪∂ D Ω;dev(M 3 s )) E t i (u, p) (4.35) where p = (ū iu) • ν on ∂ D Ω and C i is the set of admissible displacement, C i = {u ∈ H 1 (Ω) : u = ūi on ∂ D Ω}. The total energy E(u, p) is quadratic and strictly convex in u and p separately. For a fixed u or p, the minimizer of E(•, p) or E(u, •) exists, is unique and can easily be computed. Thus, a natural algorithm technique employed is the alternate minimization detailed in Algorithm 3, where δ p is a fixed tolerance. 
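In one dimension both sub-problems of Algorithm 3 have explicit solutions, which makes the structure of the incremental problem transparent. The sketch below is a single-material-point illustration only (the elastic solve is trivial there), with placeholder names and values; it implements the update obtained by minimizing ½E(e − p)² + σ_p|p − p_{i−1}| with respect to p, i.e. an elastic predictor followed by a projection onto the yield surface.

# Minimal 1d sketch of the cell-wise incremental problem behind Algorithm 3
# (illustrative names and values only).

def plastic_update(e, p_old, E, sigma_p):
    s_trial = E * (e - p_old)                  # elastic predictor (trial stress)
    if abs(s_trial) <= sigma_p:                # inside the elastic domain: no plastic flow
        return p_old
    sign = 1.0 if s_trial > 0 else -1.0        # projection back onto |sigma| = sigma_p
    return p_old + sign * (abs(s_trial) - sigma_p) / E

E, sigma_p, p = 1.0, 1.0, 0.0
for e in [0.5, 1.0, 1.5, 2.0, 1.0]:            # monotone loading then partial unloading
    p = plastic_update(e, p, E, sigma_p)
    print(e, p, E * (e - p))                   # stress is clamped at +/- sigma_p once yielded

Once the trial stress exceeds σ_p the update clamps the stress on the yield surface and accumulates a permanent strain, which is exactly the behaviour expected of the cell-wise plastic projection step.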
More precisely, at the loading time t i , for a given p j i , let find u j i that minimizes E(u, p j i ), notice that the plastic dissipation energy does not depend on the strain e(u), thus, u j i := argmin u∈C i Ω 1 2 A(e(u)p j i ) : (e(u)p j i ) dx - ∂ N Ω g(t) • u dH n-1 (4.36) This is a linear elastic problem. Then, for a given u j i let find p on each element cell, such as it minimizes E(u j i , p). This problem is not easy to solve in the primal formulation, p j i := argmin p∈M(Ω∪∂ D Ω;dev(M n s )) 1 2 A(e(u j i )p) : (e(u j i )p) + H(pp i-1 ) but from the previous analysis, the stability condition of this problem is A(e(u j i )p) ∂H(pp i-1 ). Using the Legendre-transform, the stability of the conjugate problem is given by (p -p i-1 ) ∈ ∂I K (A(e(u j i ) -p)). One can recognize the flow rule in the discretized time. This is the stability condition of the problem, p j i := argmin p∈M(Ω∪∂ D Ω;dev(M n s )) A(e(u j i )-p)∈K 1 2 A(p -p i-1 ) : (p -p i-1 ). The minimization with respect to u is a simple linear problem solved using preconditioned conjugated gradient while minimization with respect to p can be reformulated Solve the equilibrium, u j+1 := argmin u∈C i E i (u, p j ) 4: Solve the plastic strain projection on each cell, p j+1 := argmin j := j + 1 6: until p jp j-1 L ∞ ≤ δ p 7: Set, u i := u j and p i := p j Numerical verifications A way to do a numerical verification is to recover the closed form solution of a bi-axial test in 3D provided in [START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF]. In the fixed orthonormal basis (e 1 , e 2 , e 3 ), consider a domain Ω = (-d/2, d/2) × (-l/2, l/2) × (0, l), (d < l), with the boundary conditions:    σ 11 = 0 on x 1 = ±d/2 σ 22 = g 2 on x 1 = ±l/2 σ 13 = σ 23 = 0 on x 3 = 0, l and, add u 3 = 0 on x 3 = 0 u 3 = tl on x 3 = l. Considering the classical problem to solve,    div(σ) = 0 in Ω σ = Ae(u) in Ω e(u) = (∇u + ∇ T u)/2 in Ω constrained by a Von Mises plasticity yield criterion, 3 2 dev(σ) : dev(σ) ≤ σ p It is shown in [START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF] that the domain remains elastic until the plasticity is triggered at a critical loading time t c as long as 0 ≤ g 2 ≤ σ p / √ 1 -ν + ν 2 , t c = 1 2E (1 -2ν)g 2 + 4σ 2 p -3g 2 2 where (E, ν) denote respectively the Young's modulus and the Poisson ratio. For 0 ≤ t ≤ t c the elastic solution stands for        σ(t) = g 2 e 2 ⊗                              σ(t) =g 2 e 2 ⊗ e 2 + σ3 e 3 ⊗ e 3 , σ3 = 1 2 g 2 + 4σ 2 p -3g 2 2 e(t) = -ν(1 + ν) g 2 E e 1 ⊗ e 1 + (1 -ν 2 ) g 2 E e 2 ⊗ e 2 + t(-νe 1 ⊗ e 1 -νe 2 ⊗ e 2 + e 3 ⊗ e 3 ) p(t) =(t -t c ) - g 2 + σ3 2 σ3 -g 2 e 1 ⊗ e 1 + 2g 2 -σ3 2 σ3 -g 2 e 2 ⊗ e 2 + e 3 ⊗ e 3 u(t) = -ν(1 + ν) g 2 E -νt c - g 2 + σ3 2 σ3 -g 2 (t -t c ) x 1 e 1 + (1 -ν 2 ) g 2 E -νt c + 2g 2 -σ3 2 σ3 -g 2 (t -t c ) x 2 e 2 + tx 3 e 3 (4.38) A numerical simulation has been performed on a domain parametrized by l = .5 and d = .2, pre-stressed on opposite faces by g 2 = .5 with the material parameters E = 1,σ p = 1 and a Poisson ratio set to ν = .3. For those parameters, numerical results and exact solution have been plotted see Figure 4.1, and matches perfectly. One difficulty is to get closed form for different geometry and plasticity criterion. 
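As a quick sanity check of this particular run, the critical loading time can be evaluated directly from the closed-form expression above; the snippet below is only that arithmetic, with the parameters of the simulation.

# Evaluation of the critical time t_c of the bi-axial test for the parameters used
# above (E = 1, nu = 0.3, sigma_p = 1, g2 = 0.5); purely illustrative arithmetic.
from math import sqrt

E, nu, sigma_p, g2 = 1.0, 0.3, 1.0, 0.5
assert 0.0 <= g2 <= sigma_p / sqrt(1.0 - nu + nu**2)   # validity range of the solution
t_c = ((1.0 - 2.0 * nu) * g2 + sqrt(4.0 * sigma_p**2 - 3.0 * g2**2)) / (2.0 * E)
print(t_c)                                             # about 1.0

The elastic stage therefore ends near t_c ≈ 1.0, so the displacement field reported at t = 2.857 lies well inside the plastic regime.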
The alternate minimization technique converges to the exact solution of this example for Von Mises plasticity in 3d.

Conclusion

The strategy adopted to model a perfect elasto-plastic material is to prescribe the elastic stress domain (a closed convex set) through plastic yield functions, without any special treatment of corners, and to approximate the continuous evolution problem by discrete time steps. The implemented algorithm solves alternately the elastic problem and the plastic projection onto the yield surface. Hence, other perfect-plastic yield criteria can be implemented without difficulty. A verification is performed on the bi-axial test with the Von Mises yield criterion.

Chapter 5

Variational phase-field models of ductile fracture by coupling plasticity with damage

Phase-field models of brittle fracture, also referred to as gradient damage models, are very efficient at predicting crack initiation and propagation in brittle and quasi-brittle materials [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF][START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF]. They were originally conceived as an approximation of Francfort and Marigo's variational formulation [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF], which is based on Griffith's idea of a competition between elastic and fracture energy. Their model inherits a fundamental limitation of Griffith's theory: the displacement is discontinuous across the damage localization strip, which is not what is observed during fracture nucleation in ductile materials. Moreover, these models cannot be used to predict cohesive-ductile fractures since no permanent deformations are accounted for. Plasticity models [START_REF] Suquet | Sur les équations de la plasticité: existence et régularité des solutions[END_REF][START_REF] Salençon | Elasto-plasticité[END_REF][START_REF] Halphen | Sur les matériaux standard généralisés[END_REF][START_REF] Maso | Quasistatic crack growth in elasto-plastic materials: The two-dimensional case[END_REF][START_REF] Babadjian | Quasi-static evolution in nonassociative plasticity: the cap model[END_REF] are widely used to capture such effects through the introduction of the plastic strain variable. To capture ductile fracture patterns, the idea is to couple the plastic strain coming from plasticity models with the damage variable of the phase-field approaches to fracture. The goal of this chapter is to extend the work of Alessi, Marigo and Vidoli [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF] by considering any associated perfect plasticity model and by providing a general algorithm to solve the problem in any dimension.
We provide a qualitative comparison of crack nucleation in various specimen with published experimental results on metals material. We show capabilities of the model to recover cracks patterns characteristics of brittle and ductile fractures. After the set of parameters being adjusted to recover ductile fracture we focus solely on such regime to study cracks nucleation and propagation phenomenology in mild notched specimens. The chapter is organized as follow: Section 5.1.1 starts by aggregating some experiments illustrating mechanisms of ductile fracture which will constitute basis of numerical comparisons provided in the last part of this chapter. Section 5.1.2 is devoted to the introduction of variational phase-field models coupled with perfect plasticity and to recall some of their properties. Section 5.1.3 focuses on one dimension bar in traction to provide the cohesive response of the material and draw some fundamental properties similarly 5.1. Phase-field models to fractures from brittle to ductile to [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF]. A numerical implementation technique to solve such coupled models is provided in section 5.2. For the remainder we investigate ductile fracture phenomenology by performing simulations on various geometries such as, rectangular specimen, a mild notch 2d plane strain and 3d round bar respectively exposed in sections 5. Numerous experimental evidences show a common phenomenology of fracture nucleation in a ductile materials. To illustrate this, we have selected relevant experiments showing fracture nucleation and propagation in a plate and in a round bar. For instance in [START_REF] Spencer | The influence of iron content on the plane strain fracture behaviour of aa 5754 al-mg sheet alloys[END_REF] the role of ductility with the influence of the iron content in the formation of shear band have been investigated. Experiments on Aluminum alloy AA 5754 Al -Mg show fractures nucleation and evolution in the thickness direction of the plate specimen illustrated in Figure 5.2. The tensile round bar is another widely used test to investigate ductile fractures. However, tracking fractures nucleation inside the material is a challenging task and requires special equipment like tomography imaging to probe. Nevertheless Benzerga [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF][START_REF] Amine Benzerga | Synergistic effects of plastic anisotropy and void coalescence on fracture mode in plane strain[END_REF] and Luu [START_REF] Luu | Déchirure ductile des aciers à haute résistance pour gazoducs (X100)[END_REF] results show pictures of cracks nucleation and propagation inside those types of samples see Figure 5.19. A simpler method is the fractography which consists in studding fracture surfaces of materials after failure of the samples. Typical ductile fractures 5.1. Phase-field models to fractures from brittle to ductile and powerful approach to study theoretically and solve numerically those problems. The coupling between both models is done at the proposed total energy level. 
We start by recalling some important properties of variational phase-field models interpreted as gradient-damage models and variational perfect plasticity. Consider an elasto-plastic-damageable material with A the Hooke's law tensor occupying a region Ω ⊂ R n in the reference configuration. The region Ω is subject to a time dependent boundary displacement ū(t) on a Dirichlet part of its boundary ∂ D Ω and time stress dependent g(t) = σ • ν on the remainder ∂ N Ω = ∂Ω \ ∂ D Ω, where ν denotes the appropriate normal vector. A safe load condition is required for g(t) to set aside issues in plasticity theory. For the sake of simplicity body forces are neglected such that at the equilibrium, the stress satisfies, div(σ) = 0 in Ω The infinitesimal total deformation e(u) is the symmetrical part of the spatial gradient of the displacement field u, i.e. e(u) = ∇u + ∇ T u 2 Since the material has permanent deformations, it is usual in small deformations plasticity to consider the plastic strain tensor p (symmetric) such that the kinematic admissibility is an additive decomposition, e(u) = ε + p where ε is the elastic strain tensor. The material depends on the damage variable denoted α which is bounded between two extreme states, α = 0 is the undamaged state material and α = 1 refers to the broken part. Let the damage deteriorate the material properties by making an isotropic modulation of the Hooke's law tensor a(α)A, where the stiffness function a(α) is continuous and decreasing such that a(0) = 1, a(1) = 0. In linearized elasticity the recoverable energy density of the material stands for, ψ(e(u), α, p) := 1 2 a(α)A(e(u)p) : (e(u)p) Consequently the relation which relates the stress tensor σ to the strain is, One can recognize the Hill's principle by applying the definition of subdifferential and the indicator function. Since b(α)K is a none empty closed convex set, using Legendre-Fenchel, the conjugate of the plastic flow is σ ∈ b(α)∂H( ṗ), where the plastic dissipation potential H(q) = sup τ ∈K {τ : q} is convex, subadditive, positively 1-homogeneous for all q ∈ M n×n s . The dissipated plastic energy is obtained by integrating the plastic dissipation power over time, such that, φ p := t 0 b(α)H( ṗ(s)) ds (5.1) This dissipation is not unique and we have to take into account the surface energy produced by the fracture. Inspired by the phase-field models to brittle fracture [START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF][START_REF] Pham | Stability of homogeneous states with gradient damage models: Size effects and shape effects in the three-dimensional setting[END_REF] we define the surface dissipation term as, φ d := t 0 σ 2 c 2Ek w (α) α + 2 ∇α • ∇ α + b (α) α t 0 H( ṗ(s)) ds dt (5.2) where the first term is the classical approximated surface energy in brittle fracture and the last term is artificially introduced to be combined with φ p . Precisely, after summation of the free energy ψ(e(u), α, p), the work force, the dissipated plastic energy φ p and the dissipated damage energy φ d , the total energy has the following form, E t (u, α, p, p) = Ω 1 2 a(α)A(e(u) -p) : (e(u) -p) dx - ∂ N Ω g(t) • u dH n-1 + Ω b(α) t 0 H( ṗ(s)) ds dx + σ 2 c 2Ek Ω w(α) + 2 |∇α| 2 dx (5.3) where p = t 0 ṗ(s) ds is the cumulated plastic strain which is embedded in the cumulated plastic dissipation energy t 0 H( ṗ(s)) ds. 
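To make the structure of (5.3) concrete, its one-dimensional pointwise density can be written and evaluated directly. The sketch below uses, purely as an example, choices of the AT 1 type introduced in Table 5.1 below, namely a(α) = (1 − α)², w(α) = α and b(α) = a(α) (the small residual η_b being neglected), together with the normalisation k = 1/2 associated with this choice; all numerical values are placeholders, and the external work term of (5.3), being a boundary integral, is omitted.

# Illustrative 1d pointwise density of the total energy (5.3); example choices only.

def energy_density(e, p, pbar, alpha, dalpha_dx, E, sigma_p, sigma_c, ell, k=0.5):
    a = (1.0 - alpha) ** 2                     # stiffness modulation a(alpha)
    b = a                                      # coupling function b(alpha), example choice
    w = alpha                                  # damage dissipation potential w(alpha)
    elastic = 0.5 * a * E * (e - p) ** 2
    plastic = b * sigma_p * pbar               # b(alpha) times the cumulated plastic dissipation
    damage = sigma_c ** 2 / (2.0 * E * k) * (w + ell ** 2 * dalpha_dx ** 2)
    return elastic + plastic + damage

print(energy_density(e=0.02, p=0.01, pbar=0.01, alpha=0.1, dalpha_dx=0.0,
                     E=1.0, sigma_p=1.0, sigma_c=2.0, ell=0.1))

The three contributions — stored elastic energy, plastic dissipation weighted by b(α), and the regularised surface energy — appear as separate terms, each simple in one variable at a time, which is what the alternate minimization of section 5.2 exploits.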
The surface dissipation potential w(α) is a continuous increasing function such that w(0) = 0 and up to a rescaling, w(1) = 1. Since the damage is a dimensionless variable, the introduction of ∇α enforce to have > 0 a regularized parameter which has a dimension of the length. Note that the total energy (5.3) is composed of two dissipations potentials ϕ p and ϕ d coupled where, ϕ p = Ω b(α) t 0 H( ṗ(s)) ds dx, ϕ d = σ 2 c 2Ek Ω w(α) + 2 |∇α| 2 dx. (5.4) 5.1. Phase-field models to fractures from brittle to ductile Taking p = 0 in (5.3), the admissible stress space is bounded by, A -1 σ : σ ≤ σ 2 c Ek max α w (α) c (α) where E is the Young's modulus, the compliance function is c(α) = 1/a(α) and let k = max α w (α) c (α) . Therefore, without plasticity in one dimensional setting an upper bound of the stress is σ c . A first conclusion is the total energy (5.3) is composed of two coupled dissipation potentials associated with two yields surfaces and their evolutions will be discussed later. In the context of smooth triplet state variable ζ = (u, α, p) and since the above total energy (5.3) must be finite, we have α ∈ H 1 (Ω) and e(u), p belong to L 2 (Ω). However, experimentally it is observed that plastic strain concentrates into shear bands. In our model since ṗ ∈ b(α)K, the plastic strain concentration is driven by the damage localization and both variables intensifies on the same confined region denoted J(ζ), where J is a set of "singular part" which a priori depends on all internal variables. Also, the damage is continuous across the normal surfaces of J(ζ) but not the gradient damage term which may jump. Accordingly, the displacement field cannot be solved in the Sobolev space, but find a natural representation in special bounded deformation space SBD if the Cantor part of e(u) vanishes, so that the strain measure can be written as, e(u) = e(u) + u ν H n-1 on J(ζ(x)) where e(u) is the Lebesgue continuous part and denotes the symmetrized tensor product. For the sake of simplicity, consider the jumps set of the displacement being a smooth enough surface, i.e the normal ν is well defined, and there is no intersections with boundaries such that J(ζ)∩∂Ω = ∅. The plastic strain turns into a Dirac measure on the surface J(ζ). Without going into details, the plastic strain lies in a non-conventional topological space for measures called Radon space denoted M. Until now, the damage evolution have not been set up and the plastic flow rule is hidden in the total energy adopted. Let us highlight this by considering the total energy be governed by three principles; damage irreversibility, the stability of E t (u, α, p) with respect to all admissible variables (u, α, p) and the energy balance. We focus on the time-discrete evolution, by considering a time interval [0, T ] subdivided into (N + 1) steps such that, 0 = t 0 < t 1 < • • • < t i-1 < t i < • • • < t N = T . The following discrete problem converges to the continuous time evolution provided max(t it i-1 ) → 0. At any time t i , the sets of admissible displacement, damage and plastic strain fields respectively denoted C i , D i and Q i are: 107 Chapter 5. 
Variational phase-field models of ductile fracture by coupling plasticity with damage C i = u ∈ SBD(Ω) : u = ū(t i ) on ∂ D Ω , D i = α ∈ H 1 (Ω) : α i-1 ≤ α < 1 in Ω , Q i = p ∈ M( Ω; M n×n s ) such that, p = u ν on J ζ(x) (5.5) and because plastic strains may develop at the boundary, we know from prior works on plasticity [START_REF] Dal Maso | Quasistatic evolution problems for linearly elastic-perfectly plastic materials[END_REF] that we cannot expect the boundary condition to be satisfied, thus we will have to set up p = (ū(t i )u) ν on ∂ D Ω. It is convenient to introduce Ω ⊃ Ω a larger computational domain which includes the jump set and ∂ D Ω, this will become clearer. Note that the damage irreversibility is in the damage set D i . The total energy of the time-discrete problem is composed of (5.3) on the regular part and b(α)D ( u ν, [0, t i ]) on the singular part, such that, E t i (u, α, p) = Ω\J(ζ) 1 2 a(α)A(e(u) -p) : (e(u) -p) dx - ∂ N Ω g(t i ) • u dH n-1 + Ω b(α)D i (p) dx + σ 2 c 2Ek Ω\J(ζ) w(α) + 2 |∇α| 2 dx (5.6) where D i (p) = H(p -p i-1 ) + D i-1 (5.7) the total energy is defined over the regular and singular part of the domain, and the evolution is governed by, Definition 8 (Time discrete coupled plasticity-damage evolution by local minimization) At every time t i find stable variables trajectory (u i , α i , p i ) ∈ C i × D i × Q i that satisfies the variational evolution: i. Initial conditions: u 0 = 0, α 0 = 0 and p 0 = 0 ii. Find the triplet ζ i = (u i , α i , p i ) which minimizes the total energy, E t i (u, α, p) iii. Energy balance, E t i (u i , α i , p i ) =E t 0 (u 0 , α 0 , p 0 ) + i k=1 ∂ D Ω (σ k ν) • (ū k -ūk-1 ) dH n-1 - ∂ N Ω (g(t k ) -g(t k-1 )) • u k dH n-1 (5.8) The damage and plasticity criterion are obtained by writing the necessary first order optimality condition of the minimizing problem E t i (u, α, p). Explicitly, there exists h > 0 small enough, such that for (u i + hv, α i + hβ, p i + hq) ∈ C i × D i × Q i , E t i (u i + hv, α i + hβ, p i + hq) ≥ E t i (u i , α i , p i ) (5.9) Consider that the displacement at u i in the direction v might extend the jump set of J(v). The variation of the total energy E t i (u i + hv, α i + hβ, p i + hq) is equal to, Ω\(J(ζ i )∪J(v)) 1 2 a(α i + hβ)A e(u i + hv) -(p i + hq) : e(u i + hv) -(p i + hq) dx - ∂ N Ω g(t i ) • (u i + hv) dH n-1 + Ω\(J(ζ i )∪J(v)) b(α i + hβ)D i (p i + hq) dx + J(ζ i )∪J(v) b(α i + hβ)D i (( u i + hv) ν) dH n-1 + σ 2 c 2Ek Ω\(J(ζ i )∪J(v)) w(α i + hβ) + 2 |∇(α i + hβ)| 2 dx (5.10) Note that the plastic dissipation term is split over the regular part and the singular part and for simplicity we set aside the plastic strain localization on the Dirichlet boundary. Equilibrium and kinematic admissibility: Take β = 0 and q = 0 in (5.9) and (5.10) such that E t i (u i +hv, α i , p i ) ≥ E t i (u i , α i , p i ). Using (5.7) we just have to deal with the current plastic potential H which is subadditive and 1-homogeneous. Hence, the fourth term in (5.10) becomes, 5.1. Phase-field models to fractures from brittle to ductile 2. The damage yield criteria in the bulk, f D (σ t , α t (x), pt (x)) := - 1 2 c (α t (x)) E σ 2 t + σ 2 c 2kE w (α t (x)) -2 2 α t (x) + b (α t (x))σ p pt (x) ≥ 0 (5.31) 3. The damage yield criteria on x 0 , b (α t (x 0 )) u(x 0 ) σ p - 2 σ 2 c kE α t (x 0 ) ≥ 0 (5.32) 4. The damage yield criteria on ±L, α t (-L) ≥ 0, α t (L) ≤ 0 (5.33) 5. 
The plastic yield criteria in the bulk and on the jump, We restrict our study to r Y = σ c /σ p > 1, meaning that the plastic yield surface is below the damage one. Consequently after the elastic stage, the bar will behave plastically. During the plastic stage, the cumulation of plastic strain decreases f D until the damage yield criteria is reached. On the third stage both damage and plasticity evolves simultaneously such that f D = 0 and f Y = 0 on the jumps x 0 . Of course there is no displacement jump on the bar before the third stage. Let expose the solution (u, α, p) Chapter 5. Variational phase-field models of ductile fracture by coupling plasticity with damage for the elastic, plastic and plastic damage stages. f Y (σ t , α t (x)) := |σ t | -b(α t (x))σ p ≤ 0 ( 5 The elastic response of the bar ends once the tension reached u t = σ p /E. During this regime the damage and plastic strain remain equal to zero. After this loading point, the plasticity stage begins and we have a uniform p = p = u tσ p /E and α = 0 in Ω. Since b (α) < 0 and p increases during the plastic stage, the damage yield criteria f D decreases until the inequality (5.31) becomes an equality. At this loading time both criterion are satisfied, such that, f Y = 0 and f D = 0. Hence, plugging the equation (5.34) into (5.31), we get, -b (α t (x))p t (x) = σ p E - 1 2 c (α t (x))b 2 (α t (x)) + r 2 Y 2k w (α t (x)) -2 2 α t (x) (5.39) By taking α t (x) = 0 in the above equation, we get the condition when the plastic stage ends, for a uniform plastic strain, p = u t - σ p E = σ p (-b (0))E r 2 Y 2k w (0) - 1 2 c (0)b 2 (0) (5.40) The last stage is characterized by the evolution of the damage. For a given x 0 take L long enough to avoid any damage perturbation at the boundary such that, the damage remains equal to zero at the extremities of the bar α(±L) = 0 and assume being maximum at x 0 , α(x 0 ) = β. Let α ≥ 0 over [-L, x 0 ) with α (-L) = 0, multiplying the equation (5.31) by 2α and integrate over [-L, x 0 ), we get, - 2E σ p x 0 -L b (α t (x))α t (x)p t (x) dx = c(β) -c(0) σ 2 t σ 2 p + r 2 Y k w(β) -2 β 2 (5.41) A priori, the cumulated plastic strain evolves along the part of the bar [-L, x 0 ), but since the maximum damage value β is reached on x 0 and the stress is uniform in the bar we have σ t (x) ≤ b(β)σ p . In other words the plasticity does not evolve anymore in the bar except on x 0 , and p is equal to (5.40). We obtain a first integral of the form of, 2 β 2 = k r 2 Y c(β) -c(0) b 2 (β) + w(β) + 2 b(β) -b(0) p Ek σ p r 2 Y (5.42) We know that on the jump set, we have, b (β) u(x 0 ) σ p - 2 σ 2 c kE β = 0 (5.43) Since β is known, the stress on the bar and the displacement jump on x 0 can be computed. We define the energy release rate as the energy dissipated by the damage process, 5.2. Numerical implementation of the gradient damage models coupled with perfect plasticity G t := Ω\{x 0 } σ c 2kE w(α t (x)) + 2 α 2 t (x) + b(α t (x))σ p p dx + b(α t (x 0 ))σ p u(x 0 ) and the critical value is given for complete damage localization once σ = 0. Let us recall some fundamental properties for a(α), b(α) and w(α) to satisfy. Naturally the stiffness function must satisfy a (α) < 0, a(0) = 1 and a(1) = 0, and the damage potential function w (α) > 0, w(0) = 0 and up to a rescaling w(1) = 1. The required elastic phase is obtained for α → -a 2 (α)w (α)/a (α) is strictly increasing. The coupling function b (α) < 0 ensure that the damage yield surface decreases with the cumulated plastic strain and b(0) = 1. 
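For the AT 1 -type choice listed in Table 5.1 below, the stage boundaries given above become fully explicit. Assuming b(α) = a(α) = (1 − α)² and w(α) = α (so b'(0) = −2, c'(0) = 2, w'(0) = 1), neglecting the residual η_b, and taking k = 1/2 for this choice, equation (5.40) reduces to p̄ = σ_p (r_Y² − 1)/(2E). The sketch below only evaluates the resulting homogeneous stage boundaries for placeholder values and should be read as a rough check rather than a definitive result.

# Rough check of the homogeneous 1d stages for an AT1-type choice (assumptions above).
E, sigma_p, r_Y = 1.0, 1.0, 2.0                      # placeholder values, r_Y = sigma_c/sigma_p > 1

u_elastic_end = sigma_p / E                          # end of the elastic stage
pbar_end = sigma_p * (r_Y ** 2 - 1.0) / (2.0 * E)    # cumulated plastic strain when damage starts
u_plastic_end = u_elastic_end + pbar_end             # loading at which the damage stage begins
print(u_elastic_end, pbar_end, u_plastic_end)        # 1.0, 1.5, 2.5 for these values

In particular, the plastic plateau lengthens roughly quadratically with the yield ratio r_Y, consistent with the observation made later that a larger σ_c/σ_p leads to a larger accumulation of plastic strain before damage is triggered.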
For numerical reason (a, b, w) must be convex with respect to α which is not the case for the provided closed from solution in [START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF] for AT k see Table 5.1. Consequently, we prefer the model named AT 1 where a 1d computed solution example (dark lines) is compared with the numerical simulation (colored lines) see Figure 5.3. The numerical implementation is detailed in the following section 5.2. For this 1d example, we see the three phases described below in the stress-displacement plot, precisely the stress softening leads to a localization of the damage in which a cohesive response is obtained at the center. Name a(α) w(α) b(α) AT 1 (1 -α) 2 α a(α) + η b AT k 1 -w(α) 1 + (c 1 -1)w(α) 1 -(1 -α) 2 (1 -w(α)) c 2 Numerical implementation of the gradient damage models coupled with perfect plasticity In the view to numerically implement the gradient damage model coupled with perfect plasticity it is common to discretized in time and space. For the time discretization evolution we refer to the Definition 8. However in the numerical implantation we do not enforce energy balance condition justified by following the spirit of [START_REF] Bourdin | The variational formulation of brittle fracture: numerical implementation and extensions[END_REF][START_REF] Bourdin | The Variational Approach to Fracture[END_REF]. Functions space are discretized through standards finite elements methods over the domain. Both damage and displacement fields are projected over linear Lagrange elements. Whereas the plastic strain tensor is approximated by piecewise constant element. By doing so we probably use the simplest finite element to approximate the evolution problem. Conversely, the chosen finite element space cannot describe the jumps set of u and the localization of p, however it might be possible to account such effects by using instead discontinuous Galerkin methods. Nevertheless, as you will see on numerical simulations performed, the plasticity concentrates in a strip of few elements once the damage localizes. Numerically 5.2. Numerical implementation of the gradient damage models coupled with perfect plasticity we are not restricted to Von Mises plastic criterion only but any associated plasticity. Since a(α), b(α) and w(α) are convex the total energy is separately convex with respect to all variables (u, α, p) but that is not convex. A proposed algorithm to solve the evolution is alternate minimization which guarantees a path decreasing of the energy, but the solution might not be unique. At each time step t i , the minimization for each variables are performed as follows: i. For a given (α, p) the minimization of E with respect to u is an elastic problem with the prescribed boundary condition. To solve this we employed preconditioned conjugate gradient methods solvers. ii. The minimization of E with respect to α for fixed (u, p) and subject to irreversibility (α ≤ α i-1 ) is solved using variational inequality solvers provided by PETCs [START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | PETSc Web page[END_REF]. iii. 
For a fixed (u, α) the minimization of E with respect to p is not straight forward the raw formulation however reformulated as a constraint optimization problem turns being a plastic strain projection onto a convex set which is solved using SNLP solvers provided by the open source snlp 1 . Boundaries of the stress elastic domain is constrained by a series of yields functions describing the convex set without dealing with none differentiability issues typically corners. The retained strategy to solve the evolution problem is to use nested loops. The inner loop solves the elasto-plastic problem by alternate i. and iii. until convergence. Then, the outer loop is composed of the previous procedure and ii., the exit is triggered once the damage has converged. This leads to the following Algorithm 4, where δ α and δ p are fixed tolerances. Argument in favor of this strategy is the elasto-plastic is a fast minimization problem, whereas compute ii. is slow, but changing loops orders haven't be tested. All computations were performed using the open source mef90 2 . Verifications of the numerical implementation have been performed on the elastodamage problem and elasto-plasticity problem separately considering three and two dimensions cases. The plasticity is verified with the existence and uniqueness of the bi axial test for elasto-plasticity in [START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF]. The implementation of the damage have been checked with propagation of fracture in Griffith regime, the optimal damage profile in 2d and many years of development by Bourdin. The verification of the coupling is done by comparison with the one dimensional setting solution in section 5.1.3. Solve the equilibrium, u k+1 := argmin u∈C i E t i (u, α j , p k ) 6: Solve the plastic strain projection on each cells, p k+1 := argmin p∈M n s a(α j )A(e(u k+1 )-p)∈b(α j )K 1 2 A(p -p i-1 ) : (p -p i-1 ) 7: k := k + 1 8: until p k -p k-1 L ∞ ≤ δ p 9: Set, u j+1 := u k and p j+1 := p k 10: Compute the damage, α j+1 := argmin α∈D i α≥α i-1 E t i (u j+1 , α, p j+1 ) 11: j := j + 1 12: until α jα j-1 L ∞ ≤ δ α 13: Set, u i := u j , α i := α j and p i := p j 5.3 Numerical simulations of ductile fractures Plane-strain ductility effects on fracture path in rectangular specimens The model offer a large variety of possible behaviors depending on the choice of functions a(α), b(α), w(α) and the plastic yield function f Y (τ ) considered. From now, the presentation is limited to AT 1 in Table 5.1 and Von Mises plasticity such that, f Y (σ) = ||σ|| eq -σ p where ||σ|| eq = n n-1 dev(σ) : dev(σ) and dev(σ) denotes the deviatoric stresses. Considering an isotropic material, the set of parameters to calibrate is (E, ν, σ p , σ c , ) where the Young's modulus E, the Poisson ratio ν and the plastic yield stress σ p can be easily characterized by experiments. However, σ c and are still not clear but in brittle fracture nucleation they are estimated by performing experiments on notched specimen 5.3. Numerical simulations of ductile fractures see [START_REF] Tanné | Crack nucleation in variational phase-field models of brittle fracture[END_REF]. Hence, a parameter analysis for our model is to study influences of the ratio r Y = σ c /σ p and independently. Consider a rectangular specimen of length (L = 2) and width (H = 1) in plane strain setting, made of a sound material with the set up E = 1, ν = . 
Let first performed numerical simulations by varying the stress ratio of initial yields surfaces r Y ∈ [. [START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF] with an internal length equal to = .02 smaller than the geometric parameters (L, H) and let others parameter unchanged. The damage fields obtained after failure of samples are summarized on the Figure 5.5. A transition from a straight to a slant fracture for an increasing r Y is observed similarly to the Ti glass alloy in the Figure 5.1. A higher initial yields stress ratio induces a larger plastic strain accumulation leading to a thicker damage localization strip. The measure of the fracture angle reported in Figure 5.5 does not take into account the turning crack path profile around free surfaces caused by the damage condition ∇α • ν = 0. Clearly, for the case σ c < σ p the fracture is straight and there is mostly no accumulation of plastic strain. However due to plasticity, damage is triggered along one of shears bands, resulting of a slant fracture observation in both directions but never two at the same time. Now, let us pick up one of this stress ratio r Y = 5 for instance and vary the internal length ∈ [0.02, 0.2]. The stress vs. displacement is plotted in Figure 5.6 and shows various stress jumps amplitude during the damage localization due to the snap-back intensity. This effect is well known in phase-field models to brittle fracture and pointed out by [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF]. A consequence of this brutal damage localization is a sudden drop of the stress, when this happens the energy balance is not satisfied. Continuous and discontinuous energies evolution is observed for respectively = 0.2 and = 0.02 plotted on Figure 5.7. The attentive reader may notice that the plastic energy decreases during the damage localization which contradicts the irreversibility hypothesis of the accumulation of dissipated plastic energy. Actually the plotted curve is not accurately representative of the dissipated plasticity energy but it is a combination of damage and plasticity such that a part of this energy is transformed into a surface energy contribution. Hence, those dis- Snap-shots of damage, accumulated plastic strain and damage in a deformed configuration fields are illustrated in Figure 5.9 for different loading time (a, b, c, d) shown in the Figure 5.6. The cumulated plastic strain is concentrated in few mesh elements across the surface of discontinuity (fracture center). Because damage and plasticity evolve together along this strip it is not possible to dissociate mechanism coming from pure plasticity or damage independently. It can be interpreted as a mixture of permanent deformation and voids growing with mutual cause and effects relationship. 
Plane-strain simulations on two-dimensional mild notched specimens In the sequel we restrict our scope to study fractures nucleation and propagation in ductile regime (r Y = σ c /σ p large enough) for a mild notched specimen. Experimentally this design shape samples favor fractures around the smallest cross section size. Necking is a well known instability phenomena during large deformations of a ductile material. A consequence of the necking on a specimen is a cross sectional reductions which implies a curved profile to the deformed sample. Since we are in small deformations setting, necking cannot be recovered, thus we artificially pre-notch the geometry (sketched in Figure 5.10 with the associated Table 5.2) to recover a plastic strain concentrations. For more realistic numerical simulations and comparisons with pictures of the experiments on Aluminum alloy AA 5754 Al -Mg in Figure 5.2, we set material properties (see Table 5.3) such that the internal length is in the range of grain size, σ c is chosen to recover 7% elongation and (E, ν, σ p ) are given. We assume that the material follows Von Mises perfect plasticity criteria and the elastic stress domain shrinks from σ p to the lower limit of 15% of σ p . The experiments are built such that displacements are controlled at the extremities of the plate and observations are in the sheet thickness direction. Hence, the 2d plane strain theory is adopted for numerical simulations. Also we have studied two types of boundaries conditions, clamped and rollers boundary condition respectively named set-up A and set-up B. .10: Specimen geometry with nominal dimensions, typical mesh are ten times smaller that the one illustrated above. Note that meshes in the center area of the geometry are refined with a constant size h. Also a linear growing characteristic mesh size is employed from the refined area to a coarsen mesh at the boundary. considered, such that, the set-up B provides a slant fracture shear dominating with nucleation at the center and propagation along one of the shear band, and for the set-up A, the fracture nucleates at the center, propagates along the specimen section and bifurcate following shear bands. Final crack patterns are pure shear configuration and a slant-flatslant path. Again some snap shots of damage, cumulated plastic strain and damage in deformed configuration are presented in Figure 5.12 and Figure 5.13 for respectively the set-up A and B. Time loadings highlighted by letter are reported in the stress vs. strain plot in Figure 5.11. Main phenomenon are: (a) during the pure plastic phase there is no damage and the cumulated plastic strain is the sum of two large shear bands where the maximum value is located at the center, (b) the damage is triggered on the middle and develops following shear bands as a "X" shape, (c) a macro fracture nucleates at the center but stiffness remained and the material is not broken, (d) failure of the specimen with the final crack pattern. Close similarities between pictures of ductile fracture nucleations from simulations and experimental observations can be drawn. However, we were not able to capture cup-cones fractures. To recover the desired effect we introduced a perturbation in the geometry such that the parabola shape notch is no more symmetric along the shortest cross section axis, i.e. an eccentricity is introduced by taking ρ < 1 see the Figure 5.10. In a sense there is no reason that necking induces a perfectly symmetric mild notch specimen. 
Leaving all parameters unchanged and taking ρ = .9 we observed two cracks patterns: a shear dominating and cup-cones for respectively set-up B and set-up A illustrated in Figure 5.14. This type of non-symmetric profile with respect to the shortest cross section axis implies a different stress concentration between the right and the left side of the sample which consequently leads to unbalance the plastic strain concentrations intensity on both parts. Since damage is guided by the dissipated plastic energy we have recovered this cup cones fracture with again a macro fracture has nucleated at the center. Also the set-up B with ρ = .9 is not significantly perturbed to get a new crack path but still in the shear dominating mode. Ductile fracture in a round notched bar A strength of the variational approach is that it will require no modification to perform numerical simulations in three dimensions. Also this part is devoted to recover common observations made on ductile fracture in a round notched bar such as cup-cones and shear dominating fractures shapes. The ductile fracture phenomenology for low triaxility (defined as the ratio of the hydrostatic over deviatoric stresses) have been investigated by Benzerga [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF], relevant pictures of cracks nucleation and propagation into a round bar Chapter 5. Variational phase-field models of ductile fracture by coupling plasticity with damage with none destructive techniques is summarized in the Figure 5.19 . Since we focus on the fracture phenomenology we do not attribute physical values to material parameter but give attentions to the yield stress ratio r Y and the internal length . The internal length governs the thickness of the localization which has to be small enough compared to the specimen radius to observe a distinct fracture. In the other sides, drives the characteristics mesh size, typically ∼ 3h which constraint the numerical cost. For clarity the cumulated plastic strain will not be shown anymore since it does not provide further information on the fracture path than the damage. Based on the above results, boundary conditions play a fundamental role in our simulations so we will consider two cases: an eccentric mild notched shape (ρ = .7) specimens in the set-up A and B respectively associated to clamped and rollers boundary conditions. Both geometries are solids of revolution (tensile axis revolution) based on the sketch Figure 5.10 and Table 5 Those simulations were performed with 48 cpus during 48 hours on a 370 000 mesh nodes for 100 time steps with the provided resources of high performance computing of Louisiana State University3 . Results of numerical simulations are shown on the Figures 5.17 The ductile fracture phenomenology is presented by Benzerga-Leblond [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF], and shows the voids growing and coalescence during the early stage of stress softening, then a macro fracture nucleates at the center end propagates following shear lips formations. Numerical simulations at the loading time (a) for the set-up A and B show a diffuse damage in the middle of the specimen which is exactly a loss of stiffness in the material. This can be interpreted as an homogenization of voids density. A sudden macro crack appears around the loading time (b) which corresponds to the observation made. 
From (b) to (c) the crack follows the shear-lip formation, in a shear-dominated or cup-cone crack pattern depending on the prescribed boundary conditions, clamped (set-up A) or rollers (set-up B). These numerical examples suggest that variational phase-field models of ductile fracture are capable of predicting crack nucleation and propagation in low-triaxiality configurations, both for the 2d plane-strain specimen and for the round bar, with the simple model considered.

Conclusion

In contrast with most of the literature on ductile fracture, we proposed a variational model coupling gradient damage models and perfect plasticity, following the seminal papers [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF]. In this chapter, we have investigated crack nucleation and propagation in multiple geometries for a simple choice of functions under Von Mises perfect plasticity. We confirmed observations reported elsewhere in the literature that, in low-triaxiality configurations of ductile materials, the fracture nucleates at the center of the specimen and propagates along shear bands before reaching the free surfaces. Our numerical simulations also highlight that the observed crack patterns depend strongly on the prescribed boundary conditions and on the geometry, which govern where the dissipated plastic energy concentrates. The strength of the proposed phase-field model is its ability to handle both ductile and brittle fractures, which have mostly been treated as separate problems. The key parameter capturing this transition is the ratio of the initial damage yield surface to the plastic one. We showed that variational phase-field models are capable of qualitative predictions of crack nucleation and propagation in a range of mild-notch geometries in two and three dimensions; hence, this model is a good candidate to address the aforementioned issues. Also, the energy balance is preserved since the fracture evolution is smooth, being driven by an internal length. Of course, many investigations remain to be performed before claiming the superiority of the model. For instance, for fracture nucleation at a sharp notch of a specimen (high triaxiality), the hydrostatic pressure is unbounded for plasticity criteria such as Von Mises, so the damage yield surface is reached first and a brittle response is expected. To obtain a cohesive response, a possible choice of plastic yield surface is a cap model closing the elastic stress domain in the hydrostatic direction.

Chapter 6

Concluding remarks and recommended future work

In this dissertation, we studied the phenomena of fracture in various structures using phase-field models. The phase-field models are derived from Francfort and Marigo's variational models of fracture, conceived as an approximation of Griffith's theory. In Chapter 1 we presented a complete overview and the main properties of the model. In Chapter 2, we applied the phase-field models to study fracture nucleation in V- and U-notched geometries.
Supported by numerous validations, we demonstrated the ability of the model to make quantitative predictions of crack nucleation in mode I. The model is based on a general energy minimization principle and does not require any ad-hoc criterion; only the internal length has to be adjusted. Moreover, the model properly accounts for size effects that cannot be recovered from Griffith-based theory. In Chapter 3 we showed that the model, extended to hydraulic fracturing, satisfies Griffith's propagation criterion and handles multi-fracking scenarios without difficulty. The fracture path is dictated by the minimization principle of the total energy. A loss of crack symmetry is observed in the case of a pressurized network of parallel fractures. In Chapter 4, we focused solely on perfect elasto-plasticity models, going from the classical approach to their variational formulation. A verification of the alternate minimization algorithm was presented. The last chapter was devoted to combining the models exposed in the first and fourth chapters in order to model cohesive and ductile fractures. Our numerical simulations have shown the capability of the model to retrieve the main features of ductile fracture in a mild-notch specimen, namely the nucleation and propagation phenomena. We have also observed that crack paths are sensitive to the geometry and to the boundary conditions applied to it. In short, we have demonstrated that variational phase-field models address some of the vexing issues associated with brittle fracture: scale effects, nucleation, existence of a critical stress, and path prediction. By a simple coupling with the well-known perfect plasticity theory, we recovered the phenomenology of ductile fracture patterns. Of course, there are still remaining issues that need to be addressed. Our numerical simulations do not enforce energy balance, as indicated by a drop of the total energy upon crack nucleation without strong singularities, illustrated in Chapter 2. Perhaps extensions to phase-field models of dynamic fracture will address this issue. Fracture in compression also remains an issue in variational phase-field models: it is not clear whether either of these models is capable of simultaneously accounting for nucleation under compression and for self-contact. A recommended future work is to study ductile fracture in the spirit of Chapter 2: by varying the yield stress ratio, first recover the brittle initiation criterion and then study ductile fracture for different notch angles.

Figure 1.1: Sketch on the left shows the evolution of the crack (red curve) for a strictly decreasing function G(1, l), subject to the irreversibility l ≥ l_0 and to G(1, l) ≤ G_c/t². The picture on the right shows the crack evolution for a local minimality principle (red curve) and for a global minimality principle (blue curve), without taking into account the energy balance.
1 Combining this two statements, we deduce that there exists a , b , c in I such that, a ≤ b ≤ c , and lim →0 α (a ) = lim →0 α (c ) = 0 and lim →0 α (b ) = 1 thus, I w(α ) + (α ) 2 dx = b a w(α ) + (α ) 2 dx + c b w(α ) + (α ) 2 dx (1.78) Again using the identity, a 2 + b 2 ≥ 2|ab|, we have that, Variational phase-field models of brittle fracture where (x) := x 0 w(s)ds. Using the substitution rule we then get, b a w(α ) + (α ) 2 dx ≥ 2 | (b ) -(a )| , (1.80) and since (0) = 0 and (1) = c w , we obtain, ) + (α ) 2 dx ≥ 2c w , 82) and α = 1 for |x| ≤ b , we get that, b -b 0 H n- 1 2 x∈ΩChapter 1 . 0121 ({x; d(x) = y}) dy = 2H n-1 (J(u)) (1.100) and for the second term, b ≤d(x)≤δ w(α (d(x))) + |∇α (d(x))| 2 dx = w(α (d(x))) + α (d(x))∇d(x) 2 dH n-1 (x) dy = δ b x∈Ω w(α (y)) + α (y)∇d(x) 2 dH n-1 ({x; d(x) = y}) dy = δ b w(α (y)) + α (y) dH n-1 ({x; d(x) = y}) dy (1.101) Making the change of variable y = x , Variational phase-field models of brittle fracture b ≤d(x)≤δ w(α (d(x))) + |∇α (d(x))| 2 dx = δ b w(α (y)) + α (y) 2 H n-1 ({x; d(x) = y}) dy = δ b w(α (y)) + α (y) 2 s (y) dy ) 1 . 4 .Definition 5 ( 145 Numerical implementationand the discrete time evolution problem is given by, Damage discrete evolution by local minimizers) Figure 2.1(left) shows the outcome of a surfing experiment on a rectangular domain Ω = [0, 5] × [-1 2 , 1 2 Figure 2 . 1 : 21 Figure 2.1: Mode-I "surfing" experiment along straight (left) and circular (right) paths. Dependence of the crack length and elastic energy release rate on the loading parameter for multiple values of . Figure 2 . 2 : 22 Figure 2.2: Pac-man geometry for the study of the crack nucleation at a notch. Left: sketch of the domain and notation. Right: relation between the exponent of the singularity λ and the notch opening angle ω determined by the solution of equation (2.10). For any opening angle ω we apply on ∂ D Ω the displacement boundary condition obtained by evaluating on ∂ D Ω the asymptotic displacement (2.12) with λ = λ(ω). The mode-I Pac-Man test Consider a Pac-Man-shaped 3 domain with radius L and notch angle ω as in Figure 2.2(left). In linear elasticity, a displacement field associated with the stress field (2.7) is Figure 2 . 3 :σ 23 Figure 2.3: Pac-Man test with the AT 1 model, L = 1, = 0.015, ω = 0.7π, and ν = 0.3.From left to right: typical mesh (with element size ten times larger than that in typical simulation for illustration purpose), damage field immediately before and after the nucleation of a crack, and plot of the energies versus the loading parameter t. Note the small damaged zone ahead of the notch tip before crack nucleation, and the energetic signature of a nucleation event. Figure 2 . 4 : 24 Figure 2.4: Identification of the generalized stress intensity factor: σ θθ (r,0) (2π r) λ-1 along the domain symmetry axis for the AT 1 (left) and AT 2 (right) models with undamaged notch conditions, and sub-critical loadings. The notch aperture is ω = π/10 Figure 2 . 5 : 25 Figure 2.5: Critical generalized critical stress intensity factor at crack nucleation as a function of the internal length for ω 0 (left) and ω π/2 (right). AT 1 -U, AT 1 -D, AT 2 -U, and AT 2 -D refer respectively to computations using the AT 1 model with damaged notch and undamaged notch boundary conditions, and the AT 2 model with damaged notch and undamaged notch boundary conditions. (K Ic ) eff := G eff E 1-ν 2 denotes the critical mode-I stress intensity factor modified to account for the effective toughness G eff . Figure 2 . 
6 : 26 Figure 2.6: Critical generalized stress intensity factor k for crack nucleation at a notch as a function of the notch opening angle ω. Results for the AT 1 and AT 2 models with damaged -D and undamaged -U notch lips conditions. The results are obtained with numerical simulations on the Pac-Man geometry with (K Ic ) eff = 1 and = 0.01 so that σ c = 10 under plane-strain conditions with a unit Young's modulus and a Poisson ratio ν = 0.3. .m 1-λ ] Figure 2 . 7 : 27 Figure 2.7: Critical generalized stress intensity factor k c vs notch angle.Comparison between numerical simulations with the AT 1 and AT 2 models and damaged and undamaged boundary conditions on the notch edges with experiments in steel from[START_REF] Strandberg | Fracture at V-notches with contained plasticity[END_REF] (top-left), and Duraluminium (top-right) and PMMA (bottom) from[START_REF] Seweryn | Brittle fracture criterion for structures with sharp notches[END_REF]. Figure 2 . 8 : 28 Figure 2.8: Critical generalized stress intensity factor k c vs notch angle and depth in PVC foam samples from [94]. Numerical simulations with the AT 1 model with damaged and undamaged notch conditions (left), and AT 2 model with damaged and undamaged notch conditions (right). Figure 2 . 10 : 210 Figure 2.10: Critical generalized stress intensity factor k c vs notch angle for Al 2 O 3 -7%ZrO 2 (left) and PMMA (right). The black markers represents all experimental results. The numerical results are obtained through the Pac-Man test using the AT 1 model. See Tables 2.8-2.9 in the Appendix B for the raw data. Figure 2 . 2 Figure 2.12: DENT geometry 2. 3 .AT 2 - 32 Size effects in variational phase-field models 1 -U ρ = 0.5AT 1 -U ρ = 1.25 AT 1 -U ρ = 2.5 U ρ = 0.5 AT 2 -U ρ = 1.25 AT 2 -U ρ = 2.5 Figure 2 . 13 :h = ρ 2 ah h = R 100 Figure 2 . 14 : 213100214 Figure 2.13: Crack nucleation at U-notches. Comparison between experimental data of [92] and numerical simulations using the AT 1 (top) and AT 2 (bottom) models. Figure 2 . 15 : 215 Figure 2.15: Damage field at the boundary of the hole in the elastic phase 0 < t < t e (left), the phase with partial damage t e < t < t c (center), and after the nucleation of a crack t > t c (right). Blue: α = 0, red: α = 1. The simulation is for ρ = 1.0 and a/ = 5. Figure 2 . 16 : 216 Figure 2.16: Normalized applied macroscopic stress t e /σ c at damage initiation as a function of the aspect ratio ρ for a/ = 1 (left) and of the relative defect sizes a/ for ρ = 1 and ρ = 0.1 (right). 52 2. 3 .Figure 2 . 17 : 523217 Figure2.17: Normalized applied macroscopic stress t c /σ e at crack nucleation for an elliptic cavity in an infinite plate. Left: shape effect for cavities of size much larger than the internal length (a/ = 48); the solid line is the macroscopic stress at the damage initiation t e (see also Figure2.16) and dots are the numerical results for the AT 1 model. Right: size effect for circular (ρ = 1.0) and highly elongated (ρ = 0.1) cavities. Figure 2 . 53 Chapter 2 .Figure 2 . 18 : 2532218 Figure 2.18: Initiation of a crack of length 2a in a plate of finite width 2W . The numerical results (dots) are obtained with the AT 1 model for = W/25. The strength criterion and the Griffith's criterion (2.18). Figure 3 . 1 : 31 Figure 3.1: Sketch of the geometry (invariant). The symmetry axis being a reflection for 2d and a revolution axis in 3d. 3. 2 . 2 Numerical verification case of a pressurized single fracture in a two and three dimensions Chapter 3 .Figure 3 . 
Figure 3.2: Evolutions of normalized p, V and l for the line fracture (left column) and the penny-shaped crack (right column). Colored dots refer to numerical results and solid black lines to the closed-form solution given in Appendix C. For the line fracture, V_c = √(4π l_0³ (G_c)_eff / E') and p_c = √(E' (G_c)_eff / (π l_0)), where E' = E/(1 − ν²) in plane strain and E' = E in plane stress. For the penny-shaped crack, V_c = (8/3) √(π l_0⁵ (G_c)_eff / E) and p_c = √(π E (G_c)_eff / (4 l_0)).
Figure 3.3: Snapshots of damage for the line fracture example at different loadings: before the loading cycle (top), before refilling the fracture (middle) and during propagation (bottom). Red is fully damaged material and blue undamaged. The casing mesh which encapsulates the fracture is visible.
Figure 3.4: Snapshots (view from above) of fracture damage (α ≥ .99) for the penny-shaped crack example at different loadings, before refilling the fracture (left) and during propagation (right). The solid black lines are the limit of the casing.
Figure 3.6: Infinite network of parallel cracks domain (left). Domain duplications form the smallest invariant domain (right).
Figure 3.7: Domains in the deformed configuration for Ω_1, Ω_2, Ω_4 and Ω_6. The pseudo-color is blue for undamaged material and turns white when α ≤ .01 (for visibility). Full colors correspond to the numerical simulation cell domains (see Table 3.2), and the remaining color refers to the solution rebuilt using symmetries. In all simulations only one crack propagates in the domain; using the multiplicity, the fracture propagation periodicities obtained are 6/6, 3/6, 1.5/6 and 1/6 (from left to right).
Figure 3.8: Normalized crack pressure, average fracture length and energy density (per Ω) vs. fluid volume density (per Ω) (top-left, top-right and bottom-right), and aperture of the longest crack for 2V/(nV_c) = 13. Colored plots are numerical results for the domain sizes Ω_1, Ω_2, Ω_4 and Ω_6. The solid black line is the closed-form solution and gray the approximate solution given by Sneddon [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF].
Figure 3.10: Ratio of critical pressures (multi-fracking over single fracture) vs. the inverse of the fracture density (high density on the left of the x-axis, low density on the right). The black dashed line is r_p(ρ/n) with 1/n the periodicity; colored lines are numerical results for periodicities 6/6, 3/6 and 1.5/6.
Figure 3.11: Fracture toughness vs. confining pressure for the Indiana limestone.
Figure 3.12: Schematic of the burst experiment for a jacketed bore (left). Pre- (middle) and post- (right) burst experiment photos.
Figure 3.13: Rigorous superposition of the burst problem.
Figure 3.14: Superposition of the burst problem applied in Abou-Sayed (1978).
Figure 3.15: Comparison of the normalized stress intensity factors for the jacketed and unjacketed problems, respectively denoted K_I^{J*} and K_I^{U*}, vs. the normalized crack length l̄. Numerically computed SIFs based on the G_θ method (colored lines) overlay the plots provided by Clifton in [53].
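The closed-form normalizations quoted in the caption of Figure 3.2 above (and derived in Appendix C) can be evaluated directly. Below is a minimal sketch assuming the standard Sneddon-type relations for a pressurized line crack in plane strain and for a penny-shaped crack; the function and variable names are ours and are not part of mef90, and the values in the usage example are placeholders rather than the thesis data.
import math

def line_crack_critical_state(E, nu, Gc, l0, plane_strain=True):
    """Critical pressure and fluid volume at the onset of propagation for a
    pre-existing line crack of half-length l0 (2d), following the relations
    quoted in the caption of Figure 3.2:
        p_c = sqrt(E' * Gc / (pi * l0)),  V_c = sqrt(4 * pi * l0**3 * Gc / E')."""
    Ep = E / (1.0 - nu ** 2) if plane_strain else E  # E' in plane strain / plane stress
    p_c = math.sqrt(Ep * Gc / (math.pi * l0))
    V_c = math.sqrt(4.0 * math.pi * l0 ** 3 * Gc / Ep)
    return p_c, V_c

def penny_crack_critical_state(E, nu, Gc, l0):
    """Same quantities for a penny-shaped crack of radius l0 (3d):
        p_c = sqrt(pi * E' * Gc / (4 * l0)),  V_c = (8/3) * sqrt(pi * l0**5 * Gc / E').
    Here E' = E / (1 - nu**2); the thesis caption writes these relations in terms of E."""
    Ep = E / (1.0 - nu ** 2)
    p_c = math.sqrt(math.pi * Ep * Gc / (4.0 * l0))
    V_c = (8.0 / 3.0) * math.sqrt(math.pi * l0 ** 5 * Gc / Ep)
    return p_c, V_c

if __name__ == "__main__":
    # Placeholder material values, for illustration only.
    print(line_crack_critical_state(E=1.0, nu=0.3, Gc=1.0, l0=1.0))
    print(penny_crack_critical_state(E=1.0, nu=0.3, Gc=1.0, l0=1.0))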
Figure 3.16: Computed normalized SIF vs. normalized crack length l̄ for two confining pressure ratios r = 1/8 (dashed lines) and r = 1/6 (solid lines) and various w̄ = {3, 4, 7, 10} (colored lines).
Figure 3.17: Three possible regimes for K_I^{B*}, denoted (a), (b) and (c). l̄_US is a critical point from unstable to stable crack propagation, and vice versa for l̄_SU. The fracture does not propagate at the stop point denoted l̄_ST.
Figure 3.18: Computed normalized SIF vs. normalized crack length for the unconfined (left) and confined (right) burst experiments according to Table 3.3.
Figure 3.19: Colored lines are computed normalized SIF vs. normalized crack length for unstable propagation (l̄_0 ≥ .5), with r = 1/6 and w̄ = 5. Red markers are time-step results obtained using the phase-field model.
(4.16) By applying the supremum over all τ* ∈ K, it comes that, for τ ∈ K, τ : ṗ ≥ sup_{τ*∈K} {τ* : ṗ}.   (4.17)
4.2.2 Variational formulation of perfect plasticity models. Take q = p + h p̃ for all p̃ ∈ M^n_s, plug this into (4.27) and send h → 0; then, using the definition of the Gateaux derivative and of the subgradient, the stability condition leads to τ = −∂ψ/∂p(e, p) ∈ ∂H(p − p_{t_{i−1}}).   (4.28)
(4.30) For any given τ*, a fixed element of K, ∫ τ* : ṗ(s) ds = τ* : (q − p).
Algorithm 3 (Elasto-plasticity alternate minimization algorithm for step i): let j = 0 and p_0 := p_{i−1}; repeat alternating minimizations of the total energy with respect to u and to p until convergence. The plasticity step is a constrained optimization problem implemented using the SNLP solvers provided by the open-source snlp; all computations were performed using the open-source mef90. [Verification performed along the line segment [(−d/2, −l/2, 0), (d/2, l/2, l)].]
Figure 4.1: The closed-form solution equations (4.37)-(4.38) are plotted as solid lines and the numerical results as dots. The top-left and top-right figures show respectively the hydrostatic evolution of stresses and plastic strains with the loading time. The bottom figure shows displacements for t = 2.857 along the lineout axis [(−d/2, −l/2, 0) × (d/2, l/2, l)].
5.1 Phase-field models of fracture, from brittle to ductile
5.1.1 Experimental observations of ductile fractures
It is common to separate fractures into two categories, brittle and ductile, with different mechanisms. However, relevant experiments [110] on Titanium-alloy glasses show a transition from a brittle to a ductile fracture response (see Figure 5.1) by varying only one parameter: the concentration of Vanadium. Depending on the Vanadium quantity, a brutal formation of a straight crack, the signature of a brittle material response, is observed for low concentrations. Conversely, a smooth stress-softening plateau is measured before failure for higher concentrations. The post-mortem samples show a shear-dominated fracture characteristic of ductile behavior.
[Residue of a figure reproduced from [110]: plot of uniaxial tension test data with optical images of dogbone specimens post-failure; engineering stress as a function of engineering strain from tensile tests on dogbone samples loaded until failure at a constant strain rate of 0.2 mm/min; optical microscope images of deformed alloys V2-V10 and DV1.]
Figure 5.1: Pictures produced by [110] showing post-failure stretched specimens of Ti-based alloys Ti_{53−x/2} Zr_{27−x/2} Cu_5 Be_15 V_x. From left to right: transition from brittle to ductile with a concentration of Vanadium respectively equal to 2%, 6% and 12%.
The stress is σ = a(α) A (e(u) − p). Plasticity occurs in the material once the stress reaches a critical value defined by the plastic yield function f_Y : M^{n×n}_s → R, convex and such that f_Y(0) < 0. We propose to couple the damage with the admissible stress set through the coupling function b(α), such that the stress is constrained by σ ∈ b(α)K, where K := {τ ∈ M^{n×n}_s s.t. f_Y(τ) ≤ 0} is a non-empty closed convex set. The elastic stress domain is thus subject to isotropic transformations by b(α), a state function of the damage. Naturally, to recover a stress-softening response the coupling function b(α) is continuous and decreasing, with b(0) = 1 and b(1) = η_b, where η_b is a residual. Considering associated plasticity, the plastic potential is equal to the yield function and plastic flow occurs once the stress hits the yield surface, i.e. σ ∈ b(α)∂K. At this moment, the plastic evolution is driven by the normality rule, such that the plastic flow lies in the subdifferential at σ of the indicator function I of b(α)K, written as ṗ ∈ ∂I_{b(α)K}(σ).   (5.34)
6. Plastic flow rule in the bulk: b(α_t(x)) σ_p |ṗ_t(x)| − σ_t ṗ_t(x) = 0.   (5.35)
7. Damage consistency in the bulk, on the jump set and at the boundary: f_D(α_t(x), p_t(x), p̄_t(x)) α̇_t(x) = 0 in the bulk, together with the corresponding consistency conditions at the jump point x_0 and at ±L.   (5.36)
8. The energy balance at the boundary: α_t'(±L) α̇_t(±L) = 0.   (5.37)
9. The irreversibility, which applies everywhere in Ω: 0 ≤ α_t(x) ≤ 1, α̇_t(x) ≥ 0.   (5.38)
Figure 5.3: Comparison of the computed solution (dark lines) for AT_1 (see Table 5.1) with the numerical simulation (colored lines) for parameters E = 1, σ_p = 1, ℓ = 0.15, σ_c = 1.58, L = .5 and η_b = 0. Top-left: stress-displacement evolution. Top-right: displacement jump vs. stress during the softening behavior. Bottom-left: damage profile during localization for three different loadings. Bottom-right: evolution of the energy release vs. the displacement jump, also known as the cohesive law (Barenblatt).
Algorithm 4 (Alternate minimization algorithm at step i for the model coupling plasticity with damage): let j = 0, α_0 := α_{i−1} and p_0 := p_{i−1}; repeat, with an inner loop (k = 0, p_0 := p_j) alternating the (u, p) and α minimizations until convergence.
Figure 5.4: Rectangular specimen in tension with roller boundary conditions on the left and right extremities and stress-free conditions on the remainder. The characteristic mesh size is h = ℓ/5.
Figure 5.5: Fracture path angle vs. the initial yield stress ratio r_Y. Transition from a straight to a slanted crack, characteristic of a brittle-to-ductile fracture transition.
Figure 5.6: Stress vs. displacement plot for σ_c/σ_p = 5, showing the influence of the internal length on the stress jump amplitude, signature of the snap-back intensity. Letters on the curve ℓ = .1 refer to loading times at which snapshots of α and p are illustrated in Figure 5.9.
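Algorithms 3 and 4 referenced above are alternate minimization schemes: at each load step the total energy is minimized with respect to the displacement and plastic strain at fixed damage, then with respect to the damage at fixed (u, p) under the irreversibility constraint, until the damage field stops changing. The following is a minimal structural sketch of such a driver, assuming user-supplied solvers for the two sub-problems; it is a sketch of the idea only, not the mef90 or gradient-damage implementation, and all names are placeholders.
import numpy as np

def alternate_minimization(loads, solve_up, solve_alpha, alpha0, p0,
                           tol=1e-4, max_iter=500):
    """Generic alternate-minimization driver in the spirit of Algorithms 3-4.

    solve_up(t, alpha, p_prev)     -> (u, p): minimize the energy in (u, p) at fixed
                                     damage, enforcing the rescaled yield constraint
                                     sigma in b(alpha) K (placeholder callable).
    solve_alpha(t, u, p, alpha_lb) -> alpha: minimize the energy in alpha at fixed
                                     (u, p), subject to alpha >= alpha_lb
                                     (irreversibility) and alpha <= 1 (placeholder).
    """
    alpha_prev = np.asarray(alpha0, dtype=float)
    p_prev = np.asarray(p0, dtype=float)
    history = []
    for t in loads:                          # quasi-static, time-discrete evolution
        alpha = alpha_prev.copy()
        for _ in range(max_iter):
            u, p = solve_up(t, alpha, p_prev)
            alpha_new = solve_alpha(t, u, p, alpha_lb=alpha_prev)
            if np.max(np.abs(alpha_new - alpha)) < tol:   # damage iterates converged
                alpha = alpha_new
                break
            alpha = alpha_new
        alpha_prev, p_prev = alpha, p        # history variables for the next step
        history.append((t, u, alpha, p))
    return history

The sup-norm test on successive damage iterates is one common stopping criterion for alternate minimization; other norms or an energy-based test could equally be used.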
Figure 5.9: Rectangular stretched specimen with roller boundary displacement for parameters σ_c/σ_p = 5 and ℓ = .1, showing snapshots of the damage, the cumulated plastic strain and the damage in the deformed configuration (the displacement magnitude is 1%) at the loading times (a, b, c, d) marked in Figure 5.6. The cumulated plastic strain, defined as p̄ = ∫_0^t ||ṗ(s)|| ds, has a piecewise-linear color table with two pieces, [0, 14] for the homogeneous state and [14, 600] for visibility during the localization process; the maximum value is saturated.
Figure 5.10: Specimen geometry with nominal dimensions; typical meshes are ten times finer than the one illustrated. Meshes in the center area of the geometry are refined with a constant size h, and a linearly growing characteristic mesh size is employed from the refined area to a coarser mesh at the boundary.
Figure 5.11: Plot of the stress vs. strain (tensile axis component) for the mild-notch specimen with clamped and roller interface conditions, respectively set-up A and set-up B.
Figure 5.12: Zoom on the center of the mild-notched stretched specimen with clamped boundary displacement (set-up A), showing snapshots of the damage, the cumulated plastic strain and the damage in the deformed configuration (displacement magnitude 1) at the loading times (a, b, c, d) marked in Figure 5.11. The cumulated plastic strain color table is piecewise linear with two pieces, [0, .35] for the homogeneous state and [.35, 2.5] for visibility during the localization process; the maximum value is saturated. The pseudo-color turns white when α ≥ 0.995 for the damage in the deformed configuration.
Figure 5.14: Zoom on the center of the eccentric mild-notched stretched specimen (ρ = .9), showing snapshots of the damage, the cumulated plastic strain and the damage in the deformed configuration (displacement magnitude 1) at the failure loading time, for set-ups A and B. Color tables as in Figure 5.12.
The fracture patterns in Figures 5.17 and 5.18 are similar to the ones observed in the literature, see the pictures in Figures 5.15 and 5.16. An overview of the fracture evolution in round bars is given in Figure 5.19.
Figure 5.15: Photo produced by [107] showing a cup-cone fracture in a post-mortem round bar.
Figure 5.16: Photo produced by [107] showing a shear-dominated fracture in a post-mortem round bar.
Figure 5.17: Snapshot of the damage in the deformed configuration for set-up A after failure, the two pieces shown next to each other.
Figure 5.18: Snapshot of the damage in the deformed configuration for set-up B after failure, the two pieces shown next to each other.
Figure 5.19: The picture from Benzerga-Leblond [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF] shows the phenomenology of ductile fracture in round notched bars of high-strength steel: damage accumulation, initiation of a macroscopic crack, crack growth and shear lip formation. Numerical simulations show the overlapped stress vs. displacement curves (blue and orange, for set-up A and set-up B respectively), and snapshots of damage slices in the deformed round bar. The hot color table illustrates the damage; the red color turns white for α ≥ 0.95, which corresponds to less than 0.25% of stiffness.
Figure 2.11: Critical load in the three- and four-point bending experiments of an Al_2O_3-7%ZrO_2 sample (left) and four-point bending of a PMMA sample (right) from [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF], compared with numerical simulations using the AT_1 model and undamaged notch boundary conditions. Due to significant variations in measurements in the first set of experiments, each data point reported in [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] is plotted. For the PMMA experiments, average values are plotted. See Tables 2.10 and 2.11 in Appendix B for the raw data.
Table 2.3: Critical generalized stress intensity factor k for crack nucleation at a notch as a function of the notch opening angle ω, from Figure 2.5. Results for the AT_1 and AT_2 models with damaged (-D) and undamaged (-U) notch lip conditions. The results are obtained with numerical simulations on the Pac-Man geometry with (K_Ic)_eff = 1 and ℓ = 0.01, so that σ_c = 10, under plane-strain conditions with a unit Young's modulus and a Poisson ratio ν = 0.3.
Table 2.4: Generalized critical stress intensity factors as a function of the notch aperture in soft annealed tool steel (AISI O1 at −50 °C). Experimental measurements from [START_REF] Strandberg | Fracture at V-notches with contained plasticity[END_REF] using SENT and TPB compared with Pac-Man simulations with the AT_1 model. Columns: 2ω, material, k_c(exp), stdev, then k_c(num) and relative error for undamaged and damaged notch conditions.
Table 2.5: Generalized critical stress intensity factors as a function of the notch aperture in Divinycell® PVC foam. Experimental measurements from [94] using four-point bending compared with Pac-Man simulations with the AT_1 model.
Table 2.6: Generalized critical stress intensity factors as a function of the notch aperture in Duraluminium. Experimental measurements from [START_REF] Seweryn | Brittle fracture criterion for structures with sharp notches[END_REF] using single edge notch tension compared with Pac-Man simulations with the AT_1 model.
Table 2.7: Generalized critical stress intensity factors as a function of the notch aperture in PMMA. Experimental measurements from [165] using single edge notch tension compared with Pac-Man simulations with the AT_1 model.
Table 2.8: Generalized critical stress intensity factors as a function of the notch aperture in aluminium oxide ceramics. Experimental measurements from [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] using three- and four-point bending compared with Pac-Man simulations.
Table 2.9: Generalized critical stress intensity factors as a function of the notch aperture in PMMA. Experimental measurements from [71] using three- and four-point bending compared with Pac-Man simulations. The value a/h refers to the ratio of the notch depth over the sample thickness. See Figure 2.9 for geometry and loading.
Table 2.10: Critical load reported in [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] using three- and four-point bending experiments of an Al_2O_3-7%ZrO_2 sample compared with numerical simulations using the AT_1 model and undamaged notch boundary conditions. TPB and FPB refer respectively to three-point bending and four-point bending. See Figure 2.9 for geometry and loading.
Table 2.11: Load at failure reported in …
Table 3.3: Rock specimen dimensions provided by the commercial laboratory and calculated fracture toughness.
… where A is the Hooke's law tensor. The domain is subject to the time-dependent stress boundary condition σ · ν = g(t) on ∂_N Ω. A safe-load condition on g(t) is prescribed to prevent issues in plasticity theory.
The total energy is formulated, for every x ∈ Ω and every t, by
E_t(u, p) = ∫_Ω [ ½ A(e(u) − p) : (e(u) − p) + ∫_0^t sup_{τ∈K} {τ : ṗ(s)} ds ] dx.
4.3.1 Numerical implementation of perfect plasticity models
Consider the same problem with stress conditions at the boundary and a free energy of the form ψ(e(u), p) = ½ A(e(u) − p) : (e(u) − p).
[Equation (4.37): closed-form elastic solution of the verification test, giving the stress σ(t), the strain e(t) and the displacement u(t) in terms of g_2, E, ν and the loading parameter t.] After the critical loading, permanent deformation takes place in the structure and the solution is …
Table 5.1: Variety of possible models, where c_1 and c_2 are constants.
Table 5.2: Specimen dimensions. All measures are in [mm]. The internal length is specified in Table 5.3.
We observed two patterns of ductile fracture depending on the boundary conditions.
Table 5.3: Material parameters used for AA 5754 Al-Mg: E = 70 GPa, ν = .33, σ_p = 100 MPa, σ_c = 2 GPa, ℓ = 400 µm.
Table 5.4: Specimen dimensions (for the internal length refer to Table 5.5): L = 4.5, H = 2.2, W = 1.05, r = .5, D = 1.09, d = 0.98, l = 0.82, h = ℓ/2.5. The smallest damageable plastic yield surface is given for 5% of σ_p [START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF].
Table 5.5: Parameters used for the 3d simulations (E = 1, ν = .3, σ_p = 1, …).
Footnotes: Karush-Kuhn-Tucker; mef90 available at https://www.bitbucket.org/bourdin/mef90-sieve; gradient-damage available at https://bitbucket.org/cmaurini/gradient-damage; https://en.wikipedia.org/wiki/Pac-Man; snlp available at http://abs-5.me.washington.edu/snlp/ and at https://bitbucket.org/bourdin/snlp; computations performed at http://www.hpc.lsu.edu.
Perhaps extensions into phase-field models of dynamic fracture will address this issue. Fracture in compression remains an issue in variational phase-field models. Although several approaches have been proposed, typically consisting in splitting the strain energy into a damage-inducing and a non-damage-inducing term, none of the proposed splits is fully satisfying (see [START_REF] Amor | Regularized formulation of the variational brittle fracture with unilateral contact: Numerical experiments[END_REF][START_REF] Lancioni | The variational approach to fracture: A practical application to the french Panthéon[END_REF][START_REF] Li | Gradient Damage Modeling of Dynamic Brittle Fracture[END_REF] for instance). In particular, it is not clear whether either of these models is capable of simultaneously accounting for nucleation under compression and self-contact. Finally, even though a significant amount of work has already been invested in extending the scope of phase-field models of fracture beyond perfectly brittle materials, to our knowledge none of the proposed extensions has demonstrated its predictive power yet.
Appendix C: Single fracture in an infinite domain
Line fracture (2d domain): the volume of a line fracture in a 2d domain is …, where E' = E/(1 − ν²) in plane strain and E' = E in plane stress theory. Before the start of propagation, l = l_0 and the fluid pressure in this regime is …. If we consider an existing line fracture with an initial length of l_0.
Prior to fracture propagation, the fracture length does not change so that l = l 0 . Since fracture length at the onset of propagation is l 0 , the critical fluid pressure [START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF] is The critical fracture volume at the critical fluid pressure is obtained by substituting (3.16) into (3.14) During quasi-static propagation of the fracture, l ≥ l 0 and the fracture is always in a critical state so that (3.16) applies. Therefore, the fluid pressure and fracture length in this regime are Chapter 4 Variational models of perfect plasticity Elasto-plasticity is a branch of solid mechanics which deals with permanent deformation in a structure once the stress reached a critical value at a macroscopic level. This topic is a vast research area and it is impossible to cover all contributions. We will focus on recalling basic mathematical and numerical aspects of perfect elasto-plasticity in small strain theory under quasi-static evolution problems. The perfect elasto-plastic materials fall into the theory of generalized standard materials developed by [START_REF] Halphen | Sur les matériaux standard généralisés[END_REF][START_REF] Suquet | Sur les équations de la plasticité: existence et régularité des solutions[END_REF][START_REF] Salençon | Elasto-plasticité[END_REF][START_REF] Marigo | From clausius-duhem and drucker-ilyushin inequalities to standard materials[END_REF][START_REF] Mielke | A mathematical framework for generalized standard materials in the rate-independent case[END_REF]. Recently, a modern formalism of perfect plasticity arose [START_REF] Dal Maso | Quasistatic evolution problems for linearly elastic-perfectly plastic materials[END_REF][START_REF] Solombrino | Quasistatic evolution problems for nonhomogeneous elastic plastic materials[END_REF][START_REF] Babadjian | Quasi-static evolution in nonassociative plasticity: the cap model[END_REF][START_REF] Francfort | Small-strain heterogeneous elastoplasticity revisited[END_REF][START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF], the idea is to discretize in time and find local minimizers of the total energy. In this chapter we focus only on perfect elasto-plasticity materials and set aside the damage. We start with concepts of generalized standard materials in the section 4.1. Then using some convex analysis [START_REF] Ekeland | Convex analysis and variational problems[END_REF][START_REF] Temam | Mathematical problems in plasticity[END_REF] we show the equivalence with the variational formulation presented in the section 4.2. The last part 4.3 presents an algorithm to solve perfect elasto-plasticity materials evolution problems. A numerical verification example is detailed at the end of the chapter. Ingredients for generalized standard plasticity models For the moment we set aside the evolution problem and we focus on main ingredients to construct standard elasto-plasticity models [START_REF] Germain | Continuum thermodynamics[END_REF][START_REF] Quoc | Stability and nonlinear solid mechanics[END_REF][START_REF] Suquet | Sur les équations de la plasticité: existence et régularité des solutions[END_REF]. This theory requires a choice of internal variables, a recoverable and a dissipation potentials energies where both functionals are convex. The driving forces (conjugate variables) usually the stress and the thermodynamical force lie respectively in the elastic and dissipation potential energies. 
For smooth evolutions of the internal variables, the material response is dictated by the normality rule of the dissipation potential convex set (flow law rule). By doing so, it is equivalent to find global minimizers of the total energy sum of the elastic and dissipation potential energies. Consider that our material has a perfect elasto-plastic response and can be modeled by the generalized standard materials theory, which is based on two statements. In all of these experiments main observations of fractures nucleation reported are: (i) formations of shear bands in "X" shape intensified by necking effects, (ii) growing voids and coalescence, (iii) macro-crack nucleation at the center of the specimen, (iv ) propagation of the macro crack, straightly along the cross section or following shear bands depending on the experiment and (v ) failure of the sample when the fracture reaches external free surfaces stepping behind shear bands path. Observed fracture shapes are mostly cup-cones or shear dominating. The aforementioned ductile features examples will be investigated through this chapter by considering similar geometries such as, rectangular samples, round notched specimens in plane strain condition and round bars. Pioneers to model ductile fractures are Dugdale [START_REF] Dugdale | Yielding of steel sheets containing slits[END_REF] and Barenblatt [START_REF] Barenblatt | The mathematical theory of equilibrium of cracks in brittle fracture[END_REF] with their contributions on cohesive fractures following Griffith's idea. Later on, a modern branch focused on micro voids nucleations and convalescence as the driven mechanism of ductile fracture. Introduced by Gurson [START_REF] A L Gurson | Continuum Theory of Ductile Rupture by Void Nucleation and Growth: Part I -Yield Criteria and Flow Rules for Porous Ductile Media[END_REF] a yield surface criterion evolves with the micro-void porosity density. Then, came different improved and modified versions of this criterion, Gurson-Tvergaard-Needleman (GTN) [START_REF] Tvergaard | Material failure by void growth to coalescence[END_REF][START_REF] Tvergaard | Analysis of the cup-cone fracture in a round tensile bar[END_REF][START_REF] Needleman | An analysis of ductile rupture in notched bars[END_REF], Rousselier [START_REF] Rousselier | Ductile fracture models and their potential in local approach of fracture[END_REF], Leblond [START_REF] Leblond | An improved gurson-type model for hardenable ductile metals[END_REF] to be none exhaustive. The idea to couple phase-field models to brittle fracture with plasticity to recover cohesive fractures is not new and have been developed theoretically and numerically in [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Conti | Phase field approximation of cohesive fracture models[END_REF][START_REF] Ambati | A phase-field model for ductile fracture at finite strains and its experimental verification[END_REF][START_REF] Ambati | Phase-field modeling of ductile fracture[END_REF][START_REF] Crismale | Viscous approximation of quasistatic evolutions for a coupled elastoplastic-damage model[END_REF][START_REF] Miehe | Phase field modeling of ductile fracture at finite strains. a variational gradient-extended plasticity-damage theory[END_REF][START_REF] Wadier | Mécanique de la rupture fragile en présence de plasticité : modélisation de la fissure par une entaille[END_REF][START_REF] Miehe | Phase field modeling of ductile fracture at finite strains. 
a variational gradient-extended plasticity-damage theory[END_REF]. Gradient damage models coupled with perfect plasticity Our model is settled on the basis of perfect plasticity and gradient damage models which has proved to be efficient to predict cracks initiation and propagation in brittle materials. Both mature models have been developed separately and are expressed in the variational formulation in the spirit of [START_REF] Mielke | Evolution of rate-independent systems[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Piero | Variational Analysis and Aerospace Engineering, volume 33 of Optimization and Its Applications[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Dal Maso | Quasistatic evolution problems for linearly elastic-perfectly plastic materials[END_REF] which provides a fundamental Chapter 5. Variational phase-field models of ductile fracture by coupling plasticity with damage (5.11) Passing E t i (u i , α i , p i ) to left and dividing by h and letting h → 0 at the limit, we obtain, By integrating by part the integral term in e(v) over Ω \ (J(ζ i ) ∪ J(v)), we get, (5.13) where σ i = a(α i )A e(u i )p i . Without plasticity there is no cohesive effect, hence, σ i ν = 0 and the non-interpenetration condition leads to u i • ν ≥ 0 on J(ζ i ), however for a general cohesive model we do not have information for σ i ν on J(ζ i ). So, to overcome this issue we restrict our study to material with tr (p i ) = 0, consequently on the jump set J(ζ i ) we have tr ( The material can only shear along J(ζ i ) which is commonly accepted for Von Mises and Tresca plasticity criterion. Thus, we have v • ν = 0 on J(ζ i ) and naturally σ i ν = 0 on J(v). The last term of (5.13) stands for, Combining the above equation, (5.12) and (5.13), considering J(v) = ∅ and by a standard localization argument i.e. taking v concentrated around H n-1 and zero 5.1. Phase-field models to fractures from brittle to ductile almost everywhere, we obtain that all the following integrals must vanish, which leads to the equilibrium and the prescribed boundary conditions, Note that the normal stress σ i ν is continuous across J(ζ i ) but the tangential component might be discontinuous. 2. Plastic yield criteria on the jump set: Since the above equation (5.15) holds, for J(v) = ∅ in (5.12) we have, Thus, on each point of the jump set The right hand side of the above inequality, Considering Von Mises criterion we get on the left hand side, Taking the maximum for all ν = 1, and letting σ i = a(α i )ς i we obtain that (5.17) becomes, This condition is automatically satisfied for Von Mises since a(α i )/b(α i ) ≤ 1. We refer the reader to [START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Francfort | The elastoplastic exquisite corpse: A suquet legacy[END_REF][START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF] for more details. Chapter 5. Variational phase-field models of ductile fracture by coupling plasticity with damage 3. 
Damage yield criteria in the bulk: Taking v = 0 and q = 0 thus J(v) = ∅ in the optimality condition (5.9), such that E t i (u i , α i + hβ, p i ) ≥ E t i (u i , α i , p i ), then dividing by h and passing to the limit, we get, after integrating by parts the ∇α•∇β term over Ω \ J(ζ i ), The above equation holds for any β ≥ 0, hence, all contributions must be positive, such that in Ω \ J(ζ i ), we have, The damage yield criterion is composed of the classical part from gradient damage models and a coupling part in b (α). When the material remains undamaged and plasticity occurs, the cumulation of dissipated plastic energy combined with the property that b (α) < 0 leads to decrease the left hand side which becomes an equality up to a critical plastic dissipation. At this moment the damage is triggered. 4. Damage yield criteria in the jump set: From (5.18) we have, The gradient damage is discontinuous across the jump set J(ζ i ) due to plastic strain concentration and vice versa. 5. Damage boundary condition: From (5.18) we have, 6. Plastic yield criteria in the bulk: Take v = 0 and β = 0 thus J(v) = ∅ in the optimality condition (5.9) such that where Since ψ is differentiable by letting h → 0 and applying the subgradient definition to (5.22), we get -∂ψ/∂p i ∈ b(α i )∂H(p ip i-1 ). We recover the stress admissible constraint provided by the plastic yield surface. The damage state decreases the plastic yield surface leading to a stress softening property. 7. Flow rule in the bulk: Applying the convex conjugate (Legendre-Fenchel) to the above equation we get, which is the flow rule in a discrete settings, by letting max(t it i-1 ) → 0 we get the time continuous one. Damage consistency: The damage consistency is recovered using the energy balance condition which is not fully exposed here. However the conditions obtained are: Damage irreversibility in the domain: The damage irreversibility constraint is, All of this conditions are governing laws of the problem. The evolution of the yields surfaces are given by the equations (5.19) and (5.23). Chapter 5. Variational phase-field models of ductile fracture by coupling plasticity with damage Application to a 1d setting The goal of this section is to apply the gradient damage model coupled with perfect plasticity in 1d setting by considering a bar in traction. Relevant results are obtained through this example such as, the evolutions of the two yields functions, the damage localization process and the role of the gradient damage jump term which governs the displacement jump set. We refer the reader to Alessi-Marigo [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF] for a complete exposition of this 1d application. In the sequel, we consider a one-dimensional evolution problem of an homogeneous elasto-plastic-damageable bar Ω = [-L, L] stretched by a time controlled displacements at boundaries where damage remains equal to zero. 
Assume that a unique displacement jump may occur on the bar, located at the coordinate x_0; the admissible displacement, damage and plastic strain sets are then, respectively, … The state variables of the sound material are at the initial condition (u_0, α_0, p_0) = (0, 0, 0). In the one-dimensional setting the plastic yield criterion is |τ| ≤ σ_p, thus the plastic potential power is given by … By integrating over the process, the dissipated plastic energy density is σ_p p̄, where the cumulated plastic strain is p̄ = ∫_0^t |ṗ_s| ds. Since no external force is applied, the total energy of the bar is given by …, where E is the Young's modulus and (•)' = ∂(•)/∂x. The quadruple of state variables (u_t, α_t, p_t, p̄_t) ∈ C_t × D × Q × M(Ω, R) is a solution of the evolution problem if the following conditions hold: 1. The equilibrium: … The stress is constant along the bar, hence it is only a function of time.
Title: Variational phase-field models of brittle and ductile fracture: nucleation and propagation
Keywords: phase-field models of fracture, crack nucleation, size effects in brittle materials, gradient damage models, hydraulic fracturing, crack stability, plasticity models, variational approach, ductile fracture.
Abstract: Numerical simulations of brittle fracture with gradient damage models are now becoming widespread. Theoretical and numerical results show that, in the presence of a pre-existing crack, propagation follows Griffith's criterion. For the one-dimensional problem, crack nucleation takes place at the critical stress, and this last property sets the internal length parameter. In this work we focus on the phenomenon of crack nucleation for commonly encountered geometries that do not admit closed-form solutions. We show that for U- and V-notches the crack initiation varies continuously between the solution predicted by the critical stress and the one given by the material toughness. A series of verifications and validations on different materials is carried out for the two geometries considered. We then consider an elliptical defect in an infinite or elongated domain to illustrate the ability of the model to account for material and structural size effects. In a second step, this model is extended to hydraulic fracturing. A first verification phase is performed by stimulating a single pre-existing crack through the injection of a given quantity of fluid. We then study the simulation of a parallel network of cracks. The results obtained show that only one crack of the network is activated, and that this type of configuration satisfies the principle of least energy. The last example focuses on crack stability in a pressure-imposed burst experiment for the petroleum industry. This rock burst experiment is carried out in the laboratory in order to reproduce the confinement conditions encountered during drilling. The last part of this work focuses on ductile fracture by coupling the phase-field model with perfect plasticity models. Thanks to the variational structure of the problem, we describe the numerical implementation adopted for parallel computing.
The simulations performed show that, for a mildly notched geometry, the phenomenology of ductile fracture, such as nucleation and propagation, is in agreement with that reported in the literature.
Title: Variational phase-field models from brittle to ductile fracture: nucleation and propagation
Keywords: Phase-field models of fracture, crack nucleation, size effects in brittle materials, validation & verification, gradient damage models, hydraulic fracturing, crack stability, plasticity model, variational approach, ductile fracture
Abstract: Phase-field models, sometimes referred to as gradient damage models, are widely used methods for the numerical simulation of crack propagation in brittle materials. Theoretical results and numerical evidence show that they can predict the propagation of a pre-existing crack according to Griffith's criterion. For a one-dimensional problem, it has been shown that they can predict nucleation upon a critical stress, provided that the regularization parameter is identified with the material's internal characteristic length. In this work, we draw on numerical simulations to study crack nucleation in commonly encountered geometries for which closed-form solutions are not available. We use U- and V-notches to show that the nucleation load varies smoothly from the one predicted by a strength criterion to the one of a toughness criterion when the strength of the stress concentration or singularity varies. We present validation and verification of numerical simulations for both types of geometries. We consider the problem of an elliptic cavity in an infinite or elongated domain to show that variational phase-field models properly account for structural and material size effects. In a second step, this model is extended to hydraulic fracturing. We present a validation of the model by simulating a single fracture in a large domain subject to a controlled amount of fluid. Then we study an infinite network of pressurized parallel cracks. Results show that the stimulation of a single fracture is the best energy minimizer compared to the multi-fracking case. The last example focuses on fracture stability regimes, using linear elastic fracture mechanics for pressure-driven fractures in an experimental geometry used in the petroleum industry, the burst experiment, which replicates a situation encountered downhole with a borehole. The last part of this work focuses on ductile fracture by coupling phase-field models with perfect plasticity. Based on the variational structure of the problem, we give a numerical implementation of the coupled model for parallel computing. Simulation results for mildly notched specimens are in agreement with the phenomenology of ductile fracture, such as nucleation and propagation, commonly reported in the literature.
295,031
[ "1306573" ]
[ "1167" ]
01758434
en
[ "info" ]
2024/03/05 22:32:10
2015
https://inria.hal.science/hal-01758434/file/371182_1_En_22_Chapter.pdf
Edirlei Soares de Lima, Antonio L. Furtado (email: furtado@inf.puc-rio.br), Bruno Feijó (email: bfeijo@inf.puc-rio.br)
Storytelling Variants: The Case of Little Red Riding Hood
Keywords: Folktales, Variants, Types and Motifs, Semiotic Relations, Digital Storytelling, Plan Recognition
A small number of variants of a widely disseminated folktale is surveyed, and then analyzed in an attempt to determine how such variants can emerge while staying within the conventions of the genre. The study follows the classification of types and motifs contained in the Index of Antti Aarne and Stith Thompson. The paper's main contribution is the characterization of four kinds of type interactions in terms of semiotic relations. Our objective is to provide the conceptual basis for the development of semi-automatic methods to help users compose their own narrative plots.
Introduction
When trying to learn about storytelling, in order to formulate and implement methods usable in a computer environment, two highly influential approaches come immediately to mind, both dealing specifically with folktales: Propp's functions [START_REF] Propp | Morphology of the Folktale[END_REF] and the comprehensive classification of types and motifs proposed by Antti Aarne and Stith Thompson, known as the Aarne-Thompson Index (hereafter simply Index) [START_REF] Aarne | The Types of the Folktale[END_REF][START_REF] Thompson | The Folktale[END_REF][START_REF] Uther | The Types of International Folktales[END_REF]. In previous work, as part of our Logtell project [START_REF] Ciarlini | Modeling interactive storytelling genres as application domains[END_REF][START_REF] Ciarlini | A logic-based tool for interactive generation and dramatization of stories[END_REF], we developed prototypes to compose narrative plots interactively, employing a plan-generation algorithm based on Propp's functions. Starting from different initial states, and giving to users the power to intervene in the generation process, within the limits of the conventions of the genre on hand, we were able to obtain in most cases a fair number of different plots, thereby achieving an encouraging level of variety in plot composition. We now propose to invest in a strategy that is based instead on the analysis of already existing stories. Though we shall focus on folktales, an analogous conceptual formulation applies to any genre strictly regulated by conventions and definable in terms of fixed sets of personages and characteristic events. In all such genres one should be able to pinpoint the equivalent of Proppian functions, as well as of ubiquitous types and motifs, thus opening the way to the reuse of previously identified narrative patterns as an authoring resource. Indeed it is a well-established fact that new stories often emerge as creative adaptations and combinations of old stories: this is a most common practice among even the best professional authors, though surely not easy to trace in its complex ramifications, as eloquently expressed by the late poststructuralist theoretician Roland Barthes [3, p. 39]:
Any text is a new tissue of past citations. Bits of code, formulae, rhythmic models, fragments of social languages, etc., pass into the text and are redistributed within it, for there is always language before and around the text.
Intertextuality, the condition of any text whatsoever, cannot, of course, be reduced to a problem of sources or influences; the intertext is a general field of anonymous formulae whose origin can scarcely ever be located; of unconscious or automatic quotations, given without quotation marks.
The present study utilizes types and motifs of the Aarne-Thompson's Index, under whose guidance we explore what the ingenuity of supposedly unschooled narrators has legated. We chose to concentrate on folktale type AT 333, centered on The Little Red Riding Hood and spanning some 58 variants (according to [START_REF] Tehrani | The Philogeny of Little Red Riding Hood[END_REF]), from which we took a small sample. The main thrust of the paper is to investigate how such a rich diversity of variants of traditional folktales came to be produced, as they were told and retold by successive generations of oral storytellers, hoping that some of their tactics are amenable to semi-automatic processing. An added incentive to work with folktale variants is the movie industry's current interest in adaptations of folktales for adult audiences, in contrast to early Disney classic productions. Related work is found in the literature of computational narratology [START_REF] Cavazza | Narratology for Interactive Storytelling: A Critical Introduction[END_REF][START_REF] Mani | Computational Narratology[END_REF], a new field that examines narratology from the viewpoint of computation and information processing, which offers models and systems based on tale types/motifs that can be used in story generation and/or story comparison. Karsdorp et al. [START_REF] Karsdorp | In Search of an Appropriate Abstraction Level for Motif Annotations[END_REF] believe that oral transmission of folktales happens through the replication of sequences of motifs. Darányi et al. [START_REF] Darányi | Toward Sequencing 'Narrative DNA': Tale Types, Motif Strings and Memetic Pathways[END_REF] handle motif strings like chromosome mutations in genetics. Kawakami et al. [START_REF] Kawakami | On Modeling Conceptual and Narrative Structure of Fairytales[END_REF] cover 23 Japanese texts of Cinderella tales, whilst Swartjes et al. use Little Red Riding Hood as one of their examples [START_REF] Swartjes | Iterative authoring using story generation feedback: debugging or co-creation?[END_REF]. Our text is organized as follows. Section 2 presents the two classic variants of AT 333. Section 3 summarizes additional variants. Section 4 has our analysis of the variant-formation phenomenon, with special attention to the interaction among types, explained in terms of semiotic relations. Section 5 describes a simple plan-recognition prototype working over variant libraries. Section 6 contains concluding remarks. The full texts of the variants cited in the text are available in a separate document.¹
2 The two classic variants
In the Index, the type of interest, AT 333, characteristically named The Glutton, is basically described as follows, noting that two major episodes are listed [1, p. 125]:
The first classic variant, Le Petit Chaperon Rouge (Little Red Riding Hood), was composed in France in 1697, by Charles Perrault [START_REF] Perrault | Little Red Riding Hood[END_REF], during the reign of Louis XIV th . It consists of the first episode alone, so that there is no happy ending, contrary to what children normally expect from nursery fairy tales. The little girl, going through the woods to see her grandmother, is accosted by the wolf who reaches the grandmother's house ahead of her. The wolf kills the grandmother and takes her place in bed. When the girl arrives, she is astonished at the "grandmother"'s large, ears, large eyes, etc., until she asks about her huge teeth, whereat the wolf gobbles her up. Following a convention of the genre of admonitory fables, a "moralité" is appended, to the effect that well-bred girls should not listen to strangers, particularly when they pose as "gentle wolves" The second and more influential classic variant is that of the brothers Grimm (Jacob and Wilhelm), written in German, entitled Rotkäppchen (Little Red Cap) [START_REF] Grimm | The Complete Grimm's FairyTales[END_REF], first published in 1812. The girl's question about the wolf's teeth is replaced by: "But, grandmother, what a dreadful big mouth you have!" This is a vital changenot being bitten, the victims are gobbled up aliveand so the Grimm variant can encompass the two episodes prescribed for the AT 333 type. Rescue is effected by a hunter, who finds the wolf sleeping and cuts his belly, allowing girl and grandmother to escape. The wolf, his belly filled with heavy stones fetched by the girl, wakes up, tries to run away and falls dead, unable to carry the weight. As a moral addendum to the happy ending, the girl promises to never again deviate from the path when so ordered by her mother. Having collected the story from two distinct sources, the brothers wrote a single text with a second finale, wherein both female characters show that they had learned from their experience with the villain. A second wolf comes in with similar proposals. The girl warns her grandmother who manages to keep the animal outside, and eventually they cause him to fall from the roof into a trough and be drowned. Some other variants In [START_REF] Tehrani | The Philogeny of Little Red Riding Hood[END_REF] no less than 58 folktales were examined as belonging to type AT 333 (and AT 123). Here we shall merely add seven tales to the classic ones of the previous section. Since several variants do not mention a red hood or a similar piece of clothing as attribute of the protagonist, the conjecture was raised that this was Perrault's invention, later imitated by the Grimms. However a tale written in Latin by Egbert de Liège in the 11 th century, De puella a lupellis seruata (About a Girl Saved from Wolf Cubs) [START_REF] Ziolkowski | A Fairy Tale from before Fairy Tales: Egbert of Liège's 'De puella a lupellis seruata' and the Medieval Background of 'Little Red Riding Hood[END_REF], arguably prefiguring some characteristics of AT 333, features a red tunic which is not merely ornamental but plays a role in the events. The girl had received it as a baptismal gift from her godfather. When she was once captured by a wolf and delivered to its cubs to be eaten, she suffered no harm. The virtue of baptism, visually represented by the red tunic, gave her protection. The cubs, their natural ferocity sub-dued, gently caressed her head covered by the tunic. 
The moral lesson, in this case, is consonant with the teaching of the Bible (Daniel VI, 27). Whilst in the variants considered so far the girl is presented as naive, in contrast to the clever villain, the situation is reversed in the Conte de la Mère-grand (The Story of Grandmother), collected by folklorist Achille Millien in the French province of Nivernais, circa 1870, and later published by Paul Delarue [START_REF] Delarue | The Story of Grandmother[END_REF]. In this variant, which some scholars believe to be closer to the primitive oral tradition, the villain is a "bzou", a werewolf. After killing and partly devouring the grandmother's body, he stores some of her flesh and fills a bottle with her blood. When the girl comes in, he directs her to eat and drink from these ghastly remains. Then he tells her to undress and lie down on the bed. Whenever the girl asks where to put each piece of clothing, the answer is always: "Throw it in the fire, my child; you don't need it anymore." In the ensuing dialogue about the peculiar physical attributes of the fake grandmother, when the question about her "big mouth" is asked the bzou gives the conventional reply: "All the better to eat you with, my child!" But this time the action does not follow the words. What happens instead is that the girl asks permission to go out to relieve herself, which is a ruse whereby she ends up outsmarting the villain and safely going back home (cf. http://expositions.bnf.fr/contes/gros/chaperon/nivers.htm). An Italian variant published by Italo Calvino, entitled Il Lupo e le Tre Ragazze (The Wolf and the Three Girls) [START_REF] Calvino | Italian Folktales[END_REF], adopts the trebling device [START_REF] Propp | Morphology of the Folktale[END_REF] so common in folktales, making three sisters, one by one, repeat the action of taking victuals to their sick mother. The wolf intercepts each girl but merely demands the food and drink that they carry. The youngest girl, who is the protagonist, throws at the wolf a portion that she had filled with nails. This infuriates the wolf, who hurries to the mother's house to devour her and lie in wait for the girl. After the customary dialogue with the wolf posing as the mother, the animal also swallows the girl. The townspeople observe the wolf coming out, kill him and extract mother and girl alive from his belly. But that is not all, as Calvino admits in an endnote. Having found the text as initially collected by Giambattista Basile, he had deliberately omitted what he thought to be a too gruesome detail ("una progressione troppo truculenta"): after killing the mother, the wolf had made "a doorlatch cord out of her tendons, a meat pie out of her flesh, and wine out of her blood". Repeating the strange above-described episode of the Conte de la Mère-grand, the girl is induced to eat and drink from these remains, with the aggravating circumstance that they belonged to her mother, rather than to a more remotely related grandparent. Turning to China, one encounters the tale Lon Po Po (Grammie Wolf), translated by Ed Young [START_REF] Young | Lon Po Po: A Red-Riding Hood Story from China[END_REF], which again features three sisters but, unlike the Western folktale cliché, shows the eldest as protagonist, more experienced and also more resourceful than the others. The mother, here explicitly declared to be a young widow, goes to visit the grandmother on her birthday, and warns Shang, the eldest, not to let anyone inside during her absence.
A wolf overhears her words, disguises as an old woman and knocks at the door claiming to be the grandmother. After some hesitation, the girls allow him to enter and, in the dark, since the wolf claims that light hurts his eyes, they go to bed together. Shang, however, lighting a candle for a moment catches a glimpse of the wolf's hairy face. She convinces him to permit her two sisters to go outside under the pretext that one of them is thirsty. And herself is also allowed to go out, promising to fetch some special nuts for "Grammie". Tired of waiting for their return, the wolf leaves the house and finds the three sisters up in a tree. They persuade him to fetch a basket mounted on which they propose to bring him up, in order to pluck with his own hands the delicious nuts. They pull on the rope attached to the basket, but let it go so that the wolf is seriously bruised. And he finally dies when the false attempt is repeated for the third time. Another Chinese variant features a bear as the villain: Hsiung chia P`o (Goldflower and the Bear) [START_REF] Mi | Goldflower and the Bear[END_REF], translated by Chiang Mi. The crafty protagonist, Goldflower, is once again an elder sister, living with her mother and a brother. The mother leaves them for one day to visit their sick aunt, asking the girl to take care of her brother and call their grandmother to keep them company during the night. The bear knocks at the door, posing as the grandmother. Shortly after he comes in, the girlin spite of the darknessends up disclosing his identity. She manages to lock the boy in another room, and then obeys the bear's request to go to bed at his side. The villain's plan is to eat her at midnight, but she asks to go out to relieve her tummy. As distrustful as the werewolf in the before-mentioned French variant, the bear ties one end of a belt to her handan equally useless precaution. Safely outside on top of a tree, Goldflower asks if he would wish to eat some pears, to be plucked with a spear, which the famished beast obligingly goes to fetch in the house. The girl begins with one fruit, but the next thing to be thrown into his widely open gullet is the spear itself. Coming back in the morning, the mother praises the brave little Goldflower. One variant, published in Portugal by Guerra Junqueiro, entitled O Chapelinho Encarnado [START_REF] Guerra Junqueiro | Contos para a Infância[END_REF], basically follows the Grimm brothers pattern. A curious twist is introduced: instead of luring the girl to pick up wild flowers, the wolf points to her a number of medicinal herbs, all poisonous plants in reality, and she mistakes him for a doctor. At the end, the initiative of filling the belly of the wolf with stones is attributed not to the girl, but to the hunter, who, after skinning the animal, merrily shares the food and drink brought by the girl with her and her grandmother. The highly reputed Brazilian folklorist Camara Cascudo included in his collection [START_REF] Camara Cascudo | Contos Tradicionais do Brasil[END_REF] a variant, O Chapelinho Vermelho, which also follows the Grimm brothers pattern. The mother is introduced as a widow and the name of the girl is spelled out: Laura. Although she is known, as the conventional title goes, by a nickname translatable as "Little Red Hat", what she wears every day is a red parasol, given by her mother. 
One more particularity is that, upon entering her grandmother's house, the girl forgets to close the door, so that finding the door open is what strikes the hunter as suspicious when he approaches the house. The hunter bleeds the wolf with a knife and, noticing his distended belly, proceeds to open it thus saving the two victims. Nothing is said about filling the wolf's belly with stones, the wounds inflicted by the hunter's knife having been enough to kill him. Two prudent lessons are learned: (1) Laura would not forget her mother's recommendation to never deviate from the path, the specific reason being given here that there existed evil beasts in the wood; (2) living alone should no longer be an option for the old woman, who from then on would dwell with her daughter and granddaughter. Comments on the formation of variants It is a truism that people tend to introduce personal contributions when retelling a story. There are also cultural time and place circumstances that require adaptations; for example, in the Arab world the prince would in no way be allowed to meet Cinderella in a ballroomhe falls in love without having ever seen her (cf. "Le Bracelet de Cheville" in the Mardrus translation of One Thousand and One Nights [START_REF] Mardrus | Les Mille et une Nuits[END_REF]). Other differences among variants may result from the level of education of the oral storytellers affecting how spontaneous they are, and the attitude of the collectors who may either prefer to reproduce exactly what they hear or introduce corrections and rational explanations while omitting indecorous or gruesome scenes. On the storyteller's part, however, this tendency is often attenuated by an instinctive pact with the audiencewith children, in specialin favour of faithful repetition, preferably employing the very same words. Indeed the genre of folktales is strongly marked by conventions which, to a remarkable extent, remain the same in different times and places. The folklorist Albert Lord called tension of essences the compulsion that drives all singers (i.e. traditional oral storytellers) to strictly enforce such conventions [29, p. 98]: In our investigation of composition by theme this hidden tension of essences must be taken into consideration. We are apparently dealing here with a strong force that keeps certain themes together. It is deeply imbedded in the tradition; the singer probably imbibes it intuitively at a very early stage of his career. It pervades his material and the tradition. He avoids violating the group of themes by omitting any of its members. [We shall see] that he will even go so far as to substitute something similar if he finds that for one reason or another he cannot use one of the elements in its usual form. The notion of tension of essences may perhaps help explaining not only the total permanence of some variants within the frontiers of a type, but also the emergence of transgressive variants, which absorb features pertaining to other types, sometimes even provoking a sensation of strangeness. When an oral storyteller feels the urge "to substitute something similar" in a story, the chosen "something" should, as an effect of the tension-of-essences forceful compulsion, still belong to the folktale genrebut what if the storyteller's repertoire comprises more than one folktale type? 
As happens with many classifications, the frontiers between the types in the Index are often blurred, to the point that one or more motifs can be shared and some stories may well be classified in more than one type. So a viable hypothesis can be advanced that some variants did originate through, so to speak, a type-contamination phenomenon. Accordingly we propose to study type interactions as a possible factor in the genesis of variants. We shall characterize the interactions that may occur among types, also involving motifs, by way of semiotic relations, taking an approach we applied before to the conceptual modelling of both literary genres and business information systems [START_REF] Ciarlini | Event relations in plot-based plot composition[END_REF][START_REF] Karlsson | Conceptual Model and System for Genre-Focused Interactive Storytelling[END_REF][START_REF] Furtado | Constructing Libraries of Typical Plans[END_REF]. We distinguish four kinds of semiotic relations, associated with the so-called four master tropes [START_REF] Burke | A Grammar of Motives[END_REF][START_REF] Chandler | Semiotics: the Basics[END_REF], whose significance has been cogently stressed by a literary theory scholar, Jonathan Culler, who regards them "as a system, indeed the system, by which the mind comes to grasp the world conceptually in language" [15, p. 72]. For the ideas and for the nomenclature in the table below, we are mainly indebted to the pioneering semiotic studies of Ferdinand de Saussure [START_REF] Saussure | Cours de Linguistique Générale[END_REF]: The itemized discussion below explores the meaning of each of the four semiotic relations, as applied to the derivation of folktale type variants stemming from AT 333. relation (1) Syntagmatic relation with type AT 123. As mentioned at the beginning of section 2, the Index describes type AT 333 as comprising two episodes, namely Wolf's Feast and Rescue, but the classic Perrault variant does not proceed beyond the end of the first episode. As a consequence, one is led to assume that the Rescue episode is not essential to characterize AT 333. On the other hand the situation created by Wolf's Feast is a long distance away from the happy-ending that is commonly expected in nursery fairy tales. A continuation in consonance with the Rescue episode, exactly as described in the Index, is suggested by AT 123: The Wolf and the Kids, a type pertaining to the group of Animal Tales, which contains the key motif F913: Victims rescued from swallower's belly. The connection (syntagmatic relation) whereby AT 123 complements AT 333 is explicitly declared in the Index by "cf." cross-references [1, p. 50, p. 125]. Moreover the Grimm brothers variant, which has the two episodes, is often put side by side with another story equally collected by them, The Wolf and the Seven Little Kids [START_REF] Grimm | The Complete Grimm's FairyTales[END_REF], clearly of type AT 123. Still it must be noted that several of the variants reported here do not follow the Grimm pattern in the Rescue episode. They diverge with respect to the outcome, which, as seen, may involve the death of the girl, or her rescue after being devoured, or even her being totally preserved from the villain's attempts either by miraculous protection or by her successful ruses. (2) Paradigmatic relation with type AT 311B*. For the Grimm variant, as also for those that follow its pattern (e.g. 
the Italian and the two Portuguese variants in section 3), certain correspondences or analogies can be traced with variants of type AT 311B*: The Singing Bag, a striking example being another story collected in Brazil by Camara Cascudo [START_REF] Camara Cascudo | Contos Tradicionais do Brasil[END_REF], A Menina dos Brincos de Ouro (The Girl with Golden Earrings). Here the villain is neither an animal nor a werewolf; he is a very ugly old man, still with a fearsome aspect but no more than human. The golden earrings, a gift from her mother, serve as the girl's characteristic attribute and have a function in the plot. As will be noted in the summary below, the villain's bag becomes the wolf's belly of the Grimm variant, and what is done to the bag mirrors the act of cutting the belly and filling it with stones. In this sense, the AT 311B* variant replaces the Grimm variant. One day the girl went out to bring water from a fountain. Having removed her earrings to wash herself, she forgot to pick them up before returning. Afraid to be reprimanded by her mother, she walked again to the fountain, where she was caught by the villain and sewed inside a bag. The man intended to use her to make a living. At each house that he visited, he advertised the magic bag, which would sing when he menaced to strike it with his staff. Everywhere people gave him money, until he came inadvertently to the girl's house, where her voice was recognized. He was invited to eat and drink, which he did in excess and fell asleep, whereat the bag was opened to free the girl and then filled with excrement. At the next house visited, the singing bag failed to work; beaten with the staff, it ruptured spilling its contents. (3) Meronymic relation with type AT 437. In The Story of Grandmother the paths taken by the girl and the werewolf to reach the old lady's house are called, respectively, the Needles Road and the Pins Road. And, strangely enough, while walking along her chosen path, the little girl "enjoyed herself picking up needles" [START_REF] Delarue | The Story of Grandmother[END_REF]. Except for this brief and puzzling mention, these objects remain as meaningless details, having no participation in the story. And yet, browsing through the Index, we see that needles and pins are often treated as wondrous objects (motifs D1181: Magic Needle and D1182: Magic Pin). And traversing the Index hierarchy upwards, from motifs to types, we find them playing a fundamental role in type AT 437: The Needle Prince (also named The Supplanted Bride), described as follows [1, p. 140]: "The maiden finds a seemingly dead prince whose body is covered with pins and needles and begins to remove them ... ". Those motifs are thus expanded into a full narrative in AT 437. Especially relevant to the present discussion is a variant from Afghanistan, entitled The Seventy-Year-Old Corpse reported by Dorson [START_REF] Dorson | Folktales Told Around the World[END_REF], which has several elements in common with the AT 333 variants. An important difference, though, also deserves mention: the girl lives alone with her old father, who takes her to visit her aunt. We are told that, instead of meeting the aunt, the girl finds a seventy year old corpse covered with needles, destined to revive if someone would pick the needles from his body. At the end the girl marries the "corpse", whereas no further news are heard about her old father, whom she had left waiting for a drink of water. 
One is tempted to say that Bruno Bettelheim would regard this participation of two old males, the father and the daunting corpse, as an uncannily explicit confirmation of the presence in two different formsof the paternal figure, in an "externalization of overwhelming oedipal feelings, and ... in his protective and rescuing function" [4, p. 178]. (4) Antithetic relation with type AT 449. Again in The Story of Grandmother we watch the strange scene of the girl eating and drinking from her grandmother's remains, punctuated by the acid comment of a little cat: "A slut is she who eats the flesh and drinks the blood of her grandmother!" The scene has no consequence in the plot, and in fact it is clearly inconsistent with the role of the girl in type AT 333. It would sound natural, however, in a type in opposition to AT 333, such as AT 449: The Tsar's Dog, wherein the roles of victim and villain are totally reversed. The cannibalistic scene in The Story of Grandmother has the effect of assimilating the girl to a ghoul (motif G20 in the Index), and the female villain of the most often cited variant of type AT 449, namely The Story of Sidi Nouman (cf. Andrew Lang's translation in Arabian Nights Entertainment) happens to be a ghoul. No less intriguing in The Story of Grandmother are the repartees in the ensuing undressing scene, with the villain (a werewolf, as we may recall) telling the girl to destroy each piece of clothing: "Throw it in the fire, my child; you don't need it anymore." This, too, turns out to be inconsequential in the plot, but was a major concern in the werewolf historical chronicles and fictions of the Middle Ages [START_REF] Baring-Gould | The Book of Were-Wolves[END_REF][START_REF] Sconduto | Metamorphoses of the Werewolf: A Literary Study from Antiquity Through the Renaissance[END_REF]. In 1521, the Inquisitor-General for the diocese of Besançon heard a case involving a certain Pierre Bourget [START_REF] Baring-Gould | The Book of Were-Wolves[END_REF]. He confessed under duress that, by smearing his body with a salve given by a demon, he became a wolf, but "the metamorphosis could not take place with him unless he were stark naked". And to recover his form he would "beat a retreat to his clothes, and smear himself again". Did the werewolf in The Story of Grandmother intend to transform the girl into a being of his species? Surely the anonymous author did not mean that, but leaving aside the norms of AT 333 the idea would not appear to be so farfetched. In this regard, also illustrating type AT 449, there are two medieval lays (short narrative poems) that deserve our attention. They are both about noble knights with the ability to transform themselves into wolves. In the two narratives, they are betrayed by their villainous wives, intent on permanently preventing their resuming the human form. In Marie de France's lay of Bisclavret [START_REF] De | The Lais of Marie de France[END_REF] an old Breton word signifying "werewolf"the woman accomplishes this effect by stealing from a secret hiding place the man's clothes, which he needed to put on again to undo the transformation. In the other example, the anonymous lay of Melion [START_REF] Burgess | Eleven Old French Narrative Lays[END_REF], after a magic ring is applied to break the enchantment, the man feels tempted to punish the woman by inflicting upon her the same metamorphosis. 
In the preceding discussion we purported to show how types can be semiotically related, and argued that such relations constitute a factor to be accounted for in the emergence of variants. We should add that types may be combined in various ways to yield more complex types, whose attractiveness is heightened by the occurrence of unexpected changes. Indeed Aristotle's Poetics distinguishes simple and complex plots, characterizing the latter by recognition (anagnorisis) and reversal (peripeteia). Differently from reversal, recognition does not imply that the world changed, but that the beliefs of the characters about themselves and the current facts were altered. In particular, could a legitimate folktale promote the union of monster and girl? Could we conciliate type AT 333 (where the werewolf is a villain) with the antithetically related medieval lays of type AT 449 (where the werewolf is the victim)? Such conciliations of opposites are treated under the topic of blending [START_REF] Fauconnier | Conceptual projection and middle spaces[END_REF], often requiring creative adaptations. A solution is given by type AT 425C: Beauty and the Beast. At first the Beast is shown as the villain, claiming the life of the merchant or else of one of his daughters: "Go and see if there's one among them who has enough courage and love for you to sacrifice herself to save your life" [41, p. 159], but then he proves to be the victim of an enchantment. Later, coming to sense his true inner nature (an event of recognition, as in Aristotle), Belle makes him human again by manifesting her love (motif D735-1: Disenchanting of animal by being kissed by woman). So, it is as human beings that they join. Alternatively, we might combine AT 333 and AT 449 by pursuing the anomalous passages of The Story of Grandmother until some sort of outcome, allowing the protagonists to join in a non-human form. The werewolf feeds the human flesh of his victim to the girl, expecting that she would transform herself as he did (as Melion for a moment thought to cast the curse upon his wife), thereby assuming a shape that she would keep forever once her clothes were destroyed (recall the concern of Pierre Bourget to "beat a retreat to his clothes", and the knight's need to get back his clothes in Bisclavret). At the end the two werewolves would marry and live happily forever after, as a variant of an admittedly misbegotten new type (of, perhaps, a modern appeal, since it would also include among its variants the story of the happy vampires Edward and Bella in the Twilight Saga: http://twilightthemovie.com/). First steps towards variants in computer-generated stories To explore in a computer environment the variants of folktale types, kept in a library of typical plans, we developed a system in C# that does plan-recognition over the variants of the type indicated (e.g. AT 333), with links to pages of semiotically related types (e.g. AT 123, AT 311B*, AT 437, AT 449). Plan-recognition involves matching a number of actions against a pre-assembled repertoire of plot patterns (cf. [START_REF] Furtado | Constructing Libraries of Typical Plans[END_REF][START_REF] Karlsson | Conceptual Model and System for Genre-Focused Interactive Storytelling[END_REF]). Let P be a set of m variants of a specific tale type that are represented by complete plans, P = {P_1, P_2, ..., P_m}, where each plan is a sequence of events, i.e. P_i = ⟨e_1^i, e_2^i, ..., e_{n_i}^i⟩. 
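To fix notation, the fragment below shows one minimal way of encoding such a set of complete plans. The paper's system is written in C#; this Python sketch and its event names are purely illustrative.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Event:
    """One story event e_j^i: an action applied to its arguments."""
    action: str
    args: Tuple[str, ...]

# A plan P_i is a sequence of events; the set P collects the m variants of one tale type.
Plan = List[Event]

P: List[Plan] = [
    # a toy two-event variant
    [Event("meet", ("girl", "wolf")), Event("eat", ("wolf", "girl"))],
    # a toy three-event variant that adds a rescue
    [Event("meet", ("girl", "wolf")), Event("eat", ("wolf", "girl")),
     Event("cut", ("hunter", "wolf"))],
]
```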
The events e_j^i are actions with ground arguments that are story elements (specific names, places, and objects). For instance, P_k = ⟨go(Abel, Beach), meet(Abel, Cain), kill(Cain, Abel)⟩. The library of typical plans is defined by associating each plan P_i with the following elements: (1) the story title; (2) a set of parameterized terms, akin to those we use in Logtell [START_REF] Ciarlini | A logic-based tool for interactive generation and dramatization of stories[END_REF] to formalize Proppian functions, describing the story events; (3) the specification of the characters' roles (e.g. villain, victim, hero) and objects' functions (e.g. wolf's feast place, basket contents); (4) the semiotic relations of the story with other variants of same or different types (Section 4); (5) a text template used to display the story as text, wherein certain phrases are treated as variables (written in the format #VAR1#); and (6) the comics resources used for dramatization, indicating the path to the folder that contains the images representing the characters and objects of the narrative and a set of event templates to describe the events textually. The library is specified in an XML file. Let T be a partial plan expressed as a sequence of events given by the user. The system finds plans in P that are consistent with T. During the searching process, the arguments of the events in P are instantiated. For example, with the input T = {give(Anne, ring, Little Ring Girl), ask_to_take(Marie, Little Ring Girl, tea, Anne), eat(Joe, Little Ring Girl)}, the following stories are generated: Story 1: give(Anne, ring, Little Ring Girl), ask_to_take(Marie, Little Ring Girl, tea, Anne), go(Little Ring Girl, the woods), meet(Little Ring Girl, Joe), go(Joe, Grandmother's house), eat(Joe, Anne), disguise(Joe, Anne), lay_down(Joe, Grandmother's bed), go(Little Ring Girl, Grandmother's house), delivery(Little Ring Girl, tea), question(Little Ring Girl, Joe), eat(Joe, Little Ring Girl), sleep(Joe), go(Hunter, Grandmother's house), cut(Hunter, Joe, axe), jump_out_of(Little Ring Girl, Joe), jump_out_of(Anne, Joe), die(Joe). Story 2: give(Anne, ring, Little Ring Girl), ask_to_take(Marie, Little Ring Girl, tea, Anne), go(Little Ring Girl, the woods), meet(Little Ring Girl, Joe), go(Joe, Grandmother's house), eat(Joe, Anne), disguise(Joe, Anne), lay_down(Joe, Grandmother's bed), go(Little Ring Girl, Grandmother's house), lay_down(Little Ring Girl, Grandmother's bed), delivery(Little Ring Girl, tea), question(Little Ring Girl, Joe), eat(Joe, Little Ring Girl). These correspond, respectively, to the Grimm and Perrault AT 333 variants, rephrased to display the names of characters and objects given by the user. Our plan recognition algorithm employs a tree structure, which we call a generalized plan suffix tree. Based on the suffix tree commonly used for string pattern matching [START_REF] Gusfield | Algorithms on Strings, Trees, and Sequences[END_REF], this trie-like data structure contains all suffixes p_k of each plan in P. If a plan P_i has the sequence of events p = e_1 e_2 ... e_k ... e_N, then p_k = e_k e_{k+1} ... e_N is the suffix of p that starts at position k (we have dropped the index i of the expressions p and p_k for the sake of simplicity). In a generalized plan suffix tree S, edges are labeled with the parameterized plan events that belong to each suffix p_k, and the leaves point to the complete plans ending in p_k. Each suffix is padded with a terminal symbol $i that uniquely signals the complete plan in the leaf node. 
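The matching semantics behind these examples can be sketched independently of the index structure. The paper's algorithm walks a generalized plan suffix tree; the version below simply scans one complete plan, checking that the input terms occur as a subsequence (in chronological but not necessarily consecutive order) while binding role variables, written here as "?name", to the ground values supplied by the user. The names and the variable convention are ours, not the C# implementation's.

```python
from typing import Dict, List, Optional, Tuple

# An event is (action, args). Arguments written as "?name" are variables (roles to fill);
# anything else is a ground story element supplied by the library or the user.
Event = Tuple[str, Tuple[str, ...]]
Binding = Dict[str, str]

def unify_event(pattern: Event, term: Event, binding: Binding) -> Optional[Binding]:
    """Try to match one library event against one input term, extending the binding."""
    if pattern[0] != term[0] or len(pattern[1]) != len(term[1]):
        return None
    new = dict(binding)
    for p_arg, t_arg in zip(pattern[1], term[1]):
        if p_arg.startswith("?"):                 # a role variable
            if new.setdefault(p_arg, t_arg) != t_arg:
                return None                       # already bound to a different value
        elif p_arg != t_arg:                      # two ground constants must be identical
            return None
    return new

def recognize(plan: List[Event], partial: List[Event]) -> Optional[Binding]:
    """Check that the partial plan T occurs in the complete plan as a subsequence,
    returning the variable bindings of the first match found (or None)."""
    def search(i: int, j: int, binding: Binding) -> Optional[Binding]:
        if j == len(partial):
            return binding
        if i == len(plan):
            return None
        b = unify_event(plan[i], partial[j], binding)
        if b is not None:
            found = search(i + 1, j + 1, b)       # match this event...
            if found is not None:
                return found
        return search(i + 1, j, binding)          # ...or skip it and try later events

    return search(0, 0, {})

# Usage sketch: a miniature Grimm-like plan and a user-supplied partial plan.
grimm = [("go", ("?girl", "woods")), ("meet", ("?girl", "?villain")),
         ("eat", ("?villain", "?girl")), ("cut", ("?hero", "?villain"))]
T = [("meet", ("Little Ring Girl", "Joe")), ("eat", ("Joe", "Little Ring Girl"))]
print(recognize(grimm, T))   # {'?girl': 'Little Ring Girl', '?villain': 'Joe'}
```

Events of the complete plan that are not mentioned in T keep their library defaults, which is how Story 1 and Story 2 above are filled out.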
Figure 1 shows an example of a generalized plan suffix tree generated for the plan sequences P1 = {go(A, B), meet(A, C), kill(C, A)} and P2 = {tell(A, B, C), meet(A, C), go(A, D)}. The process of searching for plans that match a given partial plan T, expressed as a sequence of input terms, is straightforward: starting from the root node, the algorithm sequentially matches T against the parameterized plan events on the edges of the tree, in chronological but not necessarily consecutive order, instantiating the event variables and proceeding until all input terms are matched and a leaf node is reached. If more solutions are requested, a backtracking procedure tries to find alternative paths matching T. The search process produces a set of complete plans G, with the event variables instantiated with the values appearing in the input partial plan or, for events not present in the partial plan, with the default values defined in the library. After generating G through plan-recognition, the system allows users to apply the semiotic relations (involving connection, similarity, unfolding, and opposition) and explore other variants of the same or different types. The process of searching for variants uses the semiotic relations specified in the library of typical plans to create a link between a g_i in G and its semiotically related variants. When instantiating one such variant v_i, the event variables of v_i are instantiated according to the characters and objects that play important roles in the baseline story g_i. Characters playing roles in g_i that also exist in v_i assume the same role in the variant. For roles that only exist in v_i, the user is asked to name the characters who would fulfil such roles. Following the g_i → v_i links taken from the examples of section 4, the user gains a chance to reinterpret the g_i AT 333 variant, in view of aspects highlighted in the semiotically related v_i: 1. the wolf's villainy complemented by a rescue act (AT 123); 2. the wolf's belly replaced by the ugly man and his bag (AT 311B*); 3. the girl's gesture of picking needles expanded to the wider scope of a disenchantment ritual (AT 437); 4. girl and werewolf with reversed roles of villain and victim (AT 449). As illustrated in Figure 2, our system supports two dramatization modalities: text and comics. The former uses the original literary rendition of the matched typical plan as a template and represents the generated stories in text format. The latter offers a storyboard-like comic strip representation, where each story event gains a graphical illustration and a short sentence description. In the illustrations, the automatic scene compositing process takes into account the specific object carried by each character and the correct movement directions. More details on the generation of comic strips can be found in our previous work on interactive comics [START_REF] Lima | Non-Branching Interactive Comics[END_REF]. similarly predefined genre, readers have a fair chance to find a given story in a treatment as congenial as possible to their tastes and personality profile. Moreover, prospective amateur authors may feel inspired to put together new variants of their own after seeing how variants can derive from the type and motif interactions that we associate with semiotic relations. They would learn how new stories can arise from episodes of existing stories, through a process, respectively, of concatenation, analogous substitution, expansion into finer grained actions, or radical reversal. Computer-based libraries, such as we described, should then constitute a vital first step in this direction. In particular, by also representing the stories as plans, in the form of sequences of terms denoting the story events (cf. the second paragraph of section 5), we effectively started to combine the two approaches mentioned in the Introduction, namely Aarne-Thompson's types and motifs and Proppian functions, and provided a bridge to our previously developed Logtell prototypes [START_REF] Ciarlini | Modeling interactive storytelling genres as application domains[END_REF][START_REF] Ciarlini | A logic-based tool for interactive generation and dramatization of stories[END_REF][START_REF] Furtado | Constructing Libraries of Typical Plans[END_REF][START_REF] Karlsson | Conceptual Model and System for Genre-Focused Interactive Storytelling[END_REF]. We expect that our analysis of variants, stimulated by further research efforts in the line of computational narratology, may contribute to the design of semi-automatic methods for supporting interactive plot composition, to be usefully incorporated into digital storytelling systems. 
Fig. 1. Generalized plan suffix tree for P1 = {go(A, B), meet(A, C), kill(C, A)} and P2 = {tell(A, B, C), meet(A, C), go(A, D)}. 
Fig. 2. Plan recognition system: (a) main user interface; (b) comics dramatization; (c) a variant for story 1; and (d) text dramatization. 
http://www-di.inf.puc-rio.br/~furtado/LRRH_texts.pdf http://www.gutenberg.org/files/1974/1974-h/1974-h.htm 
Acknowledgements This work was partially supported by CNPq (National Council for Scientific and Technological Development, linked to the Ministry of Science, Technology, and Innovation), CAPES (Coordination for the Improvement of Higher Education Personnel), FINEP (Brazilian Innovation Agency), ICAD/VisionLab (PUC-Rio), and Oi Futuro Institute.
43,316
[ "1011688", "995185", "1011687" ]
[ "362752", "362752", "362752" ]
01758437
en
[ "info" ]
2024/03/05 22:32:10
2015
https://inria.hal.science/hal-01758437/file/371182_1_En_20_Chapter.pdf
Vojtech Cerny Filip Dechterenko email: filip.dechterenko@gmail.com Rogue-like Games as a Playground for Artificial Intelligence -Evolutionary Approach Keywords: artificial intelligence, computer games, evolutionary algorithms, rogue-like Rogue-likes are difficult computer RPG games set in a procedurally generated environment. Attempts have been made at playing these algorithmically, but few of them succeeded. In this paper, we present a platform for developing artificial intelligence (AI) and creating procedural content generators (PCGs) for a rogue-like game Desktop Dungeons. As an example, we employ evolutionary algorithms to recombine greedy strategies for the game. The resulting AI plays the game better than a hand-designed greedy strategy and similarly well to a mediocre player -winning the game 72% of the time. The platform may be used for additional research leading to improving rogue-like games and general PCGs. Introduction Rogue-like games, as a branch of the RPG genre, have existed for a long time. They descend from the 1980 game "Rogue" and some old examples, such as NetHack (1987), are played even to this day. Many more of these games are made every year, and their popularity is apparent. A rogue-like is a single-player, turn-based, highly difficult RPG game, featuring a randomized environment and permanent death 1 . The player takes the role of a hero, who enters the game's environment (often a dungeon) with a very difficult goal. Achieving the goal requires a lot of skill, game experience and perhaps a little bit of luck. Such a game, bordering between RPG and puzzle genres, is challenging for artificial intelligence (AI) to play. One often needs to balance between being reactive (dealing with current problems) and proactive (planning towards the main goal). Attempts at solving rogue-likes by AI have been previously made [START_REF] Mauldin | ROG-O-MATIC: a belligerent expert system[END_REF][START_REF]Tactical Amulet Extraction Bot (TAEB) -Other Bots[END_REF][START_REF] Krajíček | NetHack Bot Framework. Master's thesis[END_REF], usually using a set of hand-coded rules as basic reasoning, and being to some extent successful. On the other hand, the quality of a rogue-like can heavily depend on its procedural content generator (PCG), which usually creates the whole environment. Procedural generation [START_REF] Shaker | Procedural Content Generation in Games: A Textbook and an Overview of Current Research[END_REF] has been used in many kinds of games [START_REF] Togelius | Search-based procedural content generation: A taxonomy and survey[END_REF][START_REF] Hendrikx | Procedural content generation for games: A survey[END_REF], and thus, the call for high-quality PCG is clear [START_REF] Liapis | Towards a Generic Method of Evaluating Game Levels[END_REF]. However, evaluating the PCG brings issues [START_REF] Dahlskog | A Comparative Evaluation of Procedural Level Generators in the Mario AI Framework[END_REF][START_REF] Smith | The Seven Deadly Sins of PCG Research[END_REF], such as how to balance between the criteria of high quality and high variability. But a connection can be made to the former -we could conveniently use the PCG to evaluate the artificial player and similarly, use the AI to evaluate the content generator. The latter may also lead to personalized PCGs (creating content for a specific kind of players) [START_REF] Shaker | Towards Automatic Personalized Content Generation for Platform Games[END_REF]. 
In this paper, we present a platform for developing AI and PCG for a rogue-like game Desktop Dungeons [11]. It is intended as an alternative to other used AI or PCG platforms, such as the Super Mario AI Benchmark [START_REF] Karakovskiy | The Mario AI Benchmark and Competitions[END_REF] or SpelunkBots [START_REF] Scales | SpelunkBots API -An AI Toolset for Spelunky[END_REF]. AI platforms have even been created for a few rogue-like games, most notably NetHack [START_REF]Tactical Amulet Extraction Bot (TAEB) -Other Bots[END_REF][START_REF] Krajíček | NetHack Bot Framework. Master's thesis[END_REF]. However, Desktop Dungeons has some characteristics making it easier to use than the other. Deterministic actions and short play times help the AI, while small dungeon size simplifies the work of a PCG. And as such, more experimental and resource demanding approaches may be tried. The platform could also aid other kinds of research or teaching AI, as some people create their own example games for this purpose [START_REF] Russell | Artificial Intelligence: A Modern Approach[END_REF]Chapter 21.2], where Desktop Dungeons could be used instead. The outline of this paper is as follows. First, we introduce the game to the reader, then we proceed to describe our platform, and finally, we will show how to use it to create a good artificial rogue-like player using evolutionary algorithms. Desktop Dungeons Description Desktop Dungeons by QCF Design [11] is a single-player computer RPG game that exhibits typical rogue-like features. The player is tasked with entering a dungeon full of monsters and, through careful manipulation and experience gain, slaying the boss (the biggest monster). Disclaimer: The following explanation is slightly simplified. More thorough and complete rules can be found at the Desktop Dungeons wiki page [START_REF]Desktop Dungeons -DDwiki[END_REF]. Dungeon The dungeon is a 20 × 20 grid viewed from the top. The grid cells may contain monsters, items, glyphs, or the hero (player). Every such object, except for the hero, is static -does not move2 . Only a 3 × 3 square around the hero is revealed in the beginning, and the rest must be explored by moving the hero next to it. Screenshot of the dungeon early in the game can be seen in Fig. 1. Hero The hero is the player-controlled character in the dungeon and holds a set of values. Namely: health, mana, attack power, the number of health/mana potions, and his spell glyphs. The hero can also perform a variety of actions. He can attack a monster, explore unrevealed parts of the dungeon, pick up items and glyphs, cast spells or convert glyphs into bonuses. Exploring Unrevealed grid cells can be explored by moving the hero next to them (at least diagonally). Not only does exploration reveal what lies underneath for the rest of the game, but it also serves one additional purpose -restoring health and mana. Every square explored will restore health equal to the hero's level and 1 mana. This means that the dungeon itself is a scarce resource that has to be managed wisely. It shall be noted, though, that monsters heal also when hero explores, so this cannot be used to gain an edge over damaged monsters. Combat Whenever the hero bumps into a monster, a combat exchange happens. The higher level combatant strikes first (monster strikes first when tied). The first attacker reduces his opponent's health by exactly his attack power. The other attacker, if alive, then does the same. No other action causes any monster to attack the hero. 
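Because combat and regeneration follow fixed arithmetic rules, a bot can evaluate a fight exactly before committing to it. The sketch below restates the two rules in Python; the field and function names are ours, not the framework's, and spells, potions and health caps are left out.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    level: int
    health: int
    attack: int

def combat_exchange(hero: Unit, monster: Unit) -> None:
    """One bump into a monster: the higher-level combatant strikes first
    (the monster strikes first on ties); the other side retaliates if it survives."""
    first, second = (hero, monster) if hero.level > monster.level else (monster, hero)
    second.health -= first.attack
    if second.health > 0:
        first.health -= second.attack

def hero_wins_fight(hero: Unit, monster: Unit) -> bool:
    """Simulate repeated exchanges (no spells, potions, or regeneration) until one side
    drops; assumes both attack values are positive so the loop terminates."""
    h = Unit(hero.level, hero.health, hero.attack)
    m = Unit(monster.level, monster.health, monster.attack)
    while h.health > 0 and m.health > 0:
        combat_exchange(h, m)
    return h.health > 0

# Example: a level-2 hero with 30 health and 8 attack against a level-3 monster.
print(hero_wins_fight(Unit(2, 30, 8), Unit(3, 20, 9)))
```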
Items Several kinds of items can be found lying on the ground. These comprise of a Health Powerup, Mana Powerup, Attack Powerup, Health Potion and a Mana Potion. These increase the hero's health, mana, attack power, and amount of health and mana potions respectively. Glyphs Spell glyphs are special items that each allow the hero to cast one kind of spell for it's mana cost. The hero starts with no glyphs, and can find them lying in the dungeon. Common spells include a Fireball spell, that directly deals damage to a monster (without it retaliating), and a Kill Protect spell, that saves the hero from the next killing blow. Additionally, a spell glyph can be converted to a racial bonus -a specific bonus depending on the hero's race. These are generally small stat increases or an extra potion. The spell cannot be cast anymore, so the hero should only convert glyphs he has little use for. Hero Races and Classes Before entering the dungeon, the player chooses a race (Human, Elf, etc.) and a class (Warrior, Wizard, etc.) of his hero. The race determines only the reward for converting a glyph, but classes can modify the game in a completely unique way. Other The game has a few other unmentioned mechanics. The player can enter special "challenge" dungeons, he can find altars and shops in the dungeon, but all that is far beyond the basics we'll need for our demonstration. As mentioned, more can be found at the Desktop Dungeons wiki [START_REF]Desktop Dungeons -DDwiki[END_REF]. AI Platform Desktop Dungeons has two parameters rarely seen in other similar games. Every action in the game is deterministic3 (the only unknown is the unrevealed part of the dungeon) and the game is limited to 20 × 20 grid cells and never extends beyond. These may allow for better and more efficient AI solutions, and may be advantageously utilized when using search techniques, planning, evaluating fitness functions, etc. On the other hand, Desktop Dungeons is a very interesting environment for AI. It is complex, difficult, and as such can show usefulness of various approaches. Achieving short-term and long-term goals must be balanced, and thus, simple approaches tend to not do well, and must be specifically adjusted for the task. Not much research has been done on solving rogue-like games altogether, only recently was a famous, classic title of this genre -NetHack -beaten by AI [START_REF] Krajíček | NetHack Bot Framework. Master's thesis[END_REF]. From the perspective of a PCG, Desktop Dungeons is similarly interesting. The size of the dungeon is very limited, so attention to detail should be paid. If one has an artificial player, the PCG could use him as a measure of quality, even at runtime, to produce only the levels the artificial player found enjoyable or challenging. This is why we created a programming interface (API) to Desktop Dungeons, together with a Java framework for easy AI and PCG prototyping and implementation. We used the alpha version of Desktop Dungeons, because it is more direct, contains less story content and player progress features, runs in a browser, and the main gameplay is essentially the same as in the full version. The API is a modified part of the game code that can connect to another application, such as our framework, via a WebSocket (TCP) protocol and provide access to the game by sending and receiving messages. A diagram of the API usage is portrayed in Fig. 2. The framework allows the user to focus on high-level programming, and have the technical details hidden from him. 
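To give a flavour of the plumbing, the snippet below opens a WebSocket connection and exchanges JSON messages with the game, using the standard Python websockets package. The endpoint, message fields and action names are invented placeholders; the real protocol is defined by the game-side API and wrapped by the Java framework.

```python
import asyncio
import json
import websockets  # third-party package: pip install websockets

async def one_exchange(uri: str = "ws://localhost:8080/dd") -> None:  # placeholder endpoint
    async with websockets.connect(uri) as ws:
        # Hypothetical request: ask the game for the currently revealed dungeon state.
        await ws.send(json.dumps({"type": "get_state"}))
        state = json.loads(await ws.recv())

        # Hypothetical command: make the hero explore towards a given grid cell.
        await ws.send(json.dumps({"type": "action", "name": "explore", "target": [3, 4]}))
        outcome = json.loads(await ws.recv())
        print(state, outcome)

if __name__ == "__main__":
    asyncio.run(one_exchange())
```

In normal use the framework hides this message traffic and lets the bot work at the level of dungeon elements and simulated actions.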
It efficiently keeps track of the dungeon elements, and provides full game simulation, assisting any search techniques and heuristics that might be desired. The developed artificial players can be tested against the default PCG of the game, which has the advantage of being designed to provide challenging levels for human players, or one can generate the dungeon on his own and submit it to the game. Intermediate ways can also be employed, such as editing the dungeons generated by the game's PCG to e.g. adjust the difficulty or reduce the complexity of the game. The framework is completely open-source and its repository can be found at https://bitbucket.org/woitee/desktopdungeons-java-framework. Evolutionary Approach To demonstrate the possibilities of the Desktop Dungeons API, we have implemented an evolutionary algorithm (EA) [START_REF] Mitchell | An Introduction to Genetic Algorithms[END_REF] to fine-tune greedy AI. A general explanation of EAs is, however, out of the scope of this paper. Simple Greedy Algorithm The original greedy algorithm was a simple strategy for each moment of the game. It is best described by a list of actions, ordered by priority. 1. Try picking up an item. 2. Try killing a monster (prefer strongest). 3. Explore. The hero tries to perform the highest rated applicable action, and when none exists, the run ends. Killing the monster was attempted by just simulating attacks, fireballs and drinking potions until one of the participants died. If successful, the sequence of actions was acted out. This can be modeled as a similar list of priority actions: 1. Try casting the Fireball spell. 2. Try attacking. 3. Try drinking a potion. Some actions have parameters, e.g. how many potions is the hero allowed to use against a certain level of monster. These were set intuitively and tuned by trial and error. This algorithm has yielded good results. Given enough time (weeks, tens of thousands of runs), this simple AI actually managed to luck out and kill the boss. This was very surprising, we thought the game would be much harder to beat, even with chance on our side. It was probably caused by the AI always calculating how to kill every monster it sees, which is tedious and error-prone for human players to do. Design of the Evolution We used two ordered lists of elementary strategies in the greedy approach, but we hand-designed them and probably have not done that optimally. This would become increasingly more difficult, had we added more strategies to the list. We'll solve this by using evolutionary algorithms. We'll call the strategies used to select actions in the game maingame strategies and the strategies used when trying to kill monsters attack strategies. Each strategy has preconditions (e.g. places to explore exists) and may have parameters. We used as many strategies as we could think of, which resulted in a total of 7 maingame strategies and 13 attack strategies. The evolutionary algorithm was tasked with ordering both lists of strategies, and setting their parameters. It should be emphasized, that this is far from an easy task. Small imperfections in the strategy settings accumulate over the run, and thus only the very refined individuals have some chance of slaying the final boss. However, the design makes the AI ignore some features of the game. It doesn't buy items in shops nor does it worship any gods. These mechanics are nevertheless quite advanced, and should not be needed to win the basic setting of the game. 
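Both the hand-designed lists and the evolved orderings can be realized as a priority list of applicability-checked strategies. The toy sketch below captures that control flow; the state representation and the three strategies are deliberately simplified stand-ins for the framework's real game state and actions.

```python
from typing import Callable, Dict, List

# A strategy inspects the game state and either acts (returning True) or declines (False).
Strategy = Callable[[Dict], bool]

def pick_up_item(state: Dict) -> bool:
    if state["items_adjacent"]:
        state["items_adjacent"] -= 1
        return True
    return False

def kill_monster(state: Dict) -> bool:
    # Stand-in for the real check, which simulates fireballs, attacks and potions.
    if state["killable_monsters"]:
        state["killable_monsters"] -= 1
        state["xp"] += 1
        return True
    return False

def explore(state: Dict) -> bool:
    if state["unexplored"]:
        state["unexplored"] -= 1
        return True
    return False

def run_priority_list(state: Dict, strategies: List[Strategy]) -> None:
    """Always act out the highest-priority applicable strategy; the run ends when none applies."""
    while any(strategy(state) for strategy in strategies):
        pass

# Priority order of the hand-designed maingame list: item > strongest monster > explore.
state = {"items_adjacent": 1, "killable_monsters": 2, "unexplored": 5, "xp": 0}
run_priority_list(state, [pick_up_item, kill_monster, explore])
print(state)  # everything consumed: the toy run ends once no strategy is applicable
```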
Using these mechanics can have back-biting effects if done improperly, so we just decided to ignore them to keep the complexity low. On a side note, this design is to a certain extent similar to linear genetic programming [START_REF] Brameier | Linear Genetic Programming[END_REF]. Fitness Function Several criteria could be considered when designing the fitness function. An easy solution would be to use the game's score, which is awarded after every run. However, the score takes into account some attributes that do not directly contribute towards winning the game, e.g. awarding bonuses for low completion time, or never dropping below 20% of health. We took inspiration from the game's scoring, but simplified it. Our basic fitness function evaluates the game's state at the end of the run and looks like this: 
fitness = 10 · xp + 150 · healthpotions + 75 · manapotions + health 
The main contributor is the total gained XP (experience points; good runs get awarded over a hundred), and additionally, we slightly reward leftover health and potions. We take these values from three runs and add them together. Three runs are still too few to keep the variance of repeated evaluations low, but the result is far better than evaluating only one run, and more runs than three would simply take too much time to complete. If the AI manages to kill the boss in any of the runs, we triple the fitness value of that run. This may look a little over the top, but slaying the final monster is very difficult, and if one of the individuals is capable of doing so, we want to spread its genes in the population. Note that we do not expect our AI to kill the boss reliably; a 5-10% chance is more what we are aiming for. We have tried a variety of fitness functions, taking into account other properties of the game state and with different weights. For a very long time, the performance of the bots was similar to the hand-designed greedy strategy. But, by analyzing more of the game, we constructed roughly the fitness function above and the performance improved hugely. The improvement lies in the observation of how the bots can improve during the course of evolution. Strong bots in the early stages will probably just use objectively good strategies, and not make complete blunders in strategy priorities, such as exploring the whole level before trying to kill anything. This should already make them capable of killing quite a few monsters. Then, the bots can improve and fine-tune their settings, to use fewer and fewer resources (mainly potions) to kill as many monsters as possible. And towards the late stage of evolution, the bots can play the game so effectively that they may still have enough potions and other resources to kill the final boss and beat the game. The current fitness function supports this improvement, because the fitness values of the hypothetical bots in subsequent stages of evolution rise continuously. After implementation, this was exactly the course the bots evolved through. Note that saving at least a few potions for the final boss fight is basically a necessary condition for success. Genetic Operators Priorities of the strategies are represented by floating point numbers in the [0, 1] interval. Together with the strategy's parameter values, we can encode an individual as just a few floating point numbers, integers and booleans. This representation allows us to use classical operators like one-/two-point crossovers and small change mutations. 
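The fitness computation itself is only a few lines once each evaluation run reports its end-of-run statistics. The record fields and the boss-kill flag below are our own naming, not taken from the implementation.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class RunResult:
    xp: int
    health: int
    health_potions: int
    mana_potions: int
    boss_killed: bool

def run_fitness(r: RunResult) -> float:
    """Weighted end-of-run score: XP dominates, leftover potions and health help a little."""
    value = 10 * r.xp + 150 * r.health_potions + 75 * r.mana_potions + r.health
    return 3 * value if r.boss_killed else value   # winning a run is rewarded heavily

def individual_fitness(runs: Iterable[RunResult]) -> float:
    """An individual's fitness is the sum over its (three) evaluation runs."""
    return sum(run_fitness(r) for r in runs)

# Example: two ordinary runs and one winning run.
runs = [RunResult(90, 12, 1, 0, False),
        RunResult(110, 3, 0, 1, False),
        RunResult(130, 20, 2, 1, True)]
print(individual_fitness(runs))
```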
These classical operators make good sense and work, but they are not necessarily optimal; after some trial and error, we started using a weighted average operator to cross over the priorities for better performance. The AIs evolved with these settings were just a little too greedy, often using all their potions in the early game, and even though they advanced far, they basically had no chance of beating the final boss. These strategies found quite a strong local optimum of the fitness, and we wanted to slightly punish them for it. We did so in two ways. Firstly, we rewarded leftover potions in our fitness value calculation, and secondly, a smart mutation was added that modifies a few individuals from the population to not use potions to kill monsters of lower level than 5. After some balancing, this has shown itself to be effective. Mating and natural selection were done by simple roulette, i.e. individuals were chosen with probability proportional to their fitness. This creates a rather low selection pressure, and together with a large enough number of individuals in a generation, the evolution should explore a large portion of the candidate space and tune the strategies finely. 
Fig. 3. Graphs describing the fitnesses of the evolution for each of our class-race settings. The three curves describe the total best fitness ever encountered, the best fitnesses averaged over all runs and the mean fitnesses averaged over all runs. The vertical line indicates the point where the AI has killed the boss and won the game at least once in three attempts. This fitness value is different for each setting, since some race-class combinations can gain more hitpoints or health potions than others, both of which directly increase their fitness (see Section 4.3). 
Results After experimentation, we settled on final runs with a population of 100 individuals, evolving through 30 generations. The population seemed large enough to explore the field well, and the generations sufficient for the population to converge. We ran the EA on 4 computers for a week, with a different combination of hero class and race on each computer. The result was a total of 62 runs; every hero class and race setting completed a minimum of 12 full runs. A single evaluation of an individual takes about 2 seconds, and a single whole run finishes in about 14 hours (Intel i5-3470 at 3.2 GHz, 4 GB RAM, two instances in parallel). The resulting data contain a lot of good strategies; their qualities can be seen in Fig. 3. Every combination of hero race and class managed to beat the boss at least once, and the strongest evolved individual kills the boss 72% of the time (averaged over 10000 runs). This is definitely more than we expected. Note that no AI can slay the boss 100% of the time, since the game's default PCG sometimes creates an obviously unbeatable level (e.g. all exits from the starting room surrounded by high-level monsters). The evolved strategies also vary from each other. Different race and class combinations employ different strategies, but variance occurs even among runs of the same configuration. This shows that Desktop Dungeons can be played in several ways, and that different initial settings require different approaches, which makes the game more interesting for a human. The different success rates of the configurations can also be used as a hint as to which race-class combinations are more difficult to play than others, either to balance them in the game design, or to recommend the easier ones to a beginner. 
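For reference, the variation and selection operators described in the previous section can be sketched as follows: roulette-wheel selection proportional to fitness, the weighted-average crossover on the [0, 1] priorities, a small-change mutation, and the smart mutation that forbids spending potions on monsters below level 5. The genome layout is a simplified stand-in for the real encoding of priorities and parameters.

```python
import random
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Genome:
    # Strategy priorities in [0, 1]; sorting on them yields the ordering of a priority list.
    priorities: Dict[str, float]
    # One example integer parameter: lowest monster level on which potions may be spent.
    min_potion_level: int = 1

def roulette_select(population: List[Genome], fitnesses: List[float]) -> Genome:
    """Pick an individual with probability proportional to its fitness (low selection pressure)."""
    return random.choices(population, weights=fitnesses, k=1)[0]

def weighted_average_crossover(a: Genome, b: Genome) -> Genome:
    """Blend the parents' priorities with a random weight; other parameters come from one parent."""
    w = random.random()
    blended = {k: w * a.priorities[k] + (1 - w) * b.priorities[k] for k in a.priorities}
    return Genome(blended, random.choice([a.min_potion_level, b.min_potion_level]))

def small_mutation(g: Genome, sigma: float = 0.05) -> Genome:
    """Nudge every priority a little and clamp it back into [0, 1]."""
    nudged = {k: min(1.0, max(0.0, v + random.gauss(0.0, sigma)))
              for k, v in g.priorities.items()}
    return Genome(nudged, g.min_potion_level)

def smart_mutation(g: Genome) -> Genome:
    """Force frugality on a few individuals: no potions against monsters below level 5."""
    return Genome(dict(g.priorities), min_potion_level=5)
```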
Conclusion We present a platform for creating AI and PCG for the rogue-like game Desktop Dungeons. As a demonstration, we created an artificial player by an EA adjusting greedy algorithms. This AI functioned better than the hand-made greedy algorithm, winning the game roughly three quarters of the time, compared to a winrate of much less than 1%, and being as successful as an average human player. This shows that the game's original PCG worked quite well, not generating a great abundance of impossible levels, yet still providing a good challenge. A lot of research is possible with this platform. AI could be improved by using more complex EAs, or created from scratch using any techniques, such as search, planning and others. The PCG may be improved to e.g. create more varied challenges for the player, adjust difficulty for stronger/weaker players or reduce the number of levels that are impossible to win. For evaluating the PCG, we could advantageously utilize the AI, and note some statistics, such as winrate, how often different strategies are employed, or the number of steps to solve a level. A combination of these would then create a rating function. Also, it would be very interesting to keep improving both the artificial player and the PCG iteratively against each other. 
Fig. 1. Screenshot of the dungeon, showing the hero, monsters, and an item (a health potion). The dark areas are the unexplored parts of the dungeon. 
Fig. 2. The API, as a part of the game, connects to an application using a WebSockets protocol and provides access to the game by receiving and sending messages. 
The game offers no save/load features, it is always replayed from beginning to end. Some spells and effects move monsters, but that is quite uncommon and can be ignored for our purpose. Some rare effects have probabilistic outcomes, but with a proper game setting, this may be completely ignored.
23,075
[ "1030231", "1030232" ]
[ "304738", "304738" ]